Call sparse_model() to get a sparse model. This method finds all nn.Conv2d / nn.ConvTranspose2d and nn.BatchNorm2d modules and replaces them with DynamicConv2d / DynamicConvTranspose2d and DynamicBatchNorm2d. It also replaces the nn.Conv2d / nn.Linear layers that meet the sparsity condition with SparseConv2d / SparseLinear.
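The sketch below illustrates, in plain PyTorch, how such an in-place module swap can be performed. MaskedConv2d and replace_eligible_convs are hypothetical stand-ins used only for illustration; they are not the tool's DynamicConv2d / SparseConv2d implementation:

    import torch
    import torch.nn as nn

    class MaskedConv2d(nn.Conv2d):
        # Illustrative stand-in for a sparse convolution: applies a binary
        # mask to the weights in forward(). The real SparseConv2d is
        # provided by the tool and its implementation is not shown here.
        def forward(self, x):
            mask = getattr(self, "weight_mask", torch.ones_like(self.weight))
            return self._conv_forward(x, self.weight * mask, self.bias)

    def replace_eligible_convs(model: nn.Module) -> nn.Module:
        # Recursively swap nn.Conv2d layers that meet the sparsity condition.
        for name, child in model.named_children():
            replace_eligible_convs(child)
            # Example eligibility condition: more than 16 input channels
            # (see the block_size discussion below).
            if isinstance(child, nn.Conv2d) and child.in_channels > 16:
                new = MaskedConv2d(child.in_channels, child.out_channels,
                                   child.kernel_size, child.stride,
                                   child.padding, child.dilation,
                                   child.groups, child.bias is not None)
                new.load_state_dict(child.state_dict())
                setattr(model, name, new)
        return model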
This method supports simultaneous pruning of nn.Conv2d / nn.Linear weights and activations. The activation sparsity can be 0 or 0.5. When the activation sparsity is 0, the weight sparsity can be 0, 0.5, or 0.75; when the activation sparsity is 0.5, the weight sparsity can only be 0.75. block_size is the number of consecutive elements, along the input-channel dimension unfolded from the weight / activation, that form one pruning block. It is usually set to 4, 8, or 16. Consequently, only convolutions whose weights have more than 16 input channels are replaced with sparse convolutions.
sparse_model = sparse_pruner.sparse_model(w_sparsity=0.5, a_sparsity=0, block_size=4)
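As a rough illustration of what w_sparsity=0.5 with block_size=4 means for a convolution weight, the sketch below keeps the two largest-magnitude elements in every group of four consecutive input-channel elements. The helper and its masking strategy are assumptions for illustration, not the tool's actual kernel:

    import torch

    def block_sparsity_mask(weight, sparsity, block_size):
        # Conv2d weight is (out_ch, in_ch, kH, kW): move in_ch to the last
        # axis so each block covers consecutive input-channel elements.
        w = weight.permute(0, 2, 3, 1)
        blocks = w.reshape(*w.shape[:-1], -1, block_size)
        keep = int(block_size * (1.0 - sparsity))      # elements kept per block
        idx = blocks.abs().topk(keep, dim=-1).indices
        mask = torch.zeros_like(blocks).scatter_(-1, idx, 1.0)
        return mask.reshape(w.shape).permute(0, 3, 1, 2)

    w = torch.randn(32, 16, 3, 3)                      # in_ch is a multiple of block_size
    w_sparse = w * block_sparsity_mask(w, sparsity=0.5, block_size=4)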
Retraining the sparse model is done in the same way as training the baseline model. Knowledge distillation can be used to achieve better accuracy.
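A generic knowledge-distillation step for the retraining phase might look like the sketch below, where the dense baseline acts as the teacher and the sparse model as the student. The loss weighting and temperature are illustrative choices, not values prescribed by this document:

    import torch
    import torch.nn.functional as F

    def distillation_step(student, teacher, images, labels, optimizer,
                          T=4.0, alpha=0.5):
        # Combine the hard-label loss with a KL term that matches the sparse
        # student's logits to the dense teacher's softened logits.
        teacher.eval()
        with torch.no_grad():
            teacher_logits = teacher(images)
        student_logits = student(images)
        hard_loss = F.cross_entropy(student_logits, labels)
        soft_loss = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                             F.softmax(teacher_logits / T, dim=1),
                             reduction="batchmean") * (T * T)
        loss = alpha * hard_loss + (1.0 - alpha) * soft_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()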