Two primary approaches to pruning neural networks are Iterative Pruning and One-Step Pruning. Iterative Pruning trims model parameters gradually, alternating pruning with fine-tuning over multiple iterations until the desired sparsity level is reached, which helps preserve accuracy at each step. One-Step Pruning instead identifies a promising subnetwork in a single pass and then fine-tunes it, making it the faster option when the network meets its prerequisites.
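As a rough illustration of the difference, the sketch below contrasts the two loops using PyTorch's built-in magnitude pruning (`torch.nn.utils.prune`). The `fine_tune` helper, sparsity targets, and step count are placeholders for illustration, not part of any specific toolkit.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune


def fine_tune(model):
    """Placeholder: in practice, retrain the model for a few epochs here."""
    pass


def iterative_prune(model, target_sparsity=0.8, steps=4):
    """Alternate pruning and fine-tuning until the target sparsity is reached."""
    # Fraction to remove per step so the cumulative sparsity hits the target.
    per_step = 1.0 - (1.0 - target_sparsity) ** (1.0 / steps)
    for _ in range(steps):
        for module in model.modules():
            if isinstance(module, (nn.Linear, nn.Conv2d)):
                # `amount` is a fraction of the currently unpruned weights.
                prune.l1_unstructured(module, name="weight", amount=per_step)
        fine_tune(model)  # recover accuracy before pruning further


def one_step_prune(model, target_sparsity=0.8):
    """Prune to the target sparsity in one pass, then fine-tune once."""
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            prune.l1_unstructured(module, name="weight", amount=target_sparsity)
    fine_tune(model)
```

The iterative variant trades time for robustness: each small pruning step is followed by retraining, so accuracy never drops far before it is recovered.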
A comparison of these two methods is shown in the following table.
Criteria | Iterative Pruning | One-Step Pruning |
---|---|---|
Prerequisites | None | Batch Normalization layers in the network |
Time taken | Longer: multiple prune and fine-tune cycles | Shorter: a single pruning pass |
Retraining requirement | Required | Required |
Code organization | Evaluation function | Evaluation function, calibration function |
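The two functions named in the last row could look roughly like the sketch below. These are illustrative signatures only, assuming a PyTorch model and a standard validation `DataLoader`; `evaluate` and `calibrate` are hypothetical names, and the calibration step is written here as re-estimating BatchNorm running statistics after pruning, which is consistent with One-Step Pruning requiring Batch Normalization in the network.

```python
import torch


def evaluate(model, data_loader):
    """Hypothetical evaluation function: top-1 accuracy on a validation set."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for inputs, labels in data_loader:
            predictions = model(inputs).argmax(dim=1)
            correct += (predictions == labels).sum().item()
            total += labels.size(0)
    return correct / total


def calibrate(model, data_loader, num_batches=100):
    """Hypothetical calibration function: re-estimate BatchNorm running
    statistics with forward passes on calibration data (no weight updates)."""
    model.train()  # BatchNorm layers update running stats only in train mode
    with torch.no_grad():
        for i, (inputs, _) in enumerate(data_loader):
            if i >= num_batches:
                break
            model(inputs)
    model.eval()
```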