neuralmagic / sparseml

Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models
Apache License 2.0

Permanently Removing Pruned Weights #553

Closed vjsrinivas closed 2 years ago

vjsrinivas commented 2 years ago

Hi, thanks for the well-organized repository.

I've been following the classification tutorial that prunes and finetunes ResNet50 on Imagenette. The pruning seems to have worked, and both PTH and ONNX files were saved, but I noticed that the weight file size doesn't actually decrease compared to the original unpruned model. I'm assuming the pruned weights are just zeroed out? Is there a utility to actually remove the neurons with zeroed weights in PyTorch?
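A quick way to verify that assumption (a rough sketch; the checkpoint path is a placeholder for whatever the tutorial saved, and the state dict may be nested differently):

import torch

checkpoint = torch.load("pruned_resnet50.pth", map_location="cpu")
# Some checkpoints nest the weights under a "state_dict" key.
state_dict = checkpoint.get("state_dict", checkpoint)

total, zeros = 0, 0
for name, tensor in state_dict.items():
    if isinstance(tensor, torch.Tensor) and tensor.dtype.is_floating_point:
        total += tensor.numel()
        zeros += (tensor == 0).sum().item()
print(f"global sparsity: {zeros / total:.2%}")  # high % => zeroed in place, not removed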

clementpoiret commented 2 years ago

I'm replying mainly because I'd like the answer to this question too. My guess is that this behavior is controlled by the leave_enabled argument in the recipe, which matters mostly if you want to perform additional steps like quantization afterwards. Am I correct?

vjsrinivas commented 2 years ago

I tried setting leave_enabled: False in the Imagenette recipe but get the error: leave_enabled == True is only supported for GMPruningModifier.

Here is the recipe:

# General Epoch/LR variables
num_epochs: &num_epochs 10
lr: &lr 0.008

# Pruning variables
pruning_start_epoch: &pruning_start_epoch 1.0
pruning_end_epoch: &pruning_end_epoch 8.0
pruning_update_frequency: &pruning_update_frequency 0.5
init_sparsity: &init_sparsity 0.05

prune_low_target_sparsity: &prune_low_target_sparsity 0.8
prune_mid_target_sparsity: &prune_mid_target_sparsity 0.85
prune_high_target_sparsity: &prune_high_target_sparsity 0.9

training_modifiers:
  - !EpochRangeModifier
    start_epoch: 0.0
    end_epoch: *num_epochs

  - !SetLearningRateModifier
    start_epoch: 0.0
    learning_rate: *lr

pruning_modifiers:
  - !GMPruningModifier
    params:
      - sections.0.0.conv1.weight
      - sections.0.0.conv2.weight
      - sections.0.0.conv3.weight
      - sections.0.0.identity.conv.weight
      - sections.0.1.conv1.weight
      - sections.0.1.conv3.weight
      - sections.0.2.conv1.weight
      - sections.0.2.conv3.weight
      - sections.1.0.conv1.weight
      - sections.1.0.conv3.weight
      - sections.1.2.conv3.weight
      - sections.1.3.conv1.weight
      - sections.2.0.conv1.weight
      - sections.3.0.conv1.weight
      - classifier.fc.weight
    init_sparsity: *init_sparsity
    final_sparsity: *prune_low_target_sparsity
    start_epoch: *pruning_start_epoch
    end_epoch: *pruning_end_epoch
    update_frequency: *pruning_update_frequency
    leave_enabled: False

  - !GMPruningModifier
    params:
      - sections.0.1.conv2.weight
      - sections.0.2.conv2.weight
      - sections.1.0.identity.conv.weight
      - sections.1.1.conv1.weight
      - sections.1.1.conv2.weight
      - sections.1.1.conv3.weight
      - sections.1.2.conv1.weight
      - sections.1.2.conv2.weight
      - sections.1.3.conv2.weight
      - sections.1.3.conv3.weight
      - sections.2.0.conv3.weight
      - sections.2.0.identity.conv.weight
      - sections.2.1.conv1.weight
      - sections.2.1.conv3.weight
      - sections.2.2.conv1.weight
      - sections.2.2.conv3.weight
      - sections.2.3.conv1.weight
      - sections.2.3.conv3.weight
      - sections.2.4.conv1.weight
      - sections.2.4.conv3.weight
      - sections.2.5.conv1.weight
      - sections.2.5.conv3.weight
      - sections.3.1.conv1.weight
      - sections.3.2.conv1.weight
    init_sparsity: *init_sparsity
    final_sparsity: *prune_mid_target_sparsity
    start_epoch: *pruning_start_epoch
    end_epoch: *pruning_end_epoch
    update_frequency: *pruning_update_frequency
    leave_enabled: False

  - !GMPruningModifier
    params:
      - sections.1.0.conv2.weight
      - sections.2.0.conv2.weight
      - sections.2.1.conv2.weight
      - sections.2.2.conv2.weight
      - sections.2.3.conv2.weight
      - sections.2.4.conv2.weight
      - sections.2.5.conv2.weight
      - sections.3.0.conv2.weight
      - sections.3.0.conv3.weight
      - sections.3.0.identity.conv.weight
      - sections.3.1.conv2.weight
      - sections.3.1.conv3.weight
      - sections.3.2.conv2.weight
      - sections.3.2.conv3.weight
    init_sparsity: *init_sparsity
    final_sparsity: *prune_high_target_sparsity
    start_epoch: *pruning_start_epoch
    end_epoch: *pruning_end_epoch
    update_frequency: *pruning_update_frequency
    leave_enabled: False

The docs don't mention that leave_enabled cannot be False, but the validation code enforces it.

clementpoiret commented 2 years ago

That's strange, given that the doc says it "should be set to false if..."

markurtz commented 2 years ago

Hi @vjsrinivas and @clementpoiret, glad you've been able to use it and successfully prune some models! The leave_enabled flag is specifically used to reapply the masks when continuing to train afterwards. Without it, the weights will gradually diverge from their masked zero values as gradient descent continues to update them.
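To illustrate the idea (a minimal sketch, not SparseML's actual implementation):

import torch

# A pruned weight and the binary mask that produced it.
weight = torch.nn.Parameter(torch.randn(8, 8))
mask = (torch.rand(8, 8) > 0.9).float()  # keep roughly 10% of entries
with torch.no_grad():
    weight.mul_(mask)  # prune: zero out the masked positions

optimizer = torch.optim.SGD([weight], lr=0.1)
loss = (weight.sum() - 1.0) ** 2
loss.backward()
optimizer.step()  # gradients update *all* entries, so pruned ones drift from zero

with torch.no_grad():
    weight.mul_(mask)  # what leave_enabled effectively does: re-zero after each step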

For your question: the weights are pruned in an unstructured way, so there is currently no way to convert that into a structured reduction of the weight matrix dimensions. This means you'll need an engine that supports unstructured sparsity for performance and/or memory reduction, such as DeepSparse. We are also actively working on a TensorRT setup, which has support for sparsity on the newer Ampere GPUs.
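To make the unstructured/structured distinction concrete (a toy sketch):

import torch

w = torch.randn(64, 128)

# Unstructured: zero individual weights; the tensor shape (and file size) is unchanged.
unstructured = w.clone()
unstructured[torch.rand_like(w) < 0.9] = 0.0
print(unstructured.shape)  # torch.Size([64, 128])

# Structured: drop whole rows (output channels); the tensor actually shrinks.
keep_rows = torch.randperm(64)[:32]
structured = w[keep_rows]
print(structured.shape)  # torch.Size([32, 128])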

If you're looking to reduce the file sizes, you can run a compression algorithm over them to realize the gains from sparsity (see the sketch below). Finally, we are working on structured pruning support now and expect to land it in the next week if you're interested in going that route. Structured pruning removes whole channels or filters at once, enabling smaller model files and faster inference in any deployment environment. The downside is that the maximum sparsity you can achieve is much lower.
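On the file-size point, something along these lines shows the effect (a rough sketch; gzip stands in for any general-purpose compressor):

import gzip
import io
import torch

# Hypothetical example: a ~90%-sparse weight tensor saved the usual dense way.
weight = torch.randn(1024, 1024)
weight[torch.rand_like(weight) < 0.9] = 0.0

buffer = io.BytesIO()
torch.save(weight, buffer)
raw = buffer.getvalue()
compressed = gzip.compress(raw)
print(f"raw: {len(raw) / 1e6:.1f} MB, gzipped: {len(compressed) / 1e6:.1f} MB")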

Thanks, Mark

vjsrinivas commented 2 years ago

@markurtz Thank you for the information! I'll be eagerly awaiting the structured pruning updates. I'll close this since my main questions have been answered.