SKKU-ESLAB / Auto-Compression

Automatic DNN compression tool with various model compression and neural architecture search techniques

DNAS-Compression

Model compression techniques with differentiable neural architecture search.

Currently, pruning and quantization are supported.

References

This project builds on a reproduced implementation of FBNet.
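
As in FBNet, the architecture parameters are trained with a resource-aware loss. Below is a minimal sketch of what the FLOPs-weighted objective could look like: it takes FBNet's multiplicative latency loss and swaps latency for expected FLOPs, since this repo's config exposes alpha and beta for a "flops loss". The exact formulation in this repo may differ; `supernet_loss` and its argument names are hypothetical.

```python
import torch

def supernet_loss(ce_loss, expected_flops, alpha, beta):
    """FBNet-style resource-aware objective (sketch, not the repo's exact code).

    ce_loss:        cross-entropy loss of the sampled sub-network
    expected_flops: differentiable expectation of FLOPs over the
                    architecture distribution (e.g. Gumbel-softmax weights)
    alpha, beta:    weighting hyperparameters from
                    supernet_functions/config_for_supernet.py
    """
    # Multiplicative form from the FBNet paper: CE * alpha * log(resource)^beta
    return ce_loss * alpha * torch.log(expected_flops) ** beta
```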

Usage

  1. Choose which type of compression to run

    • Pruning (channel / group)
    • Quantization
    • Both pruning and quantization
  2. Edit the hyperparameters in supernet_functions/config_for_supernet.py (a hedged sketch of such a config appears after this list)

    • Usual hyperparameters
      • batch size
      • learning rate
      • epochs
    • Special hyperparameters (pay attention to these!)
      • alpha and beta, which weight the FLOPs term of the loss (see the sketch above)
      • w_share_in_train
      • thetas_lr
      • train_thetas_from_the_epoch
  3. Run supernet_main_file.py

    Quick start command:

    python3 supernet_main_file.py --train_or_sample train
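
For step 2, here is a hedged sketch of the kind of settings the config file holds. The key names mirror the hyperparameters listed above, but the grouping, default values, and `CONFIG_SUPERNET` structure are assumptions; check supernet_functions/config_for_supernet.py for the real layout.

```python
# Hypothetical excerpt; the actual file may organize these differently.
CONFIG_SUPERNET = {
    'dataloading': {
        'batch_size': 256,
    },
    'optimizer': {
        'w_lr': 0.1,        # learning rate for the shared supernet weights
        'thetas_lr': 0.01,  # learning rate for the architecture parameters
    },
    'loss': {
        'alpha': 0.2,       # scales the FLOPs term of the loss
        'beta': 0.6,        # exponent of the FLOPs term
    },
    'train_settings': {
        'cnt_epochs': 90,
        # epoch at which the architecture parameters start training
        'train_thetas_from_the_epoch': 20,
        # share of each epoch spent updating the shared weights
        'w_share_in_train': 0.8,
    },
}
```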
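After training, the --train_or_sample flag presumably also accepts sample for extracting an architecture from the trained supernet; check the argument parser in supernet_main_file.py for the exact sampling options.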