Hi, it seems that there are no automatic checks running at the moment; it says "4 workflows awaiting approval". We pulled again, but the situation is the same. Could it be reviewed by the reviewers directly? Thanks in advance for checking!
Pruning and Training for MASE - Group 3
Functionality
Basic Elements:
Extensions:
Getting started
How to run
Please make sure to use a GPU environment for the experiments.
Please execute all of our programs in the machop directory (`mase/machop`). If you need the pre-trained model, please place the pre-trained VGG7 checkpoint at `mase/test-accu-0.9332.ckpt`.
Our test function is `test_group3.py`, which runs inside the existing testing framework. You can run it from the command line, and you can also execute the transform function via the command line; both invocations are sketched below.
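The exact commands were not preserved in this description. As a sketch, assuming machop's standard `./ch` entry point and that everything is run from `mase/machop`:

```bash
# Run our test inside the existing testing framework
# (the script location is an assumption; adjust to your checkout)
python test_group3.py

# Run the transform flow via the machop CLI, using our toml configuration
./ch transform --config configs/example/prune_retrain_group3.toml
```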
You may change the configuration as you wish.
As there are many configuration options, we keep them in a toml file at `configs/example/prune_retrain_group3.toml`; please refer to that file for the default parameter values and to change them. Additionally, we provide a demo notebook, `group3.ipynb`, which is readily executable on Colab (change `load_model` to the Colab path).
Example output
Below is a demonstration of actual output under a particular pruning configuration:
In summary, the model can maintain, or even slightly improve, its validation accuracy while undergoing significant compression, which is the desired outcome.
Note: realizing the model-size reduction on actual hardware requires compiler-level modifications. The theoretical strategies are still a major advancement, with potentially drastic size reductions once the compiler is adjusted. Please refer to the detailed discussion in the report.
Model Storage
Note that we save the model after every pass.
For the prune, quantize, and train passes:
If you run the test, find the saved models at `mase_output/group3_test`.
If you run the transform command instead, find the saved models at `mase_output/{project}/software/transforms`.
For Huffman encoding, find the saved model at `machop/chop/huffman_model.ckpt`.
Implementation Overview
Please refer to the Methodology section of the report for detailed illustrations and visualizations.
Overall Pipeline
Each component within the pipeline is executed through an autonomous pass within `transform.py`, allowing for the flexible selection and combination of passes to suit specific requirements; a minimal sketch of this composition follows.
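To illustrate the idea (this is not the actual `transform.py` code, just a sketch of the composition pattern), each pass can be viewed as a function from model to model, and the pipeline applies a selected list of passes in order:

```python
# Illustrative sketch only; machop's real passes take extra
# arguments and configuration from the toml file.
from typing import Callable, List

import torch.nn as nn

Pass = Callable[[nn.Module], nn.Module]

def run_pipeline(model: nn.Module, passes: List[Pass]) -> nn.Module:
    # Apply each selected pass in sequence; any subset or order works.
    for p in passes:
        model = p(model)
    return model

# e.g. run_pipeline(model, [prune_pass, retrain_pass, quantize_pass, huffman_pass])
```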
Pruning Methods
Specifically, below are all the pruning methods we've implemented:
Weight pruning:
Different granularities of weight pruning:
Activation pruning:
Different focus of activation pruning:
Please refer to `pruning_methods.py` for their specific names. For a detailed analysis of their principles and performance, as well as the multiple evaluation metrics, please refer to the report.
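As a minimal, generic sketch of the core idea (element-wise magnitude pruning; this is not the implementation in `pruning_methods.py`):

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight.clone()
    # Threshold at the k-th smallest absolute value;
    # everything at or below it is pruned.
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = weight.abs() > threshold
    return weight * mask

w = torch.randn(64, 64)
pruned = magnitude_prune(w, sparsity=0.5)
print(f"achieved sparsity: {(pruned == 0).float().mean():.2f}")
```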
Training
We use PyTorch Lightning for model training.
The model is constructed with the specified architecture and loaded with pre-pruned weights.
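As a minimal sketch of this step (the tiny module below is purely illustrative; our actual pipeline wraps VGG7 and loads the pruned checkpoint):

```python
import torch
import torch.nn as nn
import pytorch_lightning as pl

# Stand-in LightningModule; the real pipeline wraps the pruned VGG7.
class LitNet(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.cross_entropy(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=1e-2)

model = LitNet()
# In the real pipeline, the pre-pruned weights are loaded here, e.g.:
# model.load_state_dict(torch.load("path/to/pruned.ckpt")["state_dict"])

trainer = pl.Trainer(max_epochs=1, accelerator="auto")
# trainer.fit(model, train_dataloaders=train_loader)  # supply the project's DataLoader
```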
Post-prune Quantization & Huffman Coding
Additionally, inspired by the methodology of "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding", we've implemented post-prune quantization and Huffman coding.
The post-prune quantization converts 32-bit floating-point data into an 8-bit fixed-point format with 4 bits allocated to the fractional part (so a weight of 0.7, for example, is stored as round(0.7 × 16) = 11 and decodes to 0.6875).
Huffman encoding takes advantage of the newly quantized data: it uses variable-length codes that assign shorter bit strings to more common weight values, compressing the model further.
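A minimal sketch of the two steps (not our exact implementation; the quantizer and code builder below are illustrative):

```python
import collections
import heapq

import torch

def quantize_q4_4(w: torch.Tensor) -> torch.Tensor:
    # 8-bit fixed point with 4 fractional bits: multiply by 2**4, round,
    # clamp to the signed 8-bit integer range, then scale back.
    return torch.clamp(torch.round(w * 16), -128, 127) / 16

def huffman_code(symbols):
    """Assign a prefix-free bit string to each distinct symbol;
    more frequent symbols receive shorter codes."""
    freq = collections.Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single distinct value
        return {next(iter(freq)): "0"}
    heap = [[count, i, {sym: ""}] for i, (sym, count) in enumerate(freq.items())]
    heapq.heapify(heap)
    uid = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        codes = {s: "0" + c for s, c in lo[2].items()}
        codes.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], uid, codes])
        uid += 1
    return heap[0][2]

w = torch.randn(10_000)
q = quantize_q4_4(w)
codes = huffman_code(q.tolist())
total_bits = sum(len(codes[v]) for v in q.tolist())
print(f"average bits per weight: {total_bits / q.numel():.2f} (vs. 8 fixed)")
```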
By default, these two further compression techniques are enabled, but you can disable them by commenting out all `passes.quantize` entries and setting `is_huffman = false`. Note that quantization must be enabled for Huffman encoding to be valid.
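As a sketch of where those settings live in the toml file (the surrounding layout is an assumption; the key names come from our config):

```toml
# Comment out the quantization pass section to disable it:
# [passes.quantize]
# ...

# Disable Huffman encoding:
is_huffman = false
```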
Train from scratch & transferability to other models and datasets
By default, the model loads the pre-trained VGG7 checkpoint for pruning and training. If desired, you can opt to train from scratch by setting `load_name = None`. Moreover, you are free to select different datasets and models: the ResNet18 network and colored-MNIST are fully compatible with our pipeline and yield satisfactory results. To use them, please modify the toml configuration accordingly.
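The exact snippet was not preserved in this description; assuming machop's usual top-level keys (hypothetical spellings, so verify them against `configs/example/prune_retrain_group3.toml`), the change looks roughly like:

```toml
# Hypothetical key names; verify against the shipped toml config.
model = "resnet18"
dataset = "mnist"   # colored-MNIST; the exact dataset name is an assumption
```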
Contact
Feel free to contact us at ruiqi.shen23@imperial.ac.uk if you encounter any problems.