IntelLabs / distiller

Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller
Apache License 2.0

Some confusion about splicing-pruning #538

Open luyuxiao opened 4 years ago

luyuxiao commented 4 years ago

You compute the mean and standard deviation of the parameter once and cache them. But the paper says that not only the important parameters are updated, but also the ones corresponding to zero entries of the mask, which means the distribution of the parameters is constantly changing. I also found that you only update the 'important' parameters. Where does the code reflect the authors' special parameter-update method?

[screenshot of the code in question]
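For context, here is my understanding of the paper's update rule as a minimal PyTorch sketch. This is an illustration, not Distiller's actual implementation: the `SplicingMask` and `update_mask` names, the `sensitivity`/`slack` hyper-parameters, and the exact threshold formula are my own assumptions. The key point is that the mask is applied only in the forward pass, while the gradient of the loss with respect to the masked product is applied to the dense weight, so even pruned entries keep moving and can be spliced back when the mask is recomputed from the current weight statistics.

```python
import torch

class SplicingMask(torch.autograd.Function):
    """Forward: use the masked weights. Backward: pass the gradient
    through to the dense weight unchanged, i.e. W <- W - lr * dL/d(W*T),
    so currently-pruned entries are updated too."""

    @staticmethod
    def forward(ctx, weight, mask):
        return weight * mask

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: gradient w.r.t. the dense weight, no masking.
        return grad_output, None


def update_mask(weight, mask, sensitivity=0.5, slack=0.1):
    """Recompute the mask from the *current* mean/std of the weights.
    Hypothetical threshold formula; the paper uses two layer-wise
    thresholds a < b to give pruning/splicing some hysteresis."""
    t = weight.abs().mean() + sensitivity * weight.abs().std()
    a, b = (1.0 - slack) * t, (1.0 + slack) * t
    new_mask = mask.clone()
    new_mask[weight.abs() < a] = 0.0    # prune weak connections
    new_mask[weight.abs() >= b] = 1.0   # splice strong ones back in
    return new_mask


# Toy training loop showing where each piece is used.
weight = torch.randn(64, 64, requires_grad=True)
mask = torch.ones_like(weight)
x = torch.randn(8, 64)

for step in range(100):
    mask = update_mask(weight.detach(), mask)     # thresholds from current stats
    y = x @ SplicingMask.apply(weight, mask).t()  # forward uses masked weights
    loss = y.pow(2).mean()
    loss.backward()                               # dense gradient reaches `weight`
    with torch.no_grad():
        weight -= 0.01 * weight.grad
        weight.grad = None
```

My question is where Distiller implements the equivalent of this dense-weight update and the per-step recomputation of the thresholds, given that the mean and std appear to be computed and cached only once.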