yiwenguo / Dynamic-Network-Surgery

Caffe implementation for dynamic network surgery.

Alexnet training process and hyperparameter #18

Closed kai-xie closed 7 years ago

kai-xie commented 7 years ago

Hi Yiwen,

I have been trying to prune AlexNet but have made little progress. Would you please share the detailed training process and hyperparameters?

The training tricks you provided in #12 are very useful, but I still cannot reproduce the results in your paper. Here are some problems I've encountered during the pruning process:

Thank you very much for your patience! It would be a great help if a more detailed training process could be provided.

Thanks!

yiwenguo commented 7 years ago

Hi @kai-xie, I don't quite understand. If you keep the same hyper-parameters for the conv layers in the second phase, wouldn't the algorithm keep pruning those layers? By pruning the layers separately, I meant not to further prune or splice the conv layers while pruning the fully connected layers (there are a number of ways to do this, but the simplest is to set iter_stop to zero or a negative number for those layers). Also, I didn't encounter the learning-rate problem you describe, but the pruning rates do change during training (obviously not to 100% or 0%); that is how the algorithm works. In your case, I suggest first trying the above, and perhaps larger c_rates for the fully connected layers, to see whether the pruning still fails.

kai-xie commented 7 years ago

@yiwenguo Thanks for your reply! I will try again to see how it works.

kai-xie commented 7 years ago

It worked when training the conv and ip layers separately by controlling iter_stop. Thank you very much! @yiwenguo

haithanhp commented 7 years ago

Hi @kai-xie, I have some concerns:

Thanks.

kai-xie commented 7 years ago

@HaiPhan1991

1. Pruning (fine-tuning)

Tips: as a proof of concept for the pruning, don't set c_rate too large; [-1, 1] is a good range to start with, or try MNIST first.
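As a rough, standalone illustration (not code from this repo) of why a moderate c_rate is a safer starting point, the numpy sketch below assumes the pruning threshold is roughly mu + c_rate * sigma of the layer's weight magnitudes, which is my reading of the paper and may not match the exact formula in the C++ code:

import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.01, size=100_000)   # stand-in for a trained layer
abs_w = np.abs(weights)
mu, sigma = abs_w.mean(), abs_w.std()

for c_rate in [-1.0, 0.0, 1.0, 3.0]:
    threshold = mu + c_rate * sigma
    pruned = np.mean(abs_w < threshold)          # fraction that would be masked out
    print(f"c_rate={c_rate:+.1f}  threshold={threshold:.5f}  pruned≈{pruned:.2%}")

With a large c_rate the threshold quickly covers almost the whole weight distribution, which is presumably why aggressive values make the pruning fail before the network can recover.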

2. Checking the pruning rate

I used Python scripts for this. The Python API is not provided in this repo, but you can work around that by compiling caffe.proto manually and then extracting the weight/bias and mask blobs with your own Python scripts.
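For reference, here is a minimal sketch of such a script, assuming you have compiled caffe.proto into caffe_pb2 with protoc and that a pruned layer stores its masks as extra blobs after the weight/bias blobs (the model path and the blob ordering are assumptions, so check the layer implementation before trusting the numbers):

import numpy as np
import caffe_pb2   # generated with: protoc --python_out=. caffe.proto

net = caffe_pb2.NetParameter()
with open("dns_iter_10000.caffemodel", "rb") as f:   # your DNS snapshot
    net.ParseFromString(f.read())

layers = net.layer if len(net.layer) else net.layers   # new vs. old proto format
for layer in layers:
    if len(layer.blobs) < 3:                 # no extra blobs -> not a pruned layer
        continue
    mask = np.array(layer.blobs[2].data)     # assumed: weight mask is the third blob
    kept = np.count_nonzero(mask)
    print(f"{layer.name}: {kept}/{mask.size} weights kept "
          f"({1.0 - kept / mask.size:.2%} pruned)")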

I am also trying to apply DNS to a newer version of Caffe so that the Python API can be used; here is my repo. In my version, after compiling caffe and pycaffe, prepare your compressed DNS caffemodel and run the following command from your CAFFE_ROOT (make sure the CAFFE_ROOT environment variable is set to the path of your caffe folder):

python compression_scripts/dns_to_normal.py <dns.prototxt> <dns_model.caffemodel> <target.prototxt> <output_target.caffemodel>

The compression rate is printed to the screen, and the output_target.caffemodel should be the same size as a normal caffemodel (about half the size of the dns_model.caffemodel), so it can be tested directly.

e.g.

python compression_scripts/dns_to_normal.py examples/mnist/dns_train_val.prototxt examples/mnist/dns_iter_10000.caffemodel examples/mnist/mnist_train_val.prototxt examples/mnist/mnist_test.caffemodel

My repo is still under development, so the files and folders are a bit messy. It works fine with a small pruning rate (i.e. a small c_rate) but can be buggy with a large c_rate; I am still working on it.

Hope this helps.

haithanhp commented 7 years ago

Awesome! Thank you for your detailed instructions. It's really helpful. I am working on the ImageNet dataset and hope it works well.

hex0102 commented 6 years ago

Hi @kai-xie, I have been working with DNS recently. For Problem 3 that you pointed out, did the constant setting of mu and std work in the end? I update mu and std every iteration, and I find that the pruning barely changes between iterations.
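To make the question concrete, here is a minimal sketch of the per-iteration update I have in mind; the mu + c_rate * sigma threshold and the 0.9/1.1 margins are my reading of the paper and may not match the actual implementation:

import numpy as np

def update_mask(weights, mask, c_rate):
    abs_w = np.abs(weights)
    t = abs_w.mean() + c_rate * abs_w.std()   # recomputed from the current weights
    a, b = 0.9 * t, 1.1 * t                   # lower / upper thresholds
    new_mask = mask.copy()
    new_mask[abs_w < a] = 0.0                 # prune
    new_mask[abs_w > b] = 1.0                 # splice back
    return new_mask                           # entries in [a, b] keep their old value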