Tencent / PocketFlow

An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.
https://pocketflow.github.io

ResNet-20 CIFAR-10 compression experiment can't get a small model #30

Closed xiaomr closed 5 years ago

xiaomr commented 5 years ago

Hello, I ran the CIFAR-10 ResNet-20 example from the tutorial in channel pruning mode. After the compression and fine-tuning procedures, I get several checkpoint models in ./models, such as original_model.ckpt.*, pruned_model.ckpt.*, and best_model.ckpt.*. However, all of those models are 1.1 MB on my disk, so it seems that compression didn't work. I also inspected the variables in the checkpoint models, and the shapes of the conv kernels are not smaller either. How can I get the compressed model to test the inference speed-up from a checkpoint? I used the following command:

./scripts/run_seven.sh nets/resnet_at_cifar10_run.py \
    --learner channel \
    --batch_size_eval 64 \
    --cp_uniform_preserve_ratio 0.5 \
    --cp_prune_option uniform \
    --resnet_size 20

jiaxiang-wu commented 5 years ago

After channel pruning, the resulting checkpoint files still save weight tensors in their original sizes, including those all-zero channels. Therefore, the checkpoint files will not be smaller. We have provided a model conversion script, tools/conversion/export_pb_tflite_models.py, to generate .pb and .tflite models that are smaller after channel pruning. See the tutorial documentation for detailed usage.

psyyz10 commented 5 years ago

@xiaomr Or you can delete the channels whose elements are all zeros by yourself.
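A minimal sketch of what this manual approach could look like, assuming TensorFlow's HWIO kernel layout (`kh, kw, in_ch, out_ch`) and using NumPy; the function name `prune_zero_channels` is hypothetical, not part of PocketFlow:

```python
import numpy as np

def prune_zero_channels(kernel):
    """Remove output channels whose weights are entirely zero.

    kernel: conv weight of shape (kh, kw, in_ch, out_ch), TF layout.
    Returns the pruned kernel and the indices of the kept channels.
    """
    # A channel is prunable only if every weight feeding it is zero.
    keep = np.any(kernel != 0, axis=(0, 1, 2))
    return kernel[..., keep], np.flatnonzero(keep)

# Toy example: 3x3 conv with 4 output channels, two of them zeroed out.
rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3, 16, 4))
kernel[..., 1] = 0.0
kernel[..., 3] = 0.0
pruned, kept = prune_zero_channels(kernel)
print(pruned.shape)  # (3, 3, 16, 2)
print(list(kept))    # [0, 2]
```

Note that after dropping an output channel here, the corresponding input channel of the next layer's kernel (and the matching batch-norm parameters, if any) must be removed as well, or the shapes will no longer match.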

xiaomr commented 5 years ago

Thanks for the explanation, it works! With the help of tools/conversion/export_pb_tflite_models.py, I got the smaller compressed .pb. A few more questions. First, it seems this tool only supports the full-precision mode and the dis_chn_pruned mode, because both of them have the "images final" and "logits final" collections that this tool requires; when I use the tool on a channel pruning model, it throws an error. Second, in my experiment the inference times of the full-precision mode and the dis_chn_pruned mode are the same. In detail, it seems that during conversion many 1x1 convolutions are added for channel selection, which means pruning doesn't get an actual speed-up on GPU?

xiaomr commented 5 years ago

@jiaxiang-wu @psyyz10

jiaxiang-wu commented 5 years ago
  1. We are investigating the model conversion issue for the channel pruning module.
  2. After channel pruning, a 1x1 convolution will be added for channel selection. According to our evaluation results, this brings a speed-up for CPU-based inference with TF-Lite; the speed-up on GPU may be negligible.
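To see why channel selection can be expressed as a 1x1 convolution, here is a small NumPy sketch (an illustration of the general idea, not PocketFlow's actual conversion code): a 1x1 kernel holding a 0/1 selection matrix reduces to a matrix product over the channel axis, so it simply picks out the kept channels.

```python
import numpy as np

# Feature map in NHWC layout: 1 image, 4x4 spatial, 6 channels.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 4, 4, 6))

# Keep channels 0, 2, and 5. A 1x1 conv kernel of shape
# (1, 1, in_ch, out_ch) holding a 0/1 selection matrix does this.
keep = [0, 2, 5]
w = np.zeros((1, 1, 6, len(keep)))
for out_idx, in_idx in enumerate(keep):
    w[0, 0, in_idx, out_idx] = 1.0

# A 1x1 convolution is a matrix product over the channel axis.
y = np.tensordot(x, w[0, 0], axes=([3], [0]))

assert y.shape == (1, 4, 4, 3)
assert np.allclose(y, x[..., keep])
```

The selection itself is nearly free, but each inserted 1x1 convolution is still an extra op in the graph, which is one reason the GPU speed-up can be small.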
jiaxiang-wu commented 5 years ago

Bug: cannot convert the compressed model from channel pruning module to .pb & .tflite models.

xiaomr commented 5 years ago

@jiaxiang-wu Thank you! Another observation that confuses me: in full-precision mode, model_transformed.pb is much faster than model_original.pb, about a 5x speed-up on GPU. The channels are not pruned, so why does the graph editing (which seems to contain only dropout-related edits) accelerate inference so much?

jiaxiang-wu commented 5 years ago

@xiaomr Can you list the detailed time comparison, and describe how these two models are obtained?

xiaomr commented 5 years ago

@jiaxiang-wu The details are as follows:
I set the mode to full precision to train ResNet-20 on CIFAR-10, and after training I get the checkpoint files model.ckpt.data-00000-of-00001, model.ckpt.index, and model.ckpt.meta in ./model_eval. Then I use tools/conversion/export_pb_tflite_models.py to convert the model. I added inference-time measurement code to the test_pb_model function of the script: it measures the average time of sess.run over several samples, excluding the first 10 warm-up runs. The tool generates two .pb files in the directory, model_original.pb and model_transformed.pb; their times are 122 ms and 20 ms respectively on my GPU. I am confused about this acceleration.
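The measurement pattern described above (discard warm-up iterations, then average) can be sketched generically as follows; `run_fn` stands in for something like `lambda: sess.run(outputs, feed_dict=...)`, and the helper name `time_inference` is an illustration, not part of the conversion script:

```python
import time

def time_inference(run_fn, num_warmup=10, num_runs=50):
    """Average latency of run_fn, discarding warm-up iterations.

    Warm-up matters on GPU: the first runs include kernel
    compilation, cuDNN autotuning, and memory-pool growth.
    """
    for _ in range(num_warmup):
        run_fn()
    start = time.perf_counter()
    for _ in range(num_runs):
        run_fn()
    return (time.perf_counter() - start) / num_runs  # seconds per run

# Toy usage with a dummy workload:
avg = time_inference(lambda: sum(range(10000)))
print('%.6f sec/run' % avg)
```

One caveat when timing TensorFlow on GPU this way: `sess.run` on a fetched tensor does block until the result is copied back, so the pattern is valid, but batch size and input pipeline overhead should be held constant across the two models being compared.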

jiaxiang-wu commented 5 years ago

@xiaomr Could you please upload all the files under the models_eval directory, so that we can re-produce your issue?

xiaomr commented 5 years ago

models_eval.zip @jiaxiang-wu @psyyz10

psyyz10 commented 5 years ago

@xiaomr The full-precision learner does not do any compression; it is just a learner that trains a model. Please use other learners to compress the model.