Tencent / PocketFlow

An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.
https://pocketflow.github.io

Do you have a performance summary of the RL-based quantization on MobileNet? #115

Closed brisker closed 5 years ago

brisker commented 5 years ago

Do you have a performance summary of the RL-based quantization on MobileNet? You have described it here: https://pocketflow.github.io/reinforcement_learning/

haolibai commented 5 years ago

Experimentally, with a higher learning rate the fine-tuning of the quantized network is not stable, and sometimes the loss turns to NaN. You are also welcome to help us find out how to tune the quantized network with a larger learning rate.
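The instability described above can be illustrated with a toy example (a minimal plain-Python sketch, not PocketFlow code): gradient descent on a simple quadratic loss converges for small learning rates, but once the rate exceeds the stability threshold the iterates grow geometrically and the loss overflows.

```python
import math

def sgd_loss_after(steps: int, lr: float, w0: float = 1.0) -> float:
    """Run plain gradient descent on loss(w) = w^2 and return the final loss."""
    w = w0
    for _ in range(steps):
        grad = 2.0 * w      # d/dw of w^2
        w = w - lr * grad
    return w * w

# lr below the stability threshold (1.0 for this loss): converges toward 0
print(sgd_loss_after(50, 0.1))
# lr above the threshold: |w| doubles each step; the loss overflows to inf
print(sgd_loss_after(600, 1.5))
```

The same mechanism, amplified by the coarse gradients of low-bit quantized weights, is a plausible reason the fine-tuning loss turns to NaN at higher learning rates.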

brisker commented 5 years ago

@haolibai

In the ResNet-ImageNet RL experiment the accuracy may be normal, but when it comes to MobileNet, the reward hits zero too often during the roll-out process. I think this is due to the uniform quantization algorithm itself (it is not a state-of-the-art model quantization method; is it from a Google paper?). So I am curious about the performance you mentioned there. Were there also many zero accuracies during the roll-out process in that experiment?
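For context, a minimal sketch of min-max uniform quantization (the simple linear scheme; assumed here, not copied from PocketFlow's learner) shows why very low bit widths are so damaging: with 2 bits, every weight in a layer is snapped to one of only 4 values, which can easily collapse a sensitive network like MobileNet to zero accuracy.

```python
def uniform_quantize(weights, num_bits):
    """Min-max linear (uniform) quantization to 2**num_bits levels."""
    lo, hi = min(weights), max(weights)
    levels = (1 << num_bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    return [round((w - lo) / scale) * scale + lo for w in weights]

w = [-0.30, -0.12, 0.01, 0.07, 0.25]
print(uniform_quantize(w, 8))  # 256 levels: nearly lossless
print(uniform_quantize(w, 2))  # only 4 levels: large rounding error
```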

brisker commented 5 years ago

@haolibai I can now also reproduce the ResNet-18 4-bit RL-based experiment, with around 67% top-1 accuracy, but the MobileNet RL-based experiment still gets too many zero accuracies during roll-out training. Have you reproduced the MobileNet-ImageNet RL experiment, besides the ResNet one?

haolibai commented 5 years ago

Hi. Not yet. We are currently short of GPUs; we will inform you as soon as we get the result. You can also try adjusting some parameters, such as the learning rate for RL global training, and narrowing the gap between w_bit_min and w_bit_max so as to avoid too many 2-bit quantizations.
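To see why narrowing the gap helps, here is a hypothetical sketch (this is not PocketFlow's actual action parameterization; the mapping and names are assumptions) of how a continuous RL action could be mapped onto the allowed bit-width range: with a wide range like [2, 8] low actions produce 2-bit layers, while a narrower range like [4, 6] makes 2-bit assignments impossible by construction.

```python
def clip_bit_choice(action: float, w_bit_min: int, w_bit_max: int) -> int:
    """Map a continuous RL action in [0, 1] to an integer bit width in range."""
    span = w_bit_max - w_bit_min
    return w_bit_min + round(action * span)

# wide range [2, 8]: low actions land on fragile 2-bit layers
print([clip_bit_choice(a, 2, 8) for a in (0.0, 0.3, 0.7, 1.0)])
# narrower range [4, 6]: every choice stays at 4 bits or more
print([clip_bit_choice(a, 4, 6) for a in (0.0, 0.3, 0.7, 1.0)])
```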

brisker commented 5 years ago

@haolibai @jiaxiang-wu Has any bug been found yet in the RL-quantization code? BTW,

  1. I found a bug in the code here: https://github.com/Tencent/PocketFlow/blob/master/learners/uniform_quantization/learner.py#L279 I think it should be images.set_shape((FLAGS.batch_size_eval, images.shape[1], images.shape[2], images.shape[3])), not images.set_shape((FLAGS.batch_size, images.shape[1], images.shape[2], images.shape[3])).

jiaxiang-wu commented 5 years ago

Yes, it should be FLAGS.batch_size_eval instead of FLAGS.batch_size. Thanks for pointing it out. As for the potential bug in the RL-quantization code, our team does not have enough people to cover that, at least for the moment. If it is urgent, you can try to debug it yourself. @brisker
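To make the confirmed bug concrete, here is a plain-Python stand-in for `tf.Tensor.set_shape` (the checking logic is a sketch; the flag values are hypothetical): `set_shape` requires every declared dimension to match the tensor's actual static shape, so pinning the evaluation batch to the training `batch_size` fails whenever the two flags differ.

```python
def set_shape(actual_shape, declared_shape):
    """Stand-in for tf.Tensor.set_shape: each declared dim must match."""
    for actual, declared in zip(actual_shape, declared_shape):
        if declared is not None and actual != declared:
            raise ValueError(
                f"shape mismatch: got {actual_shape}, declared {declared_shape}")
    return declared_shape

batch_size, batch_size_eval = 64, 100          # hypothetical FLAGS values
eval_images_shape = (batch_size_eval, 224, 224, 3)

# buggy line: pins the eval batch to the training batch size -> error
try:
    set_shape(eval_images_shape, (batch_size, 224, 224, 3))
except ValueError as e:
    print("bug:", e)

# fixed line: use batch_size_eval in the evaluation input pipeline
print("ok:", set_shape(eval_images_shape, (batch_size_eval, 224, 224, 3)))
```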

brisker commented 5 years ago

@jiaxiang-wu When will the 2.0 version be released? What will the new features be?

jiaxiang-wu commented 5 years ago

@brisker Possibly at the end of 2019Q1. We are now adding support for object detection models (e.g. SSD) and RNN/LSTM models.

xiaomr commented 5 years ago

@brisker @haolibai Have you reproduced the MobileNet-ImageNet RL experiment as reported here? I followed the default scripts and encountered the same zero-accuracy issue as @brisker.