Closed tongyutyr closed 5 years ago
Hi @tongyutyr, your accuracy should only drop a few percentage points after the first quantization epoch. The above image shows the validation accuracy over the training process. I could not upload the pretrained models to GitHub due to the file size limitations, but I have uploaded them elsewhere now: https://cloud.ilabt.imec.be/index.php/s/rHLQK8jqMZB55GY
Thanks. Do I need to modify the code? I just ran the imagenet_quantized.py that you provided. I am very confused about that.
The only thing you should have to modify is the path to the ImageNet dataset. No other modifications are required.
Why is the model size not reduced?
This paper, and thus the implementation, focuses more on the algorithm than on actual deployment to specific hardware. All parameters are therefore still stored as 32-bit floats, but the number of unique values they can take is artificially limited to what an N-bit representation could encode, so the file size on disk does not shrink.
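To make this concrete, here is a minimal NumPy sketch of the idea of simulated quantization: weights stay `float32` in memory, but each one is snapped to a small set of signed power-of-two levels (plus zero), roughly in the spirit of Incremental Network Quantization. The function name and the exact level construction are illustrative assumptions, not the authors' code.

```python
import numpy as np

def simulate_quantization(weights, n_bits=5):
    """Snap float32 weights to a limited set of values.

    The result is still float32 (no storage savings), but the number of
    distinct magnitudes is bounded by what n_bits could represent.
    Illustrative sketch only; the level set here is an assumption.
    """
    # One code is reserved for zero; the rest are powers of two,
    # starting at the largest power needed to cover the weight range.
    n_levels = 2 ** (n_bits - 1) - 1
    max_exp = int(np.floor(np.log2(np.max(np.abs(weights)))))
    exponents = max_exp - np.arange(n_levels)
    levels = np.concatenate(([0.0], 2.0 ** exponents))

    # Snap each weight magnitude to the nearest candidate level,
    # keeping its original sign.
    signs = np.sign(weights)
    mags = np.abs(weights)
    idx = np.argmin(np.abs(mags[:, None] - levels[None, :]), axis=1)
    return (signs * levels[idx]).astype(np.float32)

w = np.random.randn(1000).astype(np.float32)
wq = simulate_quantization(w, n_bits=5)
# wq is still float32, but uses only a handful of distinct magnitudes.
```

Realizing an actual size reduction would require packing those level indices into N-bit integers, which is a deployment concern outside the scope of this repository.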
See also this issue in the implementation of the authors of the paper: https://github.com/AojunZhou/Incremental-Network-Quantization/issues/36
Hello, after the first quantization epoch my acc@1 is 24.65. Is that normal? I did not modify the code you provided. Can you provide the final quantized ResNet-18 model? Thanks.