leimao / PyTorch-Quantization-Aware-Training

PyTorch Quantization Aware Training Example
https://leimao.github.io/blog/PyTorch-Quantization-Aware-Training/
MIT License

post training static quantization #3

Closed apanand14 closed 1 year ago

apanand14 commented 2 years ago

Hello, thank you for providing your code. If I only want to do post-training static quantization, should I use the same code? I mean, you are doing quantization-aware training here. Please let me know which parts of your code I should use. Thank you in advance.

leimao commented 2 years ago

Please read https://leimao.github.io/blog/PyTorch-Static-Quantization/
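For reference, eager-mode post-training static quantization in PyTorch follows a prepare → calibrate → convert flow. A minimal sketch is below; the model, shapes, and calibration loop are illustrative placeholders, not code from this repository:

```python
import torch
import torch.nn as nn

# Hypothetical small CPU model with quant/dequant stubs marking the
# boundaries of the quantized region.
class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = SmallNet().eval()

# Attach a quantization config and insert observers.
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
prepared = torch.quantization.prepare(model)

# Calibration: run representative inputs through the prepared model so
# the observers can record activation ranges, which determine the
# quantization scales and zero points. Random data stands in for a
# real calibration set here.
with torch.no_grad():
    for _ in range(8):
        prepared(torch.randn(1, 3, 32, 32))

# Convert the observed model to an actual int8 quantized model.
quantized = torch.quantization.convert(prepared)
output = quantized(torch.randn(1, 3, 32, 32))
print(output.shape)
```

This also shows why calibration is needed: unlike weights, activation ranges depend on the data, so the observers must see representative inputs before `convert` can pick sensible scales.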

apanand14 commented 2 years ago

Thank you for your answer! I understood most of it, but I have one more question. I am using Faster RCNN with a ResNet34 backbone. The procedure for ResNet34 should be the same as for ResNet18, but do I need to take care of the Faster RCNN module as well? And why do we need calibration for a quantized model? It would be helpful if you could answer these doubts. Thank you in advance.

leimao commented 2 years ago

I suggest you understand what quantization is before using it. Please read https://leimao.github.io/article/Neural-Networks-Quantization/