aim-uofa / AdelaiDet

AdelaiDet is an open source toolbox for multiple instance-level detection and recognition tasks.
https://git.io/AdelaiDet

Unable to quantize ABCNet #143

Closed Rahul-Sridhar closed 4 years ago

Rahul-Sridhar commented 4 years ago

Thank you for your work!

I have tried to quantize ABCNet but have been unsuccessful for the reasons below:

Yuliang-Liu commented 4 years ago

@blueardour Would you help out with this issue?

blueardour commented 4 years ago

@Rahul-Sridhar Hi, may I ask what the purpose of the quantization is?

It seems you were using PyTorch's default quantization utilities to compress the model. As far as I know, they implement 8-bit fixed-point quantization, and the framework is only responsible for producing the 8-bit model. If you want it to run faster on a dedicated platform, for example a Raspberry Pi ARM board, you may need to implement the 8-bit inference yourself.

If you only want the 8-bit model, could you show what kind of error you met?
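For reference, here is a minimal sketch of PyTorch's eager-mode post-training static quantization workflow on a toy network. The model `TinyConvNet`, the input sizes, and the calibration loop are made up for illustration, and it assumes a PyTorch build with the qnnpack backend; ABCNet itself contains custom ops (deformable convolutions, BezierAlign) that this workflow will not handle out of the box.

```python
import torch
import torch.nn as nn
import torch.quantization as tq

# Toy stand-in model, NOT ABCNet: just enough structure to show the workflow.
class TinyConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()      # fp32 -> int8 boundary
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = tq.DeQuantStub()  # int8 -> fp32 boundary

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = TinyConvNet().eval()

# qnnpack is the quantized backend aimed at ARM targets such as the Raspberry Pi.
torch.backends.quantized.engine = 'qnnpack'
model.qconfig = tq.get_default_qconfig('qnnpack')

tq.prepare(model, inplace=True)          # insert observers
with torch.no_grad():                    # calibrate on a few representative batches
    for _ in range(8):
        model(torch.randn(1, 3, 64, 64))
tq.convert(model, inplace=True)          # swap modules for int8 implementations
print(model)                             # shows QuantizedConv2d etc. if conversion succeeded
```

Note that this only produces the int8 model; how fast it actually runs on the target board still depends on the backend kernels available there.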

Rahul-Sridhar commented 4 years ago

@blueardour I wanted to convert ABCNet to int8 format with the PyTorch quantization framework for deployment on a Raspberry Pi. While doing so I ran into the following problem:

blueardour commented 4 years ago

Hi, @Rahul-Sridhar

Regarding your question:

Rahul-Sridhar commented 4 years ago

Hi @blueardour,

blueardour commented 4 years ago

Hello @Rahul-Sridhar,

From my perspective, post-training quantization (or zero-shot quantization) carries a higher risk of a sizable performance gap. I would recommend it only when the training dataset is not available. When conducting zero-shot quantization, cutting-edge methods such as DFQ are more likely to yield reasonable performance than PyTorch's native 8-bit quantization.

If you have the training dataset and want the accuracy to be as high as possible, fine-tuning with algorithms such as DoReFa-Net/LSQ/LQ-Nets is recommended. For a wide range of tasks, 8-bit is enough to match or even exceed the full-precision model (quantization has some regularization effect). The aforementioned project supports different quantization algorithms, and many of them (LSQ is recommended) support arbitrary bit-widths, which of course includes 8-bit.
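In case it helps, below is a minimal, illustrative sketch of an LSQ-style fake-quantizer with a learnable step size and a straight-through estimator, written from the idea in the LSQ paper. The class name `LSQQuantizer` and its defaults are hypothetical and not taken from any of the projects mentioned above.

```python
import math
import torch
import torch.nn as nn

class LSQQuantizer(nn.Module):
    """Fake-quantizes its input to `bits` bits with a learnable step size (LSQ-style)."""

    def __init__(self, bits: int = 8, signed: bool = True):
        super().__init__()
        if signed:
            self.qn, self.qp = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
        else:
            self.qn, self.qp = 0, 2 ** bits - 1
        self.step = nn.Parameter(torch.tensor(1.0))  # learnable step size s

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gradient scale suggested by the LSQ paper: 1 / sqrt(numel * Qp).
        g = 1.0 / math.sqrt(x.numel() * self.qp)
        # Scale the gradient flowing into `step` without changing its value.
        s = self.step * g + (self.step - self.step * g).detach()
        # Clamp to the quantization range, round with a straight-through estimator,
        # then rescale back to the original range.
        v = torch.clamp(x / s, self.qn, self.qp)
        v_q = (v.round() - v).detach() + v
        return v_q * s

# Hypothetical usage during quantization-aware fine-tuning:
quant = LSQQuantizer(bits=8)
y = quant(torch.randn(4, 16))  # fake-quantized activations; gradients also flow into `step`
```

During fine-tuning you would insert such modules around the layers you want to quantize and train the step sizes jointly with the weights.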

Rahul-Sridhar commented 4 years ago

Hi @blueardour,

I will try the above techniques. Thanks for suggesting these resources; they are very helpful.