TexasInstruments / edgeai-torchvision

This repository has been moved. The new location is https://github.com/TexasInstruments/edgeai-tensorlab. See also https://github.com/TexasInstruments/edgeai.

Accuracy drop in Centernet model QAT training #17

Open sathyapatel opened 1 year ago

sathyapatel commented 1 year ago

Hi,

I got a ~4% accuracy drop in a QAT-trained CenterNet model wrapped with xnn.QuantTrainModule. I have also tried the other utility functions you mentioned, such as xnn.utils.freeze_bn(model) and xnn.layers.freeze_quant_range(model), but there is still no improvement in evaluation.
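For reference, here is a minimal sketch of my setup (the xnn import path and the QuantTrainModule call follow the edgeai-torchvision quantization examples; create_centernet_model, train_one_epoch, train_loader and num_epochs are placeholders for my own code):

```python
# Minimal sketch of the QAT setup described above (import path and
# QuantTrainModule usage follow the edgeai-torchvision quantization docs;
# the helpers below are placeholders, not part of the repository).
import torch
from torchvision.edgeailite import xnn

model = create_centernet_model()                    # placeholder: float CenterNet model
dummy_input = torch.rand(1, 3, 512, 512)            # assumed input resolution
model = xnn.quantize.QuantTrainModule(model, dummy_input=dummy_input)

for epoch in range(num_epochs):
    # Part-way through training, freeze BN statistics and quantization
    # ranges so the remaining epochs fine-tune the weights only.
    if epoch == num_epochs // 2:
        xnn.utils.freeze_bn(model)
        xnn.layers.freeze_quant_range(model)
    train_one_epoch(model, train_loader)            # placeholder training loop
```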

Original (float) CenterNet weights: precision 90%, recall 90%.
QAT-trained CenterNet weights: precision 86.6%, recall 74%.

Can you suggest how to improve the accuracy?

leadcain commented 1 year ago

Anchor-free detectors are sensitive to 8-bit QAT because of quantization error, and the error is amplified by operations such as exp() on the object coordinate/size regression outputs.

If you visualize the inference results, you will see the effect of the quantization error.
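As a toy numeric check (plain NumPy, not TI code): a half-step rounding error before exp() is scaled by roughly exp(x) at the output, so the regression for large boxes suffers most.

```python
# Toy illustration: quantization error at the input of exp() grows with exp(x).
import numpy as np

step = 8.0 / 255                      # assumed 8-bit step over a [-4, 4] activation range
for x in (1.0, 2.0, 4.0):             # example log-size regression outputs
    x_q = x - step / 2                # worst-case half-step rounding error
    err = np.exp(x) - np.exp(x_q)
    print(f"x={x:.1f}  error after exp() = {err:.4f}  (~ exp(x)*step/2 = {np.exp(x) * step / 2:.4f})")
```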

There are a few common options:

  1. Run the first and last layers in 16-bit (see the sketch after this list). I am trying this myself, and I also use my own custom anchor-free detector.

  2. Change the detection head to an anchor-based head and set more anchors, like the YOLO series, to be more robust to quantization error.

  3. Use PTQ with mixed precision. TIDL PTQ is a quite stable approach; my custom multi-task model used it and the results were not bad, but I still want a more accurate quantized model for latency reasons.
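For options 1 and 3, a hedged sketch of TIDL PTQ compile options with mixed precision (the option keys follow the edgeai-tidl-tools OSRT examples; the paths and the 16-bit tensor names are assumptions and depend on the exported CenterNet ONNX graph):

```python
# Sketch of TIDL PTQ compile options with mixed precision.
# Option keys follow the edgeai-tidl-tools OSRT examples; tidl_tools_path,
# artifacts_folder and the 16-bit tensor names are assumptions.
compile_options = {
    "tidl_tools_path": "/path/to/tidl_tools",
    "artifacts_folder": "./model-artifacts/centernet",
    "tensor_bits": 8,                               # base PTQ precision
    "advanced_options:calibration_frames": 50,
    "advanced_options:calibration_iterations": 50,
    # Keep the detection-head outputs (where the exp()/size regression lives)
    # in 16-bit while the backbone stays in 8-bit.
    "advanced_options:output_feature_16bit_names_list": "hm_out,wh_out,reg_out",
}
```

These options are typically passed as provider options to the TIDL compilation provider when building the model artifacts.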