megvii-research / FQ-ViT

[IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer
Apache License 2.0

Could you offer the code for PTQ testing on the COCO dataset? #12

Closed · youdutaidi closed this issue 2 years ago

youdutaidi commented 2 years ago

I saw that your paper shows experiments on the COCO dataset, but when I tested on COCO, the mIoU was very poor.

linyang-zhh commented 2 years ago

Sorry, we have no plans to release the detection code at present.

However, I can give you some insights about the implementation. We use the official Swin Transformer code. First, we manually replace the full-precision layers (such as Conv2d, Linear, Act, LayerNorm, Attention, etc.) with their quantized versions (here). Second, we apply the calibration step (just a few forward passes) to the detector in the MMDetection framework, as done here. After those steps, we obtain a calibrated detector that can be quantized by PTQ.
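
As a rough illustration of those two steps, here is a minimal PyTorch sketch. Everything in it is hypothetical: `QLinear`, `from_float`, `replace_modules`, the `calibrate` flag, and the batch format are stand-in names, not FQ-ViT's actual API, and a real port would wrap Conv2d, LayerNorm, Attention, etc. in the same way.

```python
# Minimal sketch of the two steps above; all names here are hypothetical
# stand-ins for FQ-ViT's quantized modules, not its actual API.
import torch
import torch.nn as nn


class QLinear(nn.Linear):
    """Toy quantized Linear: collects activation statistics while
    `self.calibrate` is True; a real implementation would then
    fake-quantize weights and activations at inference time."""

    def __init__(self, in_features, out_features, bias=True):
        super().__init__(in_features, out_features, bias)
        self.calibrate = False

    @classmethod
    def from_float(cls, m: nn.Linear):
        # Copy the full-precision weights into the quantized wrapper.
        q = cls(m.in_features, m.out_features, m.bias is not None)
        q.load_state_dict(m.state_dict(), strict=False)
        return q


def replace_modules(model: nn.Module) -> nn.Module:
    """Step 1: recursively swap full-precision layers for quantized ones."""
    for name, child in model.named_children():
        if isinstance(child, nn.Linear):
            setattr(model, name, QLinear.from_float(child))
        else:
            replace_modules(child)
    return model


@torch.no_grad()
def calibrate(model: nn.Module, loader, num_batches: int = 10) -> nn.Module:
    """Step 2: a few forward passes to collect quantization statistics."""
    model.eval()
    for m in model.modules():
        if hasattr(m, "calibrate"):
            m.calibrate = True
    for i, batch in enumerate(loader):
        if i >= num_batches:
            break
        model(batch)  # an MMDet detector expects its own input dict; adapt
    for m in model.modules():
        if hasattr(m, "calibrate"):
            m.calibrate = False  # done calibrating; quantize from here on
    return model
```

In practice you would run `replace_modules` on the detector's Swin backbone, call `calibrate` with a handful of COCO images, and then evaluate the resulting detector with the usual MMDetection test script.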