Closed — youdutaidi closed this issue 2 years ago
Sorry, we have no plan to release the detection code at present.
However, I can give you some insights about the implementation. We use the official SwinTransformer code. Firstly, we manually replace the full-precision layers (such as Conv2d, Linear, Act, LayerNorm, Attention, etc.) with their quantized versions (here). Secondly, we apply the calibration step (just a few forward passes) to that detector in the MMDet framework, just like here. After those steps, we obtain a calibrated detector that can be quantized by PTQ.
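As a rough illustration of the calibration step described above, here is a minimal sketch of min/max range calibration for uniform asymmetric quantization. The names (`MinMaxObserver`, `quantize`, `dequantize`) and the 8-bit setup are hypothetical and for illustration only; they are not taken from the authors' actual code, which hooks into PyTorch/MMDet modules rather than plain lists.

```python
# Hypothetical PTQ calibration sketch: run a few "forward passes" only to
# record activation ranges, then derive quantization parameters from them.

class MinMaxObserver:
    """Tracks the running min/max of values seen during calibration."""
    def __init__(self):
        self.min_val = float("inf")
        self.max_val = float("-inf")

    def observe(self, values):
        self.min_val = min(self.min_val, min(values))
        self.max_val = max(self.max_val, max(values))

    def qparams(self, n_bits=8):
        # Asymmetric uniform quantization over the observed range.
        qmax = 2 ** n_bits - 1
        scale = (self.max_val - self.min_val) / qmax
        zero_point = round(-self.min_val / scale)
        return scale, zero_point


def quantize(values, scale, zero_point, n_bits=8):
    qmax = 2 ** n_bits - 1
    return [min(max(round(v / scale) + zero_point, 0), qmax) for v in values]


def dequantize(qvalues, scale, zero_point):
    return [(q - zero_point) * scale for q in qvalues]


# Calibration phase: a few batches are enough to fix the ranges.
observer = MinMaxObserver()
for batch in ([0.1, -0.5, 2.0], [1.5, -1.0, 0.3]):
    observer.observe(batch)

scale, zp = observer.qparams()

# After calibration, activations are quantized with the frozen parameters.
original = [0.1, -0.5, 2.0]
deq = dequantize(quantize(original, scale, zp), scale, zp)
```

Each dequantized value differs from the original by at most one quantization step, which is the behavior the calibration step is meant to guarantee before PTQ is applied.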
I saw that your paper reports experiments on the COCO dataset, so I tested it on COCO, but the mIoU is very poor.