open-mmlab / mmrazor

OpenMMLab Model Compression Toolbox and Benchmark.
https://mmrazor.readthedocs.io/en/latest/
Apache License 2.0

[Feature] Request for Additional Examples of QAT Training with mmdet #540

Open KangYuan1233 opened 1 year ago

KangYuan1233 commented 1 year ago

Hello,

Firstly, thank you for your work on this repository. The implementation of Quantization Aware Training (QAT) and Post Training Quantization (PTQ) is very helpful.

I've been trying to apply QAT to models using mmdet and I would appreciate if you could provide more examples on how to do this.

Thank you for considering my request.

Best,

humu789 commented 1 year ago

Thanks. You can refer to https://mmrazor.readthedocs.io/en/main/user_guides/quantization_user_guide.html for basic usage and examples. For OpenMMLab models, it is theoretically possible to apply QAT to your model just by changing the config file.
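To make "changing the config file" concrete, here is a hedged sketch of what such an override might look like, based on the mmrazor 1.x quantization guide's pattern (`MMArchitectureQuant` wrapping a float model, an `OpenVINOQuantizer` with a `global_qconfig`). The base config path, checkpoint path, and observer/qscheme choices below are illustrative assumptions; verify the exact type names against the guide and your installed mmrazor version:

```python
# Hypothetical QAT config sketch for an mmdet detector (names follow the
# mmrazor 1.x quantization user guide; paths and settings are placeholders).
_base_ = ['mmdet::retinanet/retinanet_r50_fpn_1x_coco.py']  # assumed base config

# Global quantization settings: 8-bit weights and activations.
global_qconfig = dict(
    w_observer=dict(type='mmrazor.PerChannelMinMaxObserver'),
    a_observer=dict(type='mmrazor.MovingAverageMinMaxObserver'),
    w_fake_quant=dict(type='mmrazor.FakeQuantize'),
    a_fake_quant=dict(type='mmrazor.FakeQuantize'),
    w_qscheme=dict(qdtype='qint8', bit=8, is_symmetry=True),
    a_qscheme=dict(qdtype='quint8', bit=8, is_symmetry=True),
)

# Wrap the float detector from the base config in a quantization architecture.
model = dict(
    _delete_=True,
    type='mmrazor.MMArchitectureQuant',
    architecture=_base_.model,
    float_checkpoint='path/to/float_model.pth',  # pretrained float weights (placeholder)
    quantizer=dict(
        type='mmrazor.OpenVINOQuantizer',
        global_qconfig=global_qconfig,
        tracer=dict(type='mmrazor.CustomTracer')))
```

Detection models may additionally need `skipped_methods` in the tracer (as the guide's classification examples do) so that untraceable head logic is left out of the quantized graph.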

CaMi1le commented 1 year ago

> Thanks. You can refer to https://mmrazor.readthedocs.io/en/main/user_guides/quantization_user_guide.html for basic usage and examples. For openmmlab's models, it is theoretically possible to apply qat to your model by changing the config file.

For MMDet models it's more complicated: a detector has several parts (backbone, neck, head) defined across different config files. Could the team provide some examples for this case?

PS: The model provided in the QAT guide does not work in mmrazor 1.0.0; issue #602 already tracks this. I worked around it by using mmcls==1.0.0rc6.