KangYuan1233 opened this issue 1 year ago
Thanks. You can refer to https://mmrazor.readthedocs.io/en/main/user_guides/quantization_user_guide.html for basic usage and examples. For OpenMMLab models, it is in principle possible to apply QAT to your model by changing the config file; see the sketch below.
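As a rough, untested sketch of what that config change could look like for a detection model, something along the lines of the following might work. The base config path, the float checkpoint path, the `skipped_methods` entries, and the training-loop settings are placeholders modelled on the patterns in the quantization user guide and the configs shipped with MMRazor, not a verified recipe; the exact field names and registry types should be checked against your MMRazor version.

```python
# Untested sketch of a QAT config for an MMDetection model, following the
# pattern in the MMRazor quantization user guide. Paths, observer choices and
# skipped_methods below are placeholders to adapt to your own model.
_base_ = ['mmdet::retinanet/retinanet_r50_fpn_1x_coco.py']  # placeholder base config

# INT8 quantization settings shared by all layers.
global_qconfig = dict(
    w_observer=dict(type='mmrazor.PerChannelMinMaxObserver'),
    a_observer=dict(type='mmrazor.MovingAverageMinMaxObserver'),
    w_fake_quant=dict(type='mmrazor.FakeQuantize'),
    a_fake_quant=dict(type='mmrazor.FakeQuantize'),
    w_qscheme=dict(qdtype='qint8', bit=8, is_symmetry=True),
    a_qscheme=dict(qdtype='quint8', bit=8, is_symmetry=True),
)

# Wrap the whole detector (backbone + neck + head inherited from the base
# config) in MMRazor's quantization architecture.
model = dict(
    _delete_=True,
    type='mmrazor.MMArchitectureQuant',
    architecture=_base_.model,
    float_checkpoint='path/to/pretrained_float_detector.pth',  # placeholder
    quantizer=dict(
        type='mmrazor.OpenVINOQuantizer',
        global_qconfig=global_qconfig,
        tracer=dict(
            type='mmrazor.CustomTracer',
            # Loss computation and post-processing in the detection head
            # usually cannot be traced; the exact entries depend on the head
            # class your model uses -- these two are guesses for an
            # anchor-based dense head.
            skipped_methods=[
                'mmdet.models.dense_heads.base_dense_head.BaseDenseHead.predict_by_feat',
                'mmdet.models.dense_heads.anchor_head.AnchorHead.loss_by_feat',
            ])))

# Replace the default training loop with MMRazor's QAT loop (epoch counts
# here are arbitrary placeholders).
train_cfg = dict(
    _delete_=True,
    type='mmrazor.QATEpochBasedLoop',
    max_epochs=10,
    val_interval=1)
```

The general idea is that you do not edit the backbone/neck/head configs themselves: you inherit the full detector from its existing MMDetection config via `_base_`, then wrap it with the quantization architecture and quantizer, so the only new file is this QAT config.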
For MMDetection models it is more complicated: a detector is composed of several parts such as the backbone, neck, and head, spread across different config files. Could the team provide some examples for this case?
PS: The model provided in the QAT guide does not work in mmrazor 1.0.0; issue #602 already tracks this. I worked around it by using mmcls==1.0.0rc6.
Hello,
Firstly, thank you for your work on this repository. The implementation of Quantization Aware Training (QAT) and Post Training Quantization (PTQ) is very helpful.
I've been trying to apply QAT to models built with mmdet, and I would appreciate it if you could provide more examples of how to do this.
Thank you for considering my request.
Best,