Open AhmedHessuin opened 3 years ago
Sorry for the late reply. There is a config for short QAT training: https://github.com/facebookresearch/d2go/blob/main/configs/qat_faster_rcnn_fbnetv3a_C4.yaml; a QAT config for mask_rcnn_fbnetv3a_C4.yaml could be created the same way by changing the `_BASE_` field.
🚀 Feature
Add Quantization-aware Training (QAT) support for mask_rcnn_fbnetv3a_C4.yaml.
Motivation & Examples
Can we add Quantization-aware Training for mask_rcnn_fbnetv3a_C4.yaml? This would help in creating better quantized models.