xingyueye closed this issue 2 years ago.
'cali_batchsize: 16' means MQBench will use 16 batches to calibrate the model. If you have set your batch size to 4 and use 4 GPUs, that is 16x4x4 = 256 images.
Please check the number of images used in calibration; I recommend at least 256.
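For reference, here is a minimal sketch of how the calibration set size works out and how the calibration batches might be collected. The helper name load_calibrate_data and the dataloader output format are illustrative assumptions, not MQBench's exact API:

```python
import torch

cali_batchsize = 16   # 'cali_batchsize' from the PTQ config: number of batches used for calibration
batch_size = 4        # per-GPU batch size set in the dataloader
num_gpus = 4          # number of GPUs used during calibration

num_images = cali_batchsize * batch_size * num_gpus
print(num_images)     # 16 * 4 * 4 = 256 images in total

def load_calibrate_data(data_loader, cali_batchsize):
    """Collect the first `cali_batchsize` batches for calibration (illustrative)."""
    cali_data = []
    for i, (images, _) in enumerate(data_loader):  # assumes (images, targets) batches
        if i >= cali_batchsize:
            break
        cali_data.append(images)
    return cali_data
```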
Hi~ I found that when I increase 'cali_batchsize', the adv-ptq process easily breaks down with a CUDA out of memory error. I'm confused about how the calibration data is allocated during the advanced-ptq process.
@xingyueye set keep_gpu to False.
Sorry for such a late reply, as I was on vacation.
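To make the keep_gpu suggestion concrete, here is a hedged sketch of where that flag might live. It assumes MQBench's advanced PTQ reconstruction reads a config object with a keep_gpu field, as the replies suggest; setting it to False keeps the cached calibration activations on CPU instead of GPU, trading speed for memory. The other keys and values shown are illustrative only:

```python
from easydict import EasyDict

# Illustrative reconstruction config; only keep_gpu is the point here.
ptq_reconstruction_config = EasyDict(dict(
    pattern='block',   # reconstruct block by block (illustrative value)
    max_count=10000,   # optimization iterations per block (illustrative value)
    keep_gpu=False,    # cache calibration data on CPU to reduce GPU memory usage
))

# Assumed usage, with names as suggested by the thread:
# model = ptq_reconstruction(model, cali_data, ptq_reconstruction_config)
```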
@PannenetsF I have the same issue, although I set keep_gpu to False.
This issue has not received any updates in 120 days. Please reply to this issue if it is still unresolved!
Hi~ I have quantized YOLOv5s models with naive-ptq and advanced-ptq respectively. The mAP of the base model is 36.3, while naive-ptq gives 35.8 and advanced-ptq gives 35.0. The advanced-ptq result is worse than the naive-ptq one. The advanced-ptq settings are as below:
Please help with how to get better advanced-ptq results.