gogocoded closed this issue 1 year ago
I exported ONNX from QAT weights with:

```shell
python qat_export.py --weights yolov6s_v2_reopt.pt --quant-weights yolov6s_v2_reopt_qat.pt --graph-opt --export-batch-size 1
```

and got the following output:

```
Evaluate annotation type bbox
DONE (t=18.93s).
Accumulating evaluation results...
DONE (t=2.18s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.423
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.625
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.449
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.153
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.468
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.614
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.459
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.655
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.671
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.417
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.746
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.841
(0.6246530867614292, 0.4231134092308667)
WARNING: Logging before flag parsing goes to stderr.
W1209 17:50:23.896001 139924396488448 tensor_quantizer.py:281] Use Pytorch's native experimental fake quantization.
/opt/conda/lib/python3.6/site-packages/pytorch_quantization/nn/modules/tensor_quantizer.py:286: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  inputs, amax.item() / bound, 0,
/opt/conda/lib/python3.6/site-packages/pytorch_quantization/nn/modules/tensor_quantizer.py:292: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  quant_dim = list(amax.shape).index(list(amax_sequeeze.shape)[0])
(the two TracerWarnings above are repeated once more)
/data/custom_dataset/YOLOv6-0.2.1/yolov6/assigners/anchor_generator.py:12: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
  for i, stride in enumerate(fpn_strides):
```

In the end, only `predictions.json` was saved. Why is that?
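For context, the `TracerWarning`s in the log come from `pytorch_quantization` calling `.item()` on the calibrated `amax` tensor while the model is being traced for export. Here is a minimal, illustrative sketch (not YOLOv6's actual code) of why that warning fires and why it is usually harmless for a fixed calibration:

```python
# Minimal sketch: calling .item() inside a traced module converts a tensor
# to a Python float, which torch.jit.trace bakes in as a constant and
# reports with a TracerWarning. Names here are illustrative only.
import torch


class FakeQuant(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Stands in for the quantizer's calibrated amax value.
        self.amax = torch.tensor(4.0)

    def forward(self, x):
        # .item() triggers "Converting a tensor to a Python number ..."
        scale = self.amax.item() / 127.0
        return torch.clamp(torch.round(x / scale), -128, 127) * scale


m = FakeQuant()
# Tracing emits the TracerWarning; the traced graph treats scale as constant.
traced = torch.jit.trace(m, torch.randn(4))
```

The traced module still matches the eager module for any input, because `amax` does not change after calibration; the warning would only matter if `amax` were expected to vary between inputs.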
Before Asking
[X] I have read the README carefully.
[X] I want to train my custom dataset; I have read the tutorials for training custom data carefully and organized my dataset correctly. (FYI: We recommend using the xx_finetune.py config files for custom datasets.)
[X] I have pulled the latest code from the main branch and run it again, and the problem still exists.
Search before asking
Question
After running sensitive-layer analysis, I set sensitive_layers_skip=True for QAT training; then, when exporting with qat_export, the error shown in the screenshot below occurs.
The config file is shown in the screenshot below.
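Since the config screenshot is not included above, here is a hedged sketch of what the QAT block of such a config might look like. Every key, path, and layer name below is an assumption for illustration, modeled loosely on the style of YOLOv6's quantization configs, not the reporter's actual file:

```python
# Hypothetical QAT config fragment (illustrative only; keys and layer
# names are assumptions, not taken from the reporter's config).
qat = dict(
    calib_pt='./yolov6s_v2_reopt_calib.pt',  # calibration checkpoint (assumed path)
    sensitive_layers_skip=True,              # skip layers found by sensitivity analysis
    sensitive_layers_list=[
        'detect.stems.0.conv',               # example layer names; yours will differ
        'detect.cls_convs.0.conv',
    ],
)
```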
Additional
No response