zccyman opened 5 months ago
@chenfucn, could you help take a look?
This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
Describe Question:
When I use W16A16 quantization on the main branch, I can't get qdq_error and xmodel_err, because both turn out to be empty. I eventually found that model.graph.value_info is the same as in the input model after the function "load_model_with_shape_infer" is called, so I don't know how to fix it. I am debugging with "mobilenetv2-7.onnx"; a minimal check is sketched below.
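For reference, here is a minimal sketch of how I check whether shape inference actually populates value_info. I am assuming the quant_utils import path from onnxruntime, and the model path is just my local copy of mobilenetv2-7.onnx:

```python
# Minimal repro sketch (assuming onnxruntime's quantization utilities;
# the model path is my local copy of mobilenetv2-7.onnx).
from pathlib import Path

import onnx
from onnxruntime.quantization.quant_utils import load_model_with_shape_infer

model_path = Path("mobilenetv2-7.onnx")

# What the quantization tooling does internally: load the model with shape inference.
model = load_model_with_shape_infer(model_path)
print("value_info entries after load_model_with_shape_infer:", len(model.graph.value_info))

# For comparison, run ONNX shape inference directly on the raw model.
raw = onnx.load(str(model_path))
inferred = onnx.shape_inference.infer_shapes(raw)
print("value_info entries after onnx.shape_inference:", len(inferred.graph.value_info))
```

If the first count stays at zero (i.e. value_info is never populated beyond the input model), that would explain why qdq_error and xmodel_err end up empty for me.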