Open mRaskovic opened 3 years ago
This bug was fixed in master by #8788. Please try our nightly build or the older 1.7 release.
This issue has been automatically marked as stale due to inactivity and will be closed in 7 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
Describe the bug
The example from the README file for model quantization doesn't work. Location: https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/quantization/E2E_example_model/image_classification/cpu
It crashes for the command: python run.py --input_model mobilenetv2-7.onnx --output_model mobilenetv2-7.quant.onnx --calibrate_dataset ./test_images/
The error I'm getting:
But it works properly for ResNet: python run.py --input_model .\resnet50-v1-13.onnx --output_model .\resnet50-v1-13.quant.onnx --calibrate_dataset ./test_images/
Urgency
Not urgent
System information
To Reproduce
Clone the repository and run the command from the README file for the specified script.
Expected behavior
It should perform the network quantization and compare inference times for the FP32 and quantized versions, the same way it works for ResNet.
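For context, run.py ultimately drives onnxruntime.quantization.quantize_static with a calibration data reader built from --calibrate_dataset. A minimal sketch of that flow is below; the class and function names (DummyCalibrationReader, quantize_mobilenet) and the random calibration batches are illustrative assumptions, not the exact code in run.py, where the reader subclasses onnxruntime.quantization.CalibrationDataReader and preprocesses real images:

```python
import numpy as np


class DummyCalibrationReader:
    """Feeds calibration batches to the quantizer via get_next().

    run.py's real reader subclasses
    onnxruntime.quantization.CalibrationDataReader and loads images
    from --calibrate_dataset; here we fabricate random NCHW batches.
    """

    def __init__(self, input_name="input", num_batches=4):
        self.input_name = input_name
        # MobileNetV2 expects a 1x3x224x224 float32 input tensor.
        self.batches = iter(
            np.random.rand(1, 3, 224, 224).astype(np.float32)
            for _ in range(num_batches)
        )

    def get_next(self):
        batch = next(self.batches, None)
        if batch is None:
            return None  # signals the quantizer that calibration data is exhausted
        return {self.input_name: batch}


def quantize_mobilenet(model_in="mobilenetv2-7.onnx",
                       model_out="mobilenetv2-7.quant.onnx"):
    # Requires the onnxruntime quantization tools to be installed;
    # not executed here, shown only to illustrate the call run.py makes.
    from onnxruntime.quantization import quantize_static, QuantType

    quantize_static(model_in, model_out, DummyCalibrationReader(),
                    weight_type=QuantType.QInt8)
```

With a real image-backed reader in place of DummyCalibrationReader, this is the step where the MobileNetV2 model crashes while the ResNet model succeeds.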