Silvan-K opened this issue 3 years ago
This issue has been automatically marked as stale due to inactivity and will be closed in 7 days if no further activity occurs. If further support is needed, please provide an update and/or more details.
@yufenglee, I was just wondering what the status of this is. Has anyone been able to reproduce the problem?
Hello @Silvan-K,
https://github.com/microsoft/onnxruntime/blob/3d7518762ace6929be98e1203174c2dbf1ac094e/onnxruntime/python/tools/quantization/operators/direct_q8.py#L72-L78
It seems that MaxPool skips QDQ quantization when its preceding activation is not quantized. In the most recent version, you can set the ForceQuantizeNoInputCheck flag to True to avoid this behaviour.
Describe the bug
MaxPool nodes are not quantized when the preceding Relu is not quantized.
Urgency
Development of a backend is blocked by this, so it would be great if someone could provide some insights as soon as possible.
System information
To Reproduce
Expected behavior
Would expect a QuantizeLinear node before the MaxPool node in the quantized model.
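The expectation above can be checked mechanically. A minimal sketch in plain Python, using schematic `(op_type, inputs, outputs)` tuples rather than a real ONNX graph; the helper name `quantize_precedes` is hypothetical:

```python
# Hypothetical check: does every MaxPool input come from a
# QuantizeLinear node? Nodes are (op_type, inputs, outputs) tuples,
# a simplified stand-in for onnx GraphProto nodes.
def quantize_precedes(nodes, target="MaxPool", quant="QuantizeLinear"):
    # Map each tensor name to the op_type of the node that produces it.
    producer = {}
    for op, _, outputs in nodes:
        for out in outputs:
            producer[out] = op
    # Every input of every target node must be produced by `quant`.
    return all(producer.get(inp) == quant
               for op, inputs, _ in nodes if op == target
               for inp in inputs)

# Graph matching the expected behaviour: Q before MaxPool.
quantized = [("QuantizeLinear", ["x"], ["x_q"]),
             ("MaxPool", ["x_q"], ["y"])]
# Graph showing the reported bug: MaxPool fed by an unquantized Relu.
skipped = [("Relu", ["x"], ["r"]),
           ("MaxPool", ["r"], ["y"])]
print(quantize_precedes(quantized), quantize_precedes(skipped))
# -> True False
```

The same walk over `model.graph.node` in a real quantized ONNX file would confirm whether the QuantizeLinear node was emitted.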