onnx / tensorflow-onnx

Convert TensorFlow, Keras, TensorFlow.js and TFLite models to ONNX

Hardcoded `UInt8` idtype in FakeQuantWithMinMaxArgs causes error #1764

Open masahi opened 3 years ago

masahi commented 3 years ago

These two lines look odd to me... Why hardcode the input dtype to uint8? https://github.com/onnx/tensorflow-onnx/blob/482330f9958eb45c805933f04e2b0a5c7a494f23/tf2onnx/onnx_opset/quantize.py#L57 https://github.com/onnx/tensorflow-onnx/blob/482330f9958eb45c805933f04e2b0a5c7a494f23/tf2onnx/onnx_opset/quantize.py#L63-L68

I got the following error when converting the QAT-ed YOLOv4 model from https://github.com/openvinotoolkit/nncf/tree/develop/examples/tensorflow/object_detection. The TF SavedModel directory was uploaded to https://drive.google.com/file/d/1SA25mRzQ9Fi5OpTVWiODeoU28kXtvhJi/view?usp=sharing for repro.

ValueError: make_sure failure: Cannot convert FakeQuantWithMinMaxVars node StatefulPartitionedCall/StatefulPartitionedCall/yolo_v4/image_input/fake_quantize/AsymmQuant/FakeQuantWithMinMaxVars with min=0.02735152840614319 max=0.9701830148696899 numbits=8 because zero_scale=-7.0 is outside uint8 boundary
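
For context, here is a minimal sketch of the affine-quantization arithmetic that appears to produce these numbers; the formulas for the scale and zero point are my assumption of what quantize.py computes, not a quote of it:

```python
# Sketch only: assumed scale/zero-point formulas, reproducing the
# numbers in the error above (not the actual tf2onnx source).
min_val = 0.02735152840614319
max_val = 0.9701830148696899
num_bits = 8

# Assumed: the scale spreads [min, max] over the full 8-bit range.
scale = (max_val - min_val) / (2 ** num_bits - 1)
# Assumed: the zero point is the integer that real 0.0 maps to.
zero_point = round(-min_val / scale)

print(scale)       # ~0.003697
print(zero_point)  # -7, matching zero_scale=-7.0 in the error
```

Because min > 0 for this tensor, the zero point comes out negative, which can never fit the hardcoded uint8 range [0, 255].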

I was able to export this model by changing that idtype to Int8.
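
A quick range check of why that swap works (my own illustration, not project code):

```python
zero_point = -7  # computed from this model's min/max, as above

# Standard 8-bit integer ranges for the two candidate idtypes.
ranges = {"uint8": (0, 255), "int8": (-128, 127)}

for name, (lo, hi) in ranges.items():
    print(name, lo <= zero_point <= hi)
# uint8 False -> the make_sure check fails with the hardcoded dtype
# int8 True   -> the zero point is representable after the change
```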

hwangdeyu commented 2 years ago

@masahi, sorry for the late reply. The reason FakeQuantWithMinMaxArgs only supports uint8 is the quantization range it maps to; the TF docs state:

> inputs values are quantized into the quantization range ([0; 2^num_bits - 1] when narrow_range is false and [1; 2^num_bits - 1] when it is true)
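
A small sketch of the quoted rule, just to make the range explicit (my own illustration):

```python
def quant_range(num_bits, narrow_range):
    """Quantization range per the TF docs quoted above."""
    lo = 1 if narrow_range else 0
    return lo, 2 ** num_bits - 1

print(quant_range(8, narrow_range=False))  # (0, 255)
print(quant_range(8, narrow_range=True))   # (1, 255)
```

Either way the range is non-negative, which is why uint8 was chosen as the idtype.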

Could you share the TF SavedModel directory again if possible? The current link is unreachable. I would like to take a look and find the root cause.