RnDProjectsDeebul / MohanRajRnD

Apache License 2.0

Quantising a PyTorch model in TensorFlow Framework #7

Open mohanrajroboticist opened 1 year ago

mohanrajroboticist commented 1 year ago

Performing post-training full integer quantisation of a PyTorch-developed model in the TensorFlow framework throws the following error:

PyTorch model -> ONNX model -> TensorFlow model -> TensorFlow PTQ Quantisation

(screenshot of the TensorFlow Lite RuntimeError traceback)

Reference: https://stackoverflow.com/questions/66957392/tensorflow-lite-runtimeerror

mohanrajroboticist commented 1 year ago

https://colab.research.google.com/drive/1yv_F0dbz4uF5P5XhdzoyhJnwI1QZdP2A?usp=sharing

deebuls commented 1 year ago

https://github.com/onnx/onnx-tensorflow/issues/862

https://github.com/onnx/onnx-tensorflow/issues/862#issuecomment-776303182

Currently ONNX supports NCHW only. That means the model and node inputs must be in NCHW so the operators can work according to the specs. In order to support NHWC, an additional option is needed in ONNX to indicate the data format, NCHW or NHWC.

deebuls commented 1 year ago

The main problem is the conversion from PyTorch to ONNX to TensorFlow. Even though it does not raise any error, the converter (onnx_tf) saves the model with PyTorch's data layout (NCHW), and they have an open issue on this.
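Until the converter handles layouts properly, a common workaround is to transpose TensorFlow-style NHWC tensors to NCHW before feeding the converted model and transpose the output back. A minimal NumPy sketch, where `nchw_model` is just a stand-in for the converted graph:

```python
import numpy as np

def nchw_model(x):
    # Stand-in for the converted model, which still expects
    # PyTorch-style NCHW input: (batch, channels, height, width)
    assert x.shape[1] == 3, "expected channels at axis 1"
    return x.mean(axis=1, keepdims=True)  # dummy computation

# TensorFlow-style NHWC input: (batch, height, width, channels)
nhwc_input = np.random.rand(1, 32, 32, 3).astype(np.float32)

# NHWC -> NCHW before the model, NCHW -> NHWC after it
nchw_input = np.transpose(nhwc_input, (0, 3, 1, 2))    # (1, 3, 32, 32)
nchw_output = nchw_model(nchw_input)                   # (1, 1, 32, 32)
nhwc_output = np.transpose(nchw_output, (0, 2, 3, 1))  # (1, 32, 32, 1)
```

The same idea applies at the graph level: wrapping the converted model in explicit transpose ops keeps the rest of the TFLite tooling, which assumes NHWC, working.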

mohanrajroboticist commented 1 year ago

Channels-First: NCHW - The channels come before the height and width dimensions

Channels-Last: NHWC - The channels come after the height and width dimensions
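A small NumPy example of the two layouts for a single image (batch dimension omitted; the values are arbitrary):

```python
import numpy as np

# One RGB image, 4x5 pixels, stored channels-first (CHW)
channels_first = np.arange(3 * 4 * 5).reshape(3, 4, 5)

# The same image in channels-last (HWC) layout
channels_last = np.transpose(channels_first, (1, 2, 0))

print(channels_first.shape)  # (3, 4, 5)
print(channels_last.shape)   # (4, 5, 3)

# Same pixel value, addressed differently in each layout:
# (channel, row, col) vs (row, col, channel)
assert channels_first[2, 1, 3] == channels_last[1, 3, 2]
```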