Open jinevening opened 11 months ago
I used the script below to compare the outputs of the ONNX and TFLite models.
$ onecc import onnx -i bn.onnx -o bn.circle --save_intermediate
import tensorflow as tf
import numpy as np
import onnxruntime
onnx_path = 'bn.onnx'
tflite_path = 'bn.tflite'
# Run the ONNX model.
sess = onnxruntime.InferenceSession(onnx_path, None)
input_name = sess.get_inputs()[0].name
i = np.random.rand(1, 512, 127, 1).astype(np.float32)
onnx_o = sess.run(None, {input_name: i})  # None fetches all outputs
print(onnx_o)

# Run the converted TFLite model on the same input.
tf_interp = tf.lite.Interpreter(tflite_path)
tf_interp.allocate_tensors()
output = tf_interp.get_output_details()[0]  # Model has a single output.
input = tf_interp.get_input_details()[0]  # Model has a single input.
tf_interp.set_tensor(input['index'], i)
tf_interp.invoke()
tflite_o = tf_interp.get_tensor(output['index'])
print(tflite_o)

print(np.allclose(onnx_o, tflite_o, rtol=1.e-5, atol=1.e-5))
Q) Are the constant values of BN the same for ONNX vs TFLite? I'm curious if onnx-runtime does something additional if onnx has training mode...
The constant values change when the ONNX model is converted to TFLite. In ONNX they are per-channel mean/variance (plus scale and bias), but in TFLite they are folded into channelwise mul/add constants.
I'm curious if onnx-runtime does something additional if onnx has training mode...
I guess so.
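For reference, the mean/variance-to-mul/add folding mentioned above can be sketched in NumPy. The per-channel values here are hypothetical, just to show that the folded form is algebraically equivalent to the original BN formula when the stored (running) statistics are used:

```python
import numpy as np

# Hypothetical BN constants as stored in ONNX: per-channel scale (gamma),
# bias (beta), mean, and variance.
gamma = np.array([1.0, 0.5], dtype=np.float32)
beta  = np.array([0.1, -0.2], dtype=np.float32)
mean  = np.array([0.3, 0.7], dtype=np.float32)
var   = np.array([2.0, 4.0], dtype=np.float32)
eps   = 1e-5

# What the conversion folds them into: channelwise mul/add constants.
mul = gamma / np.sqrt(var + eps)
add = beta - mean * mul

def ch(v):
    # Reshape a per-channel vector for NCHW broadcasting.
    return v.reshape(1, -1, 1, 1)

x = np.random.rand(1, 2, 3, 1).astype(np.float32)  # toy NCHW input
bn_onnx   = ch(gamma) * (x - ch(mean)) / ch(np.sqrt(var + eps)) + ch(beta)
bn_folded = ch(mul) * x + ch(add)

print(np.allclose(bn_onnx, bn_folded, rtol=1e-5, atol=1e-5))  # True
```

So the folding itself is lossless; the mismatch has to come from the statistics the two runtimes actually use.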
What
If an ONNX model includes a BatchNorm Op and is exported in training mode, the converted TFLite model works differently from the original ONNX model. If training=TrainingMode.TRAINING is removed, everything works fine.
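A plausible explanation of the mismatch, sketched in NumPy with hypothetical running statistics: a BN exported in training mode normalizes with the statistics of the current batch, while the folded TFLite mul/add bakes in the stored running statistics, so the two outputs diverge whenever batch and running stats differ:

```python
import numpy as np

np.random.seed(0)
x = np.random.rand(1, 512, 127, 1).astype(np.float32)

# Hypothetical stored running statistics (what the folded TFLite
# mul/add constants would be derived from).
running_mean = np.full(512, 0.5, dtype=np.float32)
running_var  = np.full(512, 0.25, dtype=np.float32)
gamma = np.ones(512, dtype=np.float32)
beta  = np.zeros(512, dtype=np.float32)
eps = 1e-5

def bn(x, mean, var):
    shape = (1, -1, 1, 1)
    return (gamma.reshape(shape) * (x - mean.reshape(shape))
            / np.sqrt(var.reshape(shape) + eps) + beta.reshape(shape))

# Eval mode: normalize with running statistics.
eval_out = bn(x, running_mean, running_var)

# Training mode: normalize with the statistics of this batch.
batch_mean = x.mean(axis=(0, 2, 3))
batch_var  = x.var(axis=(0, 2, 3))
train_out = bn(x, batch_mean, batch_var)

print(np.allclose(eval_out, train_out, rtol=1e-5, atol=1e-5))  # False
```

This is consistent with the report: once training=TrainingMode.TRAINING is removed, both models normalize with the same (running) statistics and the outputs match.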