Closed: Jerryzhangzhao closed this issue 3 years ago
I found that there is a bug in the conversion behavior of Densify in Float16. Please wait a little while we fix it.
Thank you for your reply. Looking forward to your good news.
Fixed the conversion bug in Densify. I will update the tool soon.
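For reference, here is a minimal sketch for checking which tensors in a .tflite file carry sparsity metadata (the ones a Densify op expands at runtime). It assumes the public get_tensor_details() API of tf.lite.Interpreter exposes the same 'sparsity_parameters' field that appears in the output details printed below:

import tensorflow as tf

# List tensors that carry sparsity metadata; a non-empty
# 'sparsity_parameters' dict marks a sparse tensor that the
# Densify op materializes at inference time.
interpreter = tf.lite.Interpreter('pose_detection.tflite')
interpreter.allocate_tensors()
for detail in interpreter.get_tensor_details():
    if detail['sparsity_parameters']:
        print(detail['index'], detail['name'], detail['shape'])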
import tensorflow as tf
import numpy as np
import pprint

# Run the original MediaPipe model on a dummy all-ones input.
interpreter = tf.lite.Interpreter('pose_detection.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_data = np.ones([1,224,224,3], dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data1 = interpreter.get_tensor(output_details[0]['index'])
output_data2 = interpreter.get_tensor(output_details[1]['index'])
resultA = [output_data1, output_data2]

print('############################################')

# Run the converted model on the same input.
interpreter = tf.lite.Interpreter('model_float32.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_data = np.ones([1,224,224,3], dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
pprint.pprint(output_details)
output_data1 = interpreter.get_tensor(output_details[0]['index'])
output_data2 = interpreter.get_tensor(output_details[1]['index'])
resultB = [output_data1, output_data2]

print('')
print('')

# The two models expose their outputs in opposite order, so
# resultA[1] is compared against resultB[0] and vice versa.
print(f'@@@@@@@@@@@@@@@@@@@@@@@@@@@ output1.shape: {resultA[1].shape}')
print('================== pose_detection.tflite')
print(resultA[1])
print('================== model_float32.tflite')
print(resultB[0])
print(f'matching result: {(resultA[1] == resultB[0]).all()}')
print('')
print(f'@@@@@@@@@@@@@@@@@@@@@@@@@@@ output2.shape: {resultA[0].shape}')
print('================== pose_detection.tflite')
print(resultA[0])
print('================== model_float32.tflite')
print(resultB[1])
print(f'matching result: {(resultA[0] == resultB[1]).all()}')
$ python3 test.py
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
############################################
[{'dtype': <class 'numpy.float32'>,
  'index': 255,
  'name': 'Identity_1:0',
  'quantization': (0.0, 0),
  'quantization_parameters': {'quantized_dimension': 0,
                              'scales': array([], dtype=float32),
                              'zero_points': array([], dtype=int32)},
  'shape': array([ 1, 2254, 1], dtype=int32),
  'shape_signature': array([ 1, 2254, 1], dtype=int32),
  'sparsity_parameters': {}},
 {'dtype': <class 'numpy.float32'>,
  'index': 259,
  'name': 'Identity:0',
  'quantization': (0.0, 0),
  'quantization_parameters': {'quantized_dimension': 0,
                              'scales': array([], dtype=float32),
                              'zero_points': array([], dtype=int32)},
  'shape': array([ 1, 2254, 12], dtype=int32),
  'shape_signature': array([ 1, 2254, 12], dtype=int32),
  'sparsity_parameters': {}}]
@@@@@@@@@@@@@@@@@@@@@@@@@@@ output1.shape: (1, 2254, 1)
================== pose_detection.tflite
[[[-251.04146 ]
[ -81.197624]
[-705.5628 ]
...
[ -58.29257 ]
[ -59.44347 ]
[ -58.837376]]]
================== model_float32.tflite
[[[-251.04146 ]
[ -81.197624]
[-705.5628 ]
...
[ -58.29257 ]
[ -59.44347 ]
[ -58.837376]]]
matching result: True
@@@@@@@@@@@@@@@@@@@@@@@@@@@ output2.shape: (1, 2254, 12)
================== pose_detection.tflite
[[[-1.71382790e+01 5.69697990e+01 5.17359581e+01 ... -2.57987633e+01
1.23056160e+02 -3.87240562e+01]
[-3.73491402e+01 8.26324368e+00 4.42917137e+01 ... 1.18147697e+02
-3.52648254e+02 -5.23823364e+02]
[-3.37248802e+01 6.19345665e+01 -1.27373638e+01 ... -1.38262711e+02
7.00320358e+01 8.63410034e+01]
...
[ 3.76767546e-01 5.93886137e-01 2.33805090e-01 ... -6.88491344e-01
-9.57275778e-02 -1.07318856e-01]
[ 2.64693379e-01 -1.73429400e-01 2.06973523e-01 ... 4.67089027e-01
1.69045091e-01 -1.27619398e+00]
[-5.13797581e-01 2.89796919e-01 2.30508775e-01 ... -2.50154823e-01
-3.97548974e-01 8.79177511e-01]]]
================== model_float32.tflite
[[[-1.71382790e+01 5.69697990e+01 5.17359581e+01 ... -2.57987633e+01
1.23056160e+02 -3.87240562e+01]
[-3.73491402e+01 8.26324368e+00 4.42917137e+01 ... 1.18147697e+02
-3.52648254e+02 -5.23823364e+02]
[-3.37248802e+01 6.19345665e+01 -1.27373638e+01 ... -1.38262711e+02
7.00320358e+01 8.63410034e+01]
...
[ 3.76767546e-01 5.93886137e-01 2.33805090e-01 ... -6.88491344e-01
-9.57275778e-02 -1.07318856e-01]
[ 2.64693379e-01 -1.73429400e-01 2.06973523e-01 ... 4.67089027e-01
1.69045091e-01 -1.27619398e+00]
[-5.13797581e-01 2.89796919e-01 2.30508775e-01 ... -2.50154823e-01
-3.97548974e-01 8.79177511e-01]]]
matching result: True
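As a side note, the check above uses exact equality, which happens to hold here; for float32 outputs a tolerance-based comparison such as np.allclose is usually the safer check. A minimal sketch, reusing resultA and resultB from the test script above:

import numpy as np

# Tolerance-based check; exact bit equality can fail even for a
# correct conversion when floating-point ops are reordered.
print(np.allclose(resultA[1], resultB[0], rtol=1e-5, atol=1e-6))
print(np.allclose(resultA[0], resultB[1], rtol=1e-5, atol=1e-6))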
Great, thanks @PINTO0309
Hi, can you upload the pose_detection.onnx model?
Hi, thanks for your great work.
Recently I have been working on pose estimation with MediaPipe. I converted the pose_detection.tflite model to ONNX with your tflite2tensorflow; the conversion runs fine and the log says it succeeded. But when I use the converted .onnx model, the output values seem incorrect and differ from what the original .tflite model produces.
With the original tflite model, the max confidence value of a bounding box is 0.9, but with the converted model the max value is only 0.078, which is not correct. I also tried the model you had already converted, and the result is wrong as well. Is there something wrong with my steps or code?
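One quick way to narrow this down is to compare the raw ONNX outputs with the raw tflite outputs before any score decoding; a common cause of wildly different confidence values is an NHWC/NCHW input-layout mismatch. A minimal sketch with onnxruntime (the file name model_float32.onnx and the layout check are assumptions; verify them against your model):

import numpy as np
import onnxruntime as ort

# Hypothetical file name; use the path of your converted model.
sess = ort.InferenceSession('model_float32.onnx')
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)  # inspect whether the input is NHWC or NCHW

# Same dummy input as the tflite test earlier in this thread.
x = np.ones([1, 224, 224, 3], dtype=np.float32)
if len(inp.shape) == 4 and inp.shape[1] == 3:
    x = x.transpose(0, 3, 1, 2)  # NHWC -> NCHW if the model is channels-first

outputs = sess.run(None, {inp.name: x})
for out in outputs:
    print(out.shape, float(out.min()), float(out.max()))

If these raw values match the tflite outputs, the conversion itself is fine and the discrepancy is in the post-processing.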
1. Windows 10
2. x86_64
3. Version of OpenVINO : none
4. Version of TensorFlow : v2.6.0
5. Version of TensorRT : none
6. Version of TFJS : none
7. Version of coremltools : none
8. Version of ONNX : 1.10.1
9. Download URL for .tflite IR model
10. URL of the repository from which the transformed model was taken : https://github.com/google/mediapipe/tree/master/mediapipe/modules/pose_detection
11. URL or source code for simple inference testing code
12. Issue Details