Closed · TimYao18 closed this issue 5 months ago
The output of step 3 during the conversion to TFLite is below:

```
2024-01-10 10:59:33.953038: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:378] Ignored output_format.
2024-01-10 10:59:33.953063: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:381] Ignored drop_control_dependency.
Summary on the non-converted ops:
---------------------------------
 * Accepted dialects: tfl, builtin, func
 * Non-Converted Ops: 17122, Total Ops 34728, % non-converted = 49.30 %
 * 17122 ARITH ops
- arith.constant: 17122 occurrences (f32: 17109, i32: 13)
  (f32: 99)
  (f32: 24)
  (f32: 17001)
  (f32: 73)
  (f32: 45)
  (uq_8: 70)
  (f32: 7)
  (f32: 1)
  (f32: 5)
  (f32: 97)
  (f32: 1)
  (f32: 180)
2024-01-10 10:59:39.877035: I tensorflow/compiler/mlir/lite/flatbuffer_export.cc:2989] Estimated count of arithmetic ops: 9.246 G ops, equivalently 4.623 G MACs
```
Now I can convert model-small.onnx to TFLite, but the resulting images are blurry. Where did I go wrong? Please help me~
1. Rename the model input (the exported MiDaS model names it `"0"`, which I map to `arg_0`):

```python
import onnx
from onnx import helper

onnx_model_path = "model-small.onnx"
onnx_model = onnx.load(onnx_model_path)

# Map the old input name to the new one.
name_map = {"0": "arg_0"}

new_inputs = []
for inp in onnx_model.graph.input:
    if inp.name in name_map:
        # Create a new ValueInfoProto with the new name.
        new_inp = helper.make_tensor_value_info(
            name_map[inp.name],
            inp.type.tensor_type.elem_type,
            [dim.dim_value for dim in inp.type.tensor_type.shape.dim])
        new_inputs.append(new_inp)
    else:
        new_inputs.append(inp)

# Clear the old inputs and add the renamed ones.
onnx_model.graph.ClearField("input")
onnx_model.graph.input.extend(new_inputs)

# Replace the old input name wherever nodes reference it.
for node in onnx_model.graph.node:
    for i, input_name in enumerate(node.input):
        if input_name in name_map:
            node.input[i] = name_map[input_name]

# Save the renamed model.
onnx_model_path = "model-small-fix.onnx"
onnx.save(onnx_model, onnx_model_path)
```
2. Convert it into TensorFlow SavedModel format (the TF model's output is confirmed OK):
```python
import onnx
from onnx_tf.backend import prepare
model_path = "model-small-fix.onnx"
output_path = "modified_model_2"
onnx_model = onnx.load(model_path) # load onnx model
tf_rep = prepare(onnx_model) # prepare tf representation
tf_rep.export_graph(output_path)  # export the model
```

3. Convert the SavedModel to TFLite:

```python
import tensorflow as tf

path = 'modified_model_2'
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir=path)
tf_lite_model = converter.convert()
with open('model_1.tflite', 'wb') as f:
    f.write(tf_lite_model)
```

Running `python convert_tf2tflite_savedmodel.py` prints:

```
2024-01-10 14:26:37.071501: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:378] Ignored output_format.
2024-01-10 14:26:37.071524: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:381] Ignored drop_control_dependency.
2024-01-10 14:26:37.072913: I tensorflow/cc/saved_model/reader.cc:83] Reading SavedModel from: /modified_model_2
2024-01-10 14:26:37.089615: I tensorflow/cc/saved_model/reader.cc:51] Reading meta graph with tags { serve }
2024-01-10 14:26:37.089642: I tensorflow/cc/saved_model/reader.cc:146] Reading SavedModel debug info (if present) from: /modified_model_2
2024-01-10 14:26:37.110969: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:388] MLIR V1 optimization pass is not enabled
2024-01-10 14:26:37.118290: I tensorflow/cc/saved_model/loader.cc:233] Restoring SavedModel bundle.
2024-01-10 14:26:37.263435: I tensorflow/cc/saved_model/loader.cc:217] Running initialization op on SavedModel bundle at path: /modified_model_2
2024-01-10 14:26:37.389643: I tensorflow/cc/saved_model/loader.cc:316] SavedModel load for tags { serve }; Status: success: OK. Took 316731 microseconds.
2024-01-10 14:26:37.545668: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:269] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
Summary on the non-converted ops:
---------------------------------
 * Accepted dialects: tfl, builtin, func
 * Non-Converted Ops: 293, Total Ops 804, % non-converted = 36.44 %
 * 293 ARITH ops
- arith.constant: 293 occurrences (f32: 281, i32: 12)
  (f32: 99)
  (f32: 73)
  (f32: 24)
  (f32: 73)
  (f32: 45)
  (f32: 7)
  (f32: 1)
  (f32: 5)
  (f32: 1)
  (f32: 180)
2024-01-10 14:26:38.047603: I tensorflow/compiler/mlir/lite/flatbuffer_export.cc:2989] Estimated count of arithmetic ops: 9.246 G ops, equivalently 4.623 G MACs
```
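One way to catch the layout problem early is to inspect the converted model's input shape with `tf.lite.Interpreter`. A minimal sketch using a stand-in Keras model so it runs without the MiDaS files; for the real model, pass `model_path="model_1.tflite"` instead of `model_content`:

```python
import tensorflow as tf

# Stand-in model with an NCHW-shaped input, converted to TFLite in memory.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3, 256, 256)),
    tf.keras.layers.Activation("linear"),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Inspect the input tensor the converted model actually expects.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]["shape"])  # [  1   3 256 256]
```

A channels-last export would show `[1 256 256 3]` here, matching the official MiDaS TFLite model.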
I found that the input shape of my TensorFlow Lite model, (1, 3, 256, 256), is different from that of the MiDaS-provided model, (1, 256, 256, 3).

I switched to `onnx2tf`, which automatically transposes (N, C, H, W) to (N, H, W, C), so I will close this issue.
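For anyone who keeps the channels-first model instead of re-converting with `onnx2tf`, the input tensor can be transposed on the application side before invoking the interpreter. A sketch with a dummy tensor:

```python
import numpy as np

# Dummy preprocessed image in the ONNX/PyTorch layout (N, C, H, W).
nchw = np.random.rand(1, 3, 256, 256).astype(np.float32)

# TFLite models exported channels-last (e.g. via onnx2tf, like the official
# MiDaS .tflite) expect (N, H, W, C); np.transpose reorders the axes.
nhwc = np.transpose(nchw, (0, 2, 3, 1))
print(nhwc.shape)  # (1, 256, 256, 3)
```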
Hi @TimYao18, I followed your steps and the test results are normal, but I also ran into this problem on a mobile device. I hope you can share how you solved it, thanks~
Hi, I want to try to quantize the MiDaS 2.1 model into TFLite. The model I converted gives wrong results on mobile, but correct results in Python.

I downloaded the MiDaS ONNX model and then ran the Python code below:
```python
import onnx
from onnx import helper

onnx_model_path = "model-small.onnx"
onnx_model = onnx.load(onnx_model_path)

# Define a mapping from old names to new names.
name_map = {"0": "arg_0"}

# Initialize a list to hold the new inputs.
new_inputs = []

# Iterate over the inputs and change their names if needed.
for inp in onnx_model.graph.input:
    if inp.name in name_map:
        # Create a new ValueInfoProto with the new name.
        new_inp = helper.make_tensor_value_info(
            name_map[inp.name],
            inp.type.tensor_type.elem_type,
            [dim.dim_value for dim in inp.type.tensor_type.shape.dim])
        new_inputs.append(new_inp)
    else:
        new_inputs.append(inp)

# Clear the old inputs and add the new ones.
onnx_model.graph.ClearField("input")
onnx_model.graph.input.extend(new_inputs)

# Go through all nodes in the model and replace the old input name with the new one.
for node in onnx_model.graph.node:
    for i, input_name in enumerate(node.input):
        if input_name in name_map:
            node.input[i] = name_map[input_name]

# Save the renamed ONNX model.
onnx_model_path = "model-small-fix.onnx"
onnx.save(onnx_model, onnx_model_path)
```
I put the converted model into the `mobile/ios` folder of the current project. It blurs any scene and does not seem to run on the NPU (Core ML delegate). Can anyone help me: how do I convert the MiDaS 2.1 model from any of these formats to TFLite?