google-coral / edgetpu

Coral issue tracker (and legacy Edge TPU API source)
https://coral.ai
Apache License 2.0

Issue while converting ONNX to TFLite #830

Open ChaitraSaiK opened 8 months ago

ChaitraSaiK commented 8 months ago

I am trying to convert a PyTorch model to ONNX and then to TFLite to deploy it onto a device.

This is the code:

def export_to_onnx(self, output_path, opset_version=12):
    self.model.eval()
    dummy_input = torch.randn((1, 6, 2)).to(self.device)
    torch.onnx.export(
        self.model,
        dummy_input,
        output_path,
        verbose=True,
        opset_version=opset_version,
    )
    print("Model exported to ONNX format")
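As a sanity check before the TFLite step, the exported ONNX file can be validated and run once with onnxruntime. A minimal sketch, assuming the model was exported to model.onnx (an example path standing in for output_path above):

import numpy as np
import onnx
import onnxruntime as ort

# Structurally validate the exported graph (raises if the model is malformed).
onnx_model = onnx.load("model.onnx")  # example path, not from the original post
onnx.checker.check_model(onnx_model)

# Run one inference with the same dummy shape used during export.
sess = ort.InferenceSession("model.onnx")
input_name = sess.get_inputs()[0].name
dummy = np.random.randn(1, 6, 2).astype(np.float32)
print([out.shape for out in sess.run(None, {input_name: dummy})])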

def export_to_tflite(self, onnx_model_path, tflite_model_path):
    # Assumes: import onnx; import tensorflow as tf; from onnx_tf.backend import prepare
    onnx_model = onnx.load(onnx_model_path)
    tf_rep = prepare(onnx_model, gen_tensor_dict=True)
    print(f'inputs: {tf_rep.inputs}')
    print(f'outputs: {tf_rep.outputs}')
    print(f'tensor_dict: {tf_rep.tensor_dict}')
    # export_graph writes a TensorFlow SavedModel directory, not a .tflite file
    tf_rep.export_graph(tflite_model_path)

    saved_model_path = tflite_model_path
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
    tensorflowlite_path = "model_opt.tflite"
    converter.experimental_options = {"emit_select_tf_ops": True}
    converter.allow_custom_ops = True

    tflite_model = converter.convert()
    with open(tensorflowlite_path, 'wb') as f:
        f.write(tflite_model)
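As far as I can tell, experimental_options is not a documented TFLiteConverter attribute; the documented way to request the flex (Select TF) runtime from Python is target_spec.supported_ops. A minimal sketch of that configuration, reusing saved_model_path from above:

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
# TFLITE_BUILTINS keeps natively supported ops; SELECT_TF_OPS falls back to
# full TF kernels for ops like tf.Einsum and tf.Erf. This is the Python-side
# counterpart of the -emit-select-tf-ops flag mentioned in the error below.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()
with open("model_opt.tflite", "wb") as f:
    f.write(tflite_model)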

Before adding converter.allow_custom_ops = True, I used to get this error:


loc(callsite("while_2/All@__inference_while_2_body_5789_501" at fused["while_2@__inference_call__3587", "StatefulPartitionedCall/while_2"])): error: 'tf.All' op is neither a custom op nor a flex op
error: failed while converting: 'main':
Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
    tf.AddV2 (device = "")
    tf.All (device = "", keep_dims = false)
    tf.Einsum (device = "", equation = "bhls,bshd->blhd")
    tf.Erf (device = "")
    tf.StridedSlice (begin_mask = … : i64, device = "", ellipsis_mask = … : i64, end_mask = 0 : i64, new_axis_mask = 0 : i64, shrink_axis_mask = 1 : i64)
Ops that need custom implementation (enabled via setting the -emit-custom-ops flag):
    tf.TensorScatterUpdate (device = "")
ConverterError

So I enabled converter.allow_custom_ops = True, but when I try to run inference, I get this error:

return self._interpreter.AllocateTensors()

RuntimeError: Encountered unresolved custom op: AddV2. Node number 0 (AddV2) failed to prepare. Node number 2 (WHILE) failed to prepare.
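For context: allow_custom_ops only marks the unsupported ops as custom ops inside the .tflite file, and nothing registers an AddV2 kernel at runtime, hence the failure above. A model converted with SELECT_TF_OPS instead needs a runtime that bundles the Flex delegate; the interpreter in the full tensorflow pip package does, while the slim tflite_runtime package does not. A minimal inference sketch under those assumptions:

import numpy as np
import tensorflow as tf  # full package; bundles the Flex delegate

interpreter = tf.lite.Interpreter(model_path="model_opt.tflite")
interpreter.allocate_tensors()  # this is the call that currently fails
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

dummy = np.random.randn(1, 6, 2).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))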

Can you help me with this issue?