PINTO0309 / openvino2tensorflow

This script converts ONNX/OpenVINO IR models to TensorFlow's saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX, and pb formats. Typical flow: PyTorch (NCHW) -> ONNX (NCHW) -> OpenVINO (NCHW) -> openvino2tensorflow -> TensorFlow/Keras (NHWC/NCHW) -> TFLite (NHWC/NCHW). It also handles conversion from .pb to saved_model, saved_model to .pb, .pb to .tflite, saved_model to .tflite, and saved_model to onnx. Supports building the environment with Docker, including direct access to the host PC's GUI and camera to verify operation. NVIDIA GPU (dGPU) and Intel iHD GPU (iGPU) are supported.
MIT License

Trouble converting vehicle-detection-0200 #79

Closed hesamsheikh closed 2 years ago

hesamsheikh commented 2 years ago

I'm converting the OpenVINO vehicle-detection-0200 model, but the last DetectionOutput layer is not implemented. I changed the xml to this:

I removed the last two layers of the original model and didn't change the .bin weights file. I give the new xml to openvino2tensorflow.py and it runs with no errors, but the model it produces only has the input layer and is less than 100 KB. Did I do something wrong? cp.txt

PINTO0309 commented 2 years ago

Perhaps you are editing it incorrectly. I can't see anything that looks like the xml file you posted, but my guess is that you just haven't reconnected the Result layer.
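
If editing the IR by hand proves error-prone, the reconnection can also be scripted. The snippet below is a rough, untested sketch that assumes the IR v10 layout (a <net> root holding <layers> and <edges>); the file names are illustrative, and the xml PINTO0309 attaches further down is the authoritative fix. It gives every tensor that fed DetectionOutput its own Result layer (including the prior-box input, which you may want to remove by hand) and then drops DetectionOutput, its old Result consumer, and their edges.

import copy
import xml.etree.ElementTree as ET

tree = ET.parse('vehicle-detection-0200.xml')          # path is an assumption
net = tree.getroot()
layers = net.find('layers')
edges = net.find('edges')

det = next(l for l in layers if l.get('type') == 'DetectionOutput')
det_id = det.get('id')
in_edges = [e for e in edges if e.get('to-layer') == det_id]
next_id = max(int(l.get('id')) for l in layers) + 1

for i, e in enumerate(in_edges):
    src = next(l for l in layers if l.get('id') == e.get('from-layer'))
    src_port = next(p for p in src.find('output') if p.get('id') == e.get('from-port'))

    # New Result layer terminating the tensor that used to feed DetectionOutput
    result = ET.SubElement(layers, 'layer',
                           {'id': str(next_id + i), 'name': 'result_%d' % i,
                            'type': 'Result', 'version': 'opset1'})
    inp = ET.SubElement(result, 'input')
    port = copy.deepcopy(src_port)                     # reuse the producer's shape/precision
    port.set('id', '0')
    inp.append(port)
    ET.SubElement(edges, 'edge',
                  {'from-layer': e.get('from-layer'), 'from-port': e.get('from-port'),
                   'to-layer': str(next_id + i), 'to-port': '0'})

# Remove DetectionOutput, the old Result that consumed it, and every edge touching it
old_results = {e.get('to-layer') for e in edges if e.get('from-layer') == det_id}
for l in list(layers):
    if l.get('id') == det_id or (l.get('id') in old_results and l.get('type') == 'Result'):
        layers.remove(l)
for e in list(edges):
    if det_id in (e.get('from-layer'), e.get('to-layer')):
        edges.remove(e)

tree.write('vehicle-detection-0200_cut.xml')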

hesamsheikh commented 2 years ago

I replaced the xml with a txt file so you can view it.

PINTO0309 commented 2 years ago

There is no Result layer. (Screenshots: 2021-11-17 22:34:56, 2021-11-17 22:35:07)

PINTO0309 commented 2 years ago

vehicle-detection-0200.xml.zip

PINTO0309 commented 2 years ago

https://github.com/PINTO0309/PINTO_model_zoo/tree/main/178_vehicle-detection-0200

PINTO0309 commented 2 years ago

https://github.com/PINTO0309/openvino2tensorflow/releases/tag/v1.25.3 https://github.com/PINTO0309/openvino2tensorflow/releases/tag/v1.25.4

hesamsheikh commented 2 years ago

https://github.com/PINTO0309/PINTO_model_zoo/tree/main/178_vehicle-detection-0200

Thanks for the effort, though it seems the saved_model you have provided has some problems. It is loaded as a '_UserObject' object, and normal functions like predict and summary cannot be applied to it.

PINTO0309 commented 2 years ago
>>> import tensorflow as tf
>>> import numpy as np
>>> loaded = tf.saved_model.load('saved_model_256x256')
>>> infer = loaded.signatures["serving_default"]
>>> infer(image=np.ones((1,256,256,3),dtype=np.float32))
2021-11-20 21:36:29.388833: I tensorflow/stream_executor/cuda/cuda_dnn.cc:369] Loaded cuDNN version 8200
{'tf.identity': <tf.Tensor: shape=(1, 5376), dtype=float32, numpy=
array([[-0.6882432 , -0.92401135,  0.3261156 , ..., -0.20971192,
        -0.18629664, -0.2545644 ]], dtype=float32)>, 'tf.identity_1': <tf.Tensor: shape=(1, 2688), dtype=float32, numpy=
array([[3.9097137e-04, 9.9960905e-01, 8.8063441e-04, ..., 9.9621165e-01,
        3.4444174e-03, 9.9655557e-01]], dtype=float32)>}
$ saved_model_cli show --dir saved_model_256x256/ --all

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is: 

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['image'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 256, 256, 3)
        name: serving_default_image:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['tf.identity'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 5376)
        name: StatefulPartitionedCall:0
    outputs['tf.identity_1'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 2688)
        name: StatefulPartitionedCall:1
  Method name is: tensorflow/serving/predict

Defined Functions:
  Function Name: '__call__'
    Option #1
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(1, 256, 256, 3), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #2
      Callable with:
        Argument #1
          image: TensorSpec(shape=(1, 256, 256, 3), dtype=tf.float32, name='image')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #3
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(1, 256, 256, 3), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #4
      Callable with:
        Argument #1
          image: TensorSpec(shape=(1, 256, 256, 3), dtype=tf.float32, name='image')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None

  Function Name: '_default_save_signature'
    Option #1
      Callable with:
        Argument #1
          image: TensorSpec(shape=(1, 256, 256, 3), dtype=tf.float32, name='image')

  Function Name: 'call_and_return_all_conditional_losses'
    Option #1
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(1, 256, 256, 3), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #2
      Callable with:
        Argument #1
          image: TensorSpec(shape=(1, 256, 256, 3), dtype=tf.float32, name='image')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #3
      Callable with:
        Argument #1
          image: TensorSpec(shape=(1, 256, 256, 3), dtype=tf.float32, name='image')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #4
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(1, 256, 256, 3), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
hesamsheikh commented 2 years ago

Method name is: tensorflow/serving/predict

Thanks. Why doesn't this saved_model have the ordinary attributes of a TensorFlow model, things like summary, train, compile...? Is there a workaround for retraining this model?
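
For context: a SavedModel exported at the tf.saved_model level only restores the traced signatures and variables, not the original Keras Model object, which is why it shows up as '_UserObject' without summary/compile. A quick way to confirm this, reusing the directory name from the comment above (TF 2.x assumed):

import tensorflow as tf

loaded = tf.saved_model.load('saved_model_256x256')
print(type(loaded).__name__)                # a restored '_UserObject'/AutoTrackable, not a Keras Model
print(list(loaded.signatures.keys()))       # ['serving_default'] -- only the traced serving function survives
print(isinstance(loaded, tf.keras.Model))   # False: summary(), compile(), fit() are unavailable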

hesamsheikh commented 2 years ago

I fixed the problem by converting the model to h5. It seems to be okay.
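
For anyone else needing the retraining route: once the converter has also emitted a Keras .h5 file, it can be loaded back as a regular Keras model. A minimal sketch, assuming the file name below and that the model loads without custom objects:

import tensorflow as tf

# File name is an assumption -- use whatever .h5 the converter actually wrote
model = tf.keras.models.load_model('model_float32.h5', compile=False)
model.summary()                              # the usual Keras attributes are available again
model.compile(optimizer='adam', loss='mse')  # loss shown only as a placeholder
# model.fit(...) can now be used for fine-tuning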

hesamsheikh commented 2 years ago

Is the DetectionOutput layer going to be implemented anytime soon? The current workaround places a considerable amount of computation on the CPU.

Thanks @PINTO0309
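
For reference, the work DetectionOutput normally does (SSD-style box decoding plus NMS) can be reproduced outside the model on the two raw outputs shown earlier, the (1, 5376) box deltas and (1, 2688) scores. The sketch below is a hypothetical post-processing helper: the prior boxes, variances, and score layout are assumptions and must be matched to the original OpenVINO PriorBox settings.

import tensorflow as tf

def decode_and_nms(box_deltas, scores, priors,
                   score_thresh=0.5, iou_thresh=0.45, max_dets=200):
    """Hypothetical SSD post-processing for the vehicle-detection-0200 raw outputs.

    box_deltas: (1, 5376) -> 1344 anchors x 4 offsets (dx, dy, dw, dh)      -- assumed layout
    scores:     (1, 2688) -> 1344 anchors x 2 classes (background, vehicle) -- assumed layout
    priors:     (1344, 4) prior boxes as (cx, cy, w, h), normalized -- must match the IR's PriorBox
    """
    deltas = tf.reshape(box_deltas, (-1, 4))
    conf = tf.reshape(scores, (-1, 2))[:, 1]                 # vehicle confidence
    # Standard SSD decoding; variances 0.1 / 0.2 are an assumption
    cx = priors[:, 0] + deltas[:, 0] * 0.1 * priors[:, 2]
    cy = priors[:, 1] + deltas[:, 1] * 0.1 * priors[:, 3]
    w = priors[:, 2] * tf.exp(deltas[:, 2] * 0.2)
    h = priors[:, 3] * tf.exp(deltas[:, 3] * 0.2)
    boxes = tf.stack([cy - h / 2, cx - w / 2, cy + h / 2, cx + w / 2], axis=1)  # y1, x1, y2, x2
    keep = tf.image.non_max_suppression(boxes, conf, max_output_size=max_dets,
                                        iou_threshold=iou_thresh,
                                        score_threshold=score_thresh)
    return tf.gather(boxes, keep), tf.gather(conf, keep)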