asus4 / tf-lite-unity-sample

TensorFlow Lite Samples on Unity

Custom model in SSD throws exception: TensorFlowLite operation failed #312

Closed · Pypow closed 10 months ago

Pypow commented 10 months ago

Environment

Bug: After replacing the default model and labels in the SSD scene with a custom model (Netron screenshot below), I get the following exception:

Exception: TensorFlowLite operation failed.
TensorFlowLite.Interpreter.ThrowIfError (TensorFlowLite.Interpreter+Status status) (at Packages/com.github.asus4.tflite/Runtime/Interpreter.cs:222)
TensorFlowLite.Interpreter.GetOutputTensorData (System.Int32 outputTensorIndex, System.Array outputTensorData) (at Packages/com.github.asus4.tflite/Runtime/Interpreter.cs:159)
TensorFlowLite.SSD.Invoke (UnityEngine.Texture inputTex) (at Assets/Samples/SSD/SSD.cs:63)
SsdSample.Invoke (UnityEngine.Texture texture) (at Assets/Samples/SSD/SsdSample.cs:77)
UnityEngine.Events.InvokableCall`1[T1].Invoke (T1 args0) (at <e8a406da998549af9a2680936c7da25a>:0)
UnityEngine.Events.UnityEvent`1[T0].Invoke (T0 arg0) (at <e8a406da998549af9a2680936c7da25a>:0)
TensorFlowLite.WebCamInput.Update () (at Packages/com.github.asus4.tflite.common/Runtime/WebCamInput.cs:75)

The model is built from a custom dataset and has a uint8 input and float32 outputs.

This issue seems similar to #178; however, I am getting an empty output data structure, and I am on Windows.

Edit 1: I have changed the Python and C# code to match max_detections = 10, but I still get the same exception.
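Matching max_detections on both sides matters because the C# side pre-allocates fixed-size output arrays, and the copy in GetOutputTensorData fails when the tensor and the destination disagree in size. A minimal Python sketch of that size check; the shapes here are an assumption based on the usual TFLite_Detection_PostProcess layout (boxes, classes, scores, count), not read from the attached Model.zip:

```python
from math import prod

MAX_DETECTIONS = 10  # must match the value used when exporting the model

# Assumed output shapes for the standard SSD detection post-process op.
EXPECTED_SHAPES = {
    "boxes":   (1, MAX_DETECTIONS, 4),
    "classes": (1, MAX_DETECTIONS),
    "scores":  (1, MAX_DETECTIONS),
    "count":   (1,),
}

def buffer_matches(dest_len: int, shape: tuple) -> bool:
    """True if a pre-allocated destination array can hold the whole tensor.

    Mirrors the size validation that makes the native GetOutputTensorData
    call return an error status on mismatch.
    """
    return dest_len == prod(shape)

# Buffers sized for max_detections = 10 fit:
print(buffer_matches(10 * 4, EXPECTED_SHAPES["boxes"]))   # True
# A buffer sized for a different detection count does not:
print(buffer_matches(25, EXPECTED_SHAPES["scores"]))      # False
```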

Version: 2.12.0

Input [0]: name: serving_default_images:0, type: UInt8, dimensions: [1,320,320,3], quantizationParams: {scale: 0.0078125 zeroPoint: 127}

Output [0]: name: StatefulPartitionedCall:1, type: Float32, dimensions: [], quantizationParams: {scale: 0 zeroPoint: 0}
Output [1]: name: StatefulPartitionedCall:3, type: Float32, dimensions: [], quantizationParams: {scale: 0 zeroPoint: 0}
Output [2]: name: StatefulPartitionedCall:0, type: Float32, dimensions: [], quantizationParams: {scale: 0 zeroPoint: 0}
Output [3]: name: StatefulPartitionedCall:2, type: Float32, dimensions: [], quantizationParams: {scale: 0 zeroPoint: 0}
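The empty `dimensions: []` entries above are the telltale sign of dynamic output shapes: the interpreter cannot report a tensor size before inference, so a fixed-size destination array cannot be validated against it. A small self-contained sketch classifying the dump; the entries are hand-copied from above (with real TensorFlow you would read them via `tf.lite.Interpreter.get_output_details()`, which this only mimics):

```python
# Output entries hand-copied from the interpreter dump above.
outputs = [
    {"name": "StatefulPartitionedCall:1", "dims": []},
    {"name": "StatefulPartitionedCall:3", "dims": []},
    {"name": "StatefulPartitionedCall:0", "dims": []},
    {"name": "StatefulPartitionedCall:2", "dims": []},
]

def is_dynamic(dims):
    """A tensor shape is dynamic if it has no dims yet, or any dim is unknown (-1)."""
    return len(dims) == 0 or any(d < 0 for d in dims)

dynamic = [o["name"] for o in outputs if is_dynamic(o["dims"])]
print(dynamic)  # all four outputs are dynamic, hence the failed copy
```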

Additional context: The error is thrown from GetOutputTensorData() in Interpreter.cs.

Model (Edit 1: updated): Model.zip

nitish11 commented 10 months ago

@asus4 Thanks for the great work.

I am also facing the same issue with a custom model. Could you please advise whether there is any known issue or limitation?

asus4 commented 10 months ago

Hi @Pypow @nitish11

  1. Your model looks like a dynamic-shape model. I recommend disabling that option, since the GPU delegate does not support dynamic shapes.
  2. I needed to modify the output order in SSD.cs to match your model; see the details below.
  3. However, the accuracy seemed low. Please confirm whether the input tensor requires a scale/offset.
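On point 3: the input quantization parameters in the dump (scale 0.0078125 = 1/128, zeroPoint 127) imply the model maps raw uint8 pixels to roughly [-0.992, 1.0], i.e. it effectively expects pixels normalized to [-1, 1]. A quick check of the arithmetic, using the standard TFLite affine quantization formula `real = scale * (q - zero_point)`:

```python
SCALE = 0.0078125   # 1/128, from the input tensor dump above
ZERO_POINT = 127

def dequantize(q: int) -> float:
    """real = scale * (q - zero_point), the TFLite affine mapping."""
    return SCALE * (q - ZERO_POINT)

def quantize(real: float) -> int:
    """Inverse mapping, clamped to the uint8 range."""
    q = round(real / SCALE) + ZERO_POINT
    return max(0, min(255, q))

# A raw pixel of 255 maps to 1.0 and 127 maps to 0.0:
print(dequantize(255))  # 1.0
print(dequantize(0))    # -0.9921875
print(quantize(0.0))    # 127
```

If the camera texture is fed in as raw 0..255 bytes, this mapping is applied inside the interpreter; feeding pre-normalized values on top of it would double-scale the input and could explain low accuracy.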

Here's what I've changed and confirmed working without exception:

Screenshot: Screenshot 2023-08-31 at 18 16 28

Patch file: tf-lite-unity-sample-18-16-15.patch

Pypow commented 10 months ago

Hi @asus4, thank you for the clear explanation; it worked like a charm!

Closing the issue.