ARM-software / armnn

Arm NN ML Software. The code here is a read-only mirror of https://review.mlplatform.org/admin/repos/ml/armnn
https://developer.arm.com/products/processors/machine-learning/arm-nn
MIT License

Operator CUSTOM [32] is not supported by armnn_delegate #662

Closed: liamsun2019 closed this issue 2 years ago

liamsun2019 commented 2 years ago

I am getting an error when running the ObjectDetection sample code with:

LD_LIBRARY_PATH=/home2/liam/ArmNN-linux-x86_64/:/home2/liam/armnn-devenv/armnn/samples/ObjectDetection/build/lib/ ./object_detection_example --label-path /home2/liam/armnn-devenv/ML-zoo/models/object_detection/ssd_mobilenet_v1/tflite_int8/labelmapping.txt --video-file-path test.mp4 --model-file-path /home2/liam/armnn-devenv/ML-zoo/models/object_detection/ssd_mobilenet_v1/tflite_int8/ssd_mobilenet_v1.tflite --model-name SSD_MOBILE --output-video-file-path result.mp4 --preferred-backends CpuRef

ERROR: Operator CUSTOM [32] is not supported by armnn_delegate

where ArmNN-linux-x86_64 comes from the 22.05 release at https://github.com/ARM-software/armnn/releases/, and ssd_mobilenet_v1.tflite comes from https://github.com/ARM-software/ML-zoo. I thought this model had been verified, but my test does not show that.

Please refer to the attachment for model and label file. Thanks.

test.zip

catcor01 commented 2 years ago

Hi @liamsun2019,

Currently, the armnn_delegate supports two custom operators: AveragePool3D and MaxPool3D. These operators exist in TensorFlow versions after 2.5, and support for them was added as part of our move to upgrade TensorFlow to v2.8 in ArmNN.

The custom operator used in ssd_mobilenet_v1 is TFLite_Detection_PostProcess, which is not supported by armnn_delegate, so we do not fully support the ssd_mobilenet_v1 model with the delegate. However, the model should still run end to end, with the unsupported custom operator falling back to Google's TF Lite runtime implementation. Our TfLite parser does fully support the ssd_mobilenet_v1 model.
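For reference, the delegate-with-fallback setup Cathal describes can be sketched via the TF Lite Python API. This is a minimal illustration, not an official recipe: the library path, delegate options, and model path below are assumptions that depend on your install.

```python
# Sketch: run a .tflite model through the ArmNN delegate; any operator the
# delegate rejects (e.g. TFLite_Detection_PostProcess) automatically falls
# back to the stock TF Lite kernels. File paths here are illustrative.
import numpy as np
import tflite_runtime.interpreter as tflite

# Load the ArmNN delegate shared library with preferred backends.
armnn_delegate = tflite.load_delegate(
    "libarmnnDelegate.so",
    options={"backends": "CpuAcc,CpuRef", "logging-severity": "info"})

interpreter = tflite.Interpreter(
    model_path="ssd_mobilenet_v1.tflite",
    experimental_delegates=[armnn_delegate])
interpreter.allocate_tensors()

# Feed a zero tensor of the right shape/dtype and run one inference.
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"],
                       np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
out = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
```

With this setup, ArmNN executes every operator it supports and the TF Lite runtime picks up the rest, which is why the sample's hard failure on the CUSTOM op is avoidable in plain Python.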

Kind Regards, Cathal.

liamsun2019 commented 2 years ago

Hi @catcor01, thanks for your comment. Per your suggestions, my understanding is that two options are applicable: a. Use the TfLite parser C++ APIs to run inference for this model. b. Use the TF Lite runtime, i.e. purely Python APIs.
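Option (a) can also be sketched from Python through the PyArmNN bindings, which wrap the same ITfLiteParser and IRuntime C++ APIs; this is a rough sketch under the assumption that PyArmNN matching your ArmNN build is installed, and the model path is illustrative.

```python
# Sketch of the parser route (option a) using PyArmNN, which mirrors the
# armnnTfLiteParser / IRuntime C++ APIs. Model path is an assumption.
import pyarmnn as ann

# Parse the TfLite model into an ArmNN network.
parser = ann.ITfLiteParser()
network = parser.CreateNetworkFromBinaryFile("ssd_mobilenet_v1.tflite")

# Create a runtime and optimize for the preferred backends, with CpuRef
# as the reference fallback.
options = ann.CreationOptions()
runtime = ann.IRuntime(options)
preferred = [ann.BackendId("CpuAcc"), ann.BackendId("CpuRef")]
opt_network, messages = ann.Optimize(network, preferred,
                                     runtime.GetDeviceSpec(),
                                     ann.OptimizerOptions())

# Load the optimized network; net_id is then used with EnqueueWorkload.
net_id, _ = runtime.LoadNetwork(opt_network)
```

Unlike the delegate route, the parser route has no TF Lite fallback: every operator in the graph must be one the parser and chosen backends support.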

BTW, is there any way I could locate the unsupported custom OPs?

catcor01 commented 2 years ago

A third option is what you have done already: using the armnn_delegate with only the unsupported operator falling back to the TF Lite runtime (all other ops running through ArmNN). We generally recommend, and are making a push towards, using the delegate over the parser in ArmNN. I will create a ticket to add the TFLite_Detection_PostProcess operator to the delegate, which will hopefully get implemented in the near future.

Our documentation does not specify which operations are not supported; however, you can find the armnn_delegate operators that are supported here. Unfortunately, the documentation also does not specify which operators are custom, so you would need to look into our codebase here to see which custom operators the delegate supports.
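One way to locate the custom operators in a model yourself is to read its operator codes straight out of the flatbuffer; the [32] in the error message is the value of the CUSTOM builtin code in the TFLite schema. This is a sketch assuming the community `tflite` schema package (pip install tflite) and an illustrative model path.

```python
# Sketch: list the custom operators declared in a .tflite model using the
# `tflite` flatbuffer-schema package. BuiltinOperator.CUSTOM is 32 -- the
# number shown in the "Operator CUSTOM [32]" error. Path is illustrative.
import tflite

with open("ssd_mobilenet_v1.tflite", "rb") as f:
    model = tflite.Model.GetRootAsModel(f.read(), 0)

# Every operator in the graph references one of these operator codes;
# custom ops carry their name in the custom_code field.
for i in range(model.OperatorCodesLength()):
    code = model.OperatorCodes(i)
    if code.BuiltinCode() == tflite.BuiltinOperator.CUSTOM:
        print("custom op:", code.CustomCode().decode("utf-8"))
```

For this ssd_mobilenet_v1 model, the only custom operator listed should be TFLite_Detection_PostProcess, matching the earlier diagnosis.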

Kind Regards, Cathal.

catcor01 commented 2 years ago

Hi @liamsun2019,

I've added a ticket for these operators to our backlog; however, there is no ETA at the moment for when they will be implemented. As we're going to track these internally, I'm going to close this issue, but we'll update it if and when support for these operators has been added.

If you would like to help implement these operators, here is a link to our contributor guide, which shows how to submit a patch to us. If you have any questions, please feel free to create a new issue. Thank you!

https://www.mlplatform.org/contributing

Kind Regards, Cathal.

liamsun2019 commented 2 years ago

Hi @catcor01, Big thanks for your helpful suggestions. Looking forward to your implementation in the near future.

B.R Liam