ARM-software / armnn

Arm NN ML Software. The code here is a read-only mirror of https://review.mlplatform.org/admin/repos/ml/armnn
https://developer.arm.com/products/processors/machine-learning/arm-nn
MIT License

Pyarmnn skipping TFLite parser import #543

Closed krn-sharma closed 1 year ago

krn-sharma commented 3 years ago

[screenshot: console output showing "Skipped ITfParser import"]

I need the TFLite parser.

I have installed PyArmNN using this guide.

It's an Odroid board running Ubuntu 20.04. CPU info: [screenshot]

MikeJKelly commented 3 years ago

Hi @krn-sharma

Can you clarify what the issue is here? From the screenshots attached the tool skips TF parser support (ITfParser) but not TFLite parser support (that would be ITfLiteParser).

Best regards, Mike

krn-sharma commented 3 years ago

Ok, so I thought both were the same, because when running apt-cache search libarmnn, only tfliteparser shows in the output, not tfparser. The guide states that installing via sudo apt-get will install the TensorFlow Lite parser.

But when running the sample PyArmNN example, it says it does not recognize from_output_tensor:

output, output_tensor_info = ann.from_output_tensor(output_tensors[0][1])

FrancisMurtagh-arm commented 3 years ago

Hi @krn-sharma,

The guide you linked does not install TfParser as we have not packaged it due to a lack of suitable libprotobuf dependency.

The from_output_tensor() method is no longer available in PyArmNN.

Instead, could you try using: results = ann.workload_tensors_to_ndarray(output_tensors)

Regards, Francis.
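For anyone landing here later, a minimal sketch of the replacement, assuming the runtime, net_id, and tensor lists were set up as in the PyArmNN example script (the helper name run_and_collect is mine, not from the thread):

```python
def run_and_collect(runtime, net_id, input_tensors, output_tensors):
    """Run one inference and return the outputs as numpy ndarrays.

    Sketch only: replaces the removed ann.from_output_tensor() call
    with ann.workload_tensors_to_ndarray() as suggested above.
    """
    import pyarmnn as ann  # deferred so this file loads without Arm NN installed

    # output_tensors is the list of (binding id, tensor) pairs produced
    # by ann.make_output_tensors(...) in the original example script.
    runtime.EnqueueWorkload(net_id, input_tensors, output_tensors)

    # Old, removed API:
    #   output, info = ann.from_output_tensor(output_tensors[0][1])
    # Current API: convert all output tensors in one call.
    return ann.workload_tensors_to_ndarray(output_tensors)
```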

krn-sharma commented 3 years ago

Thanks for your help. It is working now. I have one more question. Does setting the backends with preferredBackends = [ann.BackendId('GpuAcc')] make it run strictly on the GPU only?

FrancisMurtagh-arm commented 3 years ago

Hi @krn-sharma,

Yes, that's correct. You can verify it by passing an empty list of preferredBackends; it will complain about having nothing to run on.

Regards, Francis.
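As a sketch of the two configurations discussed above (the helper name and the fallback ordering are illustrative, not from the thread):

```python
def choose_backends(gpu_only=True):
    """Build a preferredBackends list for ann.Optimize().

    With only GpuAcc in the list, Arm NN runs strictly on the GPU, and
    optimization fails for layers the GPU backend cannot handle; adding
    CPU backends afterwards gives it per-layer fallbacks instead.
    """
    import pyarmnn as ann  # deferred so this file loads without Arm NN installed
    if gpu_only:
        return [ann.BackendId('GpuAcc')]  # strict GPU-only, as confirmed above
    # Ordered preference: GPU first, then Neon-accelerated CPU, then reference CPU.
    return [ann.BackendId('GpuAcc'),
            ann.BackendId('CpuAcc'),
            ann.BackendId('CpuRef')]
```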

krn-sharma commented 3 years ago

Great, thanks for the help.

krn-sharma commented 2 years ago

Hello @FrancisMurtagh-arm, I am again facing some issues with the PyArmNN APIs. The same script was working fine with version 25.0.0, but with 27.0.0 it raises an error. Can you help me?

Your ArmNN library instance does not support Onnx models parser functionality. Skipped IOnnxParser import. Working with ARMNN 27.0.0

tensor id: 0, tensor info: TensorInfo{DataType: 1, IsQuantized: 0, QuantizationScale: 0.000000, QuantizationOffset: 0, IsConstant: 1, NumDimensions: 4, NumElements: 150528}

Traceback (most recent call last):
  File "predict_pyarmnn.py", line 35
    opt_network, messages = ann.Optimize(network, preferredBackends, runtime.GetDeviceSpec(), ann.OptimizerOptions())
  File "/usr/lib/python3/dist-packages/pyarmnn/_generated/pyarmnn.py", line 3678, in Optimize
    return _pyarmnn.Optimize(*args)
RuntimeError: Unspecified dimension while using ShapeInferenceMethod::ValidateOnly

FrancisMurtagh-arm commented 2 years ago

Hi @krn-sharma,

Just to confirm are you using the fire_detection.tflite provided in the example?

Are you running on a Raspberry Pi with Neon backend set as preferred?

I tried on my local x86 machine with Ubuntu 20.04 and couldn't reproduce:

python3 ./predict_pyarmnn.py --image fire.jpg
Your ArmNN library instance does not support Onnx models parser functionality.  Skipped IOnnxParser import.
Working with ARMNN 27.0.0
(128, 128, 3)

tensor id: 15616, 
tensor info: TensorInfo{DataType: 1, IsQuantized: 0, QuantizationScale: 0.000000, QuantizationOffset: 0, IsConstant: 1, NumDimensions: 4, NumElements: 49152}

Loaded network, id=0
Elapsed time is  107.8913549426943 ms
[array([[3.4735596e-08, 1.0000000e+00]], dtype=float32)]
Fire

Could you try applying the diff to predict_pyarmnn.py and see if enabling shape inference helps? diff.txt

Regards, Francis.
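The diff itself is not inlined in the thread, but enabling shape inference in PyArmNN looks roughly like the following. The attribute m_shapeInferenceMethod and the constant ShapeInferenceMethod_InferAndValidate mirror the C++ API; treat the exact names as assumptions for your PyArmNN version:

```python
def make_optimizer_options():
    """Build OptimizerOptions with shape inference enabled (sketch).

    The default, ShapeInferenceMethod::ValidateOnly, raises
    "Unspecified dimension" on networks with dynamic shapes;
    InferAndValidate infers the missing dimensions instead.
    """
    import pyarmnn as ann  # deferred so this file loads without Arm NN installed
    opts = ann.OptimizerOptions()
    # Assumed attribute/constant names, mirroring the C++ OptimizerOptions.
    opts.m_shapeInferenceMethod = ann.ShapeInferenceMethod_InferAndValidate
    return opts
```

The resulting options object would then be passed as the last argument to ann.Optimize() in place of the bare ann.OptimizerOptions() from the traceback above.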

krn-sharma commented 2 years ago

@FrancisMurtagh-arm Thanks for your reply.

  1. fire_detection.tflite is working fine. I am running an ImageNet-trained TFLite model.

  2. I am running it on an Odroid XU4 with CpuAcc.

  3. I created the TFLite model using the following code: [screenshot]

  4. I am getting a new error after enabling shape inference:

[screenshot: error output]

FrancisMurtagh-arm commented 2 years ago

Hi @krn-sharma,

I created the above model with the code provided; this is just an image from netron.app, as the model is too large to attach here. vgg16.tflite.png

I then altered the script to enable shape inference as mentioned, and also changed the OpenCV resizing to 224x224 to suit the model. diff2.txt

Then called the script on this image: fire.jpg

python3 ./predict_pyarmnn.py --image fire.jpg > output.txt

This is the resulting output: output.txt

Could you try with the above and see if there is any difference?

Thanks, Francis.
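The resize change from diff2.txt is not shown inline; as a stand-in, here is a dependency-free preprocessing sketch (nearest-neighbour resampling in place of the cv2.resize call the actual script uses):

```python
import numpy as np

def preprocess(image, side=224):
    """Resize an HxWxC image to side x side (nearest-neighbour),
    cast to float32, and add a batch dimension, matching the
    224x224 input the VGG16 model above expects.
    """
    h, w = image.shape[:2]
    rows = np.arange(side) * h // side   # source row for each output row
    cols = np.arange(side) * w // side   # source column for each output column
    resized = image[rows][:, cols].astype(np.float32)
    return np.expand_dims(resized, axis=0)  # shape: (1, side, side, C)
```

Any further normalization (mean subtraction, scaling to [0, 1]) would depend on how the model was trained, so it is omitted here.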

krn-sharma commented 2 years ago

Thank you for your solution, it worked. Now I am facing a different issue with this model.

Your ArmNN library instance does not support Onnx models parser functionality. Skipped IOnnxParser import. Working with ARMNN 27.0.0

tensor id: 0, tensor info: TensorInfo{DataType: 1, IsQuantized: 0, QuantizationScale: 0.000000, QuantizationOffset: 0, IsConstant: 1, NumDimensions: 4, NumElements: 307200}

Traceback (most recent call last):
  File "predict_pyarmnn.py", line 92
    opt_network, messages = ann.Optimize(network, preferredBackends, runtime.GetDeviceSpec(), optimizerOptions)
  File "/usr/lib/python3/dist-packages/pyarmnn/_generated/pyarmnn.py", line 3678, in Optimize
    return _pyarmnn.Optimize(*args)
RuntimeError: Failed to assign a backend to each layer

MikeJKelly commented 1 year ago

Hi @krn-sharma

I only recently noticed the problem you were having in your last comment here; there was a bug when converting constants with per-axis quantization, as used in that model. You've probably moved on, but if not, can you check whether the fix works for you?

https://review.mlplatform.org/c/ml/armnn/+/8824

If you have any issues please let me know.

Best regards, Mike.

FrancisMurtagh-arm commented 1 year ago

Closing due to lack of activity, please reopen if the issue persists.

Regards, Francis.