Closed. krn-sharma closed this issue 1 year ago.
Hi @krn-sharma
Can you clarify what the issue is here? From the screenshots attached the tool skips TF parser support (ITfParser) but not TFLite parser support (that would be ITfLiteParser).
Best regards, Mike
OK, so I thought both were the same, because when running apt-cache search libarmnn, only the TFLite parser shows up in the output, not the TF parser.
In the guide, it states that installing via sudo apt-get will install the TensorFlow Lite parser.
But when running the PyArmNN sample example, it says it does not recognize from_output_tensor:
output, output_tensor_info = ann.from_output_tensor(output_tensors[0][1])
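For context on the indexing in the failing line: in PyArmNN, output_tensors is a list of (tensor id, tensor) pairs, so output_tensors[0][1] selects the tensor object of the first output. A plain-Python sketch of that structure (the id and placeholder value are illustrative, not real PyArmNN objects):

```python
# Illustrative sketch (plain Python, not the PyArmNN API): output_tensors
# is a list of (tensor_id, tensor) pairs, so [0][1] picks the first
# output's tensor object.
output_tensors = [(15616, "tensor-object-placeholder")]  # illustrative pair
tensor_id, tensor = output_tensors[0]
print(tensor_id)             # 15616
print(output_tensors[0][1])  # tensor-object-placeholder
```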
Hi @krn-sharma,
The guide you linked does not install TfParser, as we have not packaged it due to the lack of a suitable libprotobuf dependency.
The from_output_tensor() method is no longer available in PyArmNN.
Instead could you try using:
results = ann.workload_tensors_to_ndarray(output_tensors)
Regards, Francis.
Thanks for your help.
It is working now. I have one more question.
Does setting the backends with preferredBackends = [ann.BackendId('GpuAcc')]
make it run strictly on the GPU only?
Hi @krn-sharma,
Yes, that's correct. You can prove it by passing an empty list of preferredBackends; it will complain about having nothing to run on.
Regards, Francis.
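The behaviour described above can be illustrated in plain Python (this is a sketch of the selection semantics, not the actual ArmNN implementation): the runtime walks the preferred-backend list in order, and an empty list leaves nothing to run on.

```python
# Illustrative sketch (plain Python, not the ArmNN implementation): how a
# preferred-backend list is resolved. Backends are tried in order, and an
# empty list leaves nothing to run on.
def resolve_backend(preferred, available):
    """Return the first preferred backend that is actually available."""
    for backend in preferred:
        if backend in available:
            return backend
    raise RuntimeError("No preferred backend is available to run on")

# A device with only CPU backends: GpuAcc is tried first, then CpuAcc.
print(resolve_backend(["GpuAcc", "CpuAcc"], {"CpuAcc", "CpuRef"}))  # CpuAcc
```

With a single-entry list such as ["GpuAcc"], there is no fallback entry, which is why execution is restricted to that one backend.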
Great, thanks for the help.
Hello @FrancisMurtagh-arm, I am again facing some issues with the PyArmNN APIs. The same script was working fine with version 25.0.0, but with 27.0.0 it is giving an error. Can you help me?
Your ArmNN library instance does not support Onnx models parser functionality. Skipped IOnnxParser import. Working with ARMNN 27.0.0
tensor id: 0, tensor info: TensorInfo{DataType: 1, IsQuantized: 0, QuantizationScale: 0.000000, QuantizationOffset: 0, IsConstant: 1, NumDimensions: 4, NumElements: 150528}
Traceback (most recent call last):
File "predict_pyarmnn.py", line 35, in
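One quick sanity check on the log above: the input TensorInfo reports NumElements: 150528, which factors as 224 * 224 * 3, i.e. a 224x224 RGB input tensor (assuming an NHWC image layout, which is an assumption here). A tiny plain-Python check makes the expected input size explicit:

```python
# Sanity check (plain Python): relate the NumElements values in the logs
# to image shapes, assuming an NHWC tensor of shape [1, H, W, 3].
def num_elements(shape):
    n = 1
    for d in shape:
        n *= d
    return n

# The failing model's input: 150528 elements = 224x224 RGB.
assert num_elements([1, 224, 224, 3]) == 150528
# The working fire_detection.tflite example instead expects 128x128 RGB.
assert num_elements([1, 128, 128, 3]) == 49152
```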
Hi @krn-sharma,
Just to confirm are you using the fire_detection.tflite provided in the example?
Are you running on a Raspberry Pi with Neon backend set as preferred?
I tried on my local x86 machine with Ubuntu 20.04 and couldn't reproduce:
python3 ./predict_pyarmnn.py --image fire.jpg
Your ArmNN library instance does not support Onnx models parser functionality. Skipped IOnnxParser import.
Working with ARMNN 27.0.0
(128, 128, 3)
tensor id: 15616, tensor info: TensorInfo{DataType: 1, IsQuantized: 0, QuantizationScale: 0.000000, QuantizationOffset: 0, IsConstant: 1, NumDimensions: 4, NumElements: 49152}
Loaded network, id=0
Elapsed time is 107.8913549426943 ms
[array([[3.4735596e-08, 1.0000000e+00]], dtype=float32)]
Fire
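The final "Fire" line above follows from the two-class output [3.47e-08, 1.0]: the script takes the argmax of the flattened softmax output and maps it to a label. A plain-Python sketch (the label order is an assumption inferred from the printed result, not taken from the script):

```python
# Sketch of how the "Fire" line is likely produced: argmax over the
# two-class softmax output, mapped to a label. The label order below is
# an assumption based on the printed result.
probs = [3.4735596e-08, 1.0]   # flattened model output from the log above
labels = ["Non-Fire", "Fire"]  # assumed class order
prediction = labels[max(range(len(probs)), key=probs.__getitem__)]
print(prediction)  # Fire
```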
Could you try applying the diff to predict_pyarmnn.py and see whether enabling the inferring of shapes helps? diff.txt
Regards, Francis.
@FrancisMurtagh-arm Thanks for your reply.
fire_detection.tflite is working fine. I am running an ImageNet-trained TFLite model.
I am running it on an Odroid XU4 with CpuAcc.
I have created tflite model using following code.
I am getting a new error after enabling the inferring of shapes.
Hi @krn-sharma,
I created the above model with the code provided; this is just an image from netron.app, as the model is too large to attach here. vgg16.tflite.png
I then altered the script to enable inferring shape as mentioned, but also changed the OpenCV resizing to 224x224 to suit the model. diff2.txt
Then called the script on this image: fire.jpg
python3 ./predict_pyarmnn.py --image fire.jpg > output.txt
This is the resulting output: output.txt
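The diff2.txt itself is not reproduced in this thread; the relevant change is resizing the input frame to 224x224 to match the VGG16 input (with OpenCV that would typically be a cv2.resize call). A minimal nearest-neighbour resize in plain Python makes the shape change concrete (this is a sketch of the idea, not the code from the diff):

```python
# Minimal nearest-neighbour resize in plain Python (sketch only; the
# actual script uses OpenCV). Maps each output pixel back to the nearest
# source pixel.
def resize_nearest(img, out_h, out_w):
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

small = [[0] * 128 for _ in range(128)]  # stand-in for a 128x128 frame
big = resize_nearest(small, 224, 224)
print(len(big), len(big[0]))  # 224 224
```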
Could you try with the above to see if there is any difference?
Thanks, Francis.
Thank you for your solution, it worked. Now I am facing a different issue with this model.
Your ArmNN library instance does not support Onnx models parser functionality. Skipped IOnnxParser import. Working with ARMNN 27.0.0
tensor id: 0, tensor info: TensorInfo{DataType: 1, IsQuantized: 0, QuantizationScale: 0.000000, QuantizationOffset: 0, IsConstant: 1, NumDimensions: 4, NumElements: 307200}
Traceback (most recent call last):
File "predict_pyarmnn.py", line 92, in
Hi @krn-sharma
I only recently noticed the problem you were having in your last comment here: there was a bug when converting constants with Per-Axis quantization, as used in that model. You've probably moved on, but if not, can you check whether the fix works for you?
https://review.mlplatform.org/c/ml/armnn/+/8824
If you have any issues please let me know.
Best regards, Mike.
Closing due to lack of activity, please reopen if the issue persists.
Regards, Francis.
I need the TFLite parser.
I have installed PyArmNN using this guide.
It's an Odroid board, Ubuntu version 20.04. CPU info: