Closed — aymanchaudhry closed this issue 2 years ago
Hey aymanchaudhry,
This is due to a quirk in how the Dequantize layers in the model store their data: each Dequantize layer contains its own constant data to be dequantized. Our 22.02 release is not equipped to handle this, as we don't support constant tensors as inputs to Conv2d. We do have a fix for it, which will be in our upcoming 22.05 release and should resolve the error you are seeing above. 22.05 is expected to be released on the 27th of May.
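To illustrate what a Dequantize layer does with the constant data it holds, here is a minimal Python sketch of per-tensor dequantization as TFLite defines it: `real = (quantized - zero_point) * scale`. The values below are made-up illustration data, not taken from the Blaze Pose model.

```python
def dequantize(quantized, scale, zero_point):
    """Convert quantized integer values back to floats:
    real = (q - zero_point) * scale."""
    return [(q - zero_point) * scale for q in quantized]

# Hypothetical int8 weight values with an illustrative scale/zero-point.
weights_q = [-128, 0, 127]
weights_f = dequantize(weights_q, scale=0.02, zero_point=0)

# In graphs like this one, the resulting float tensor then feeds Conv2d
# as a constant input, which is the pattern Arm NN 22.02 rejects.
```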
Regards, David
Hi David,
Thanks for letting me know!
Kind Regards, Ayman
Hey Ayman,
Can you let me know whether the 22.05 release fixed your issue? Otherwise I'll close this off as fixed.
Kind regards, David
Hi David,
I have not had a chance to upgrade to the latest release (v22.05) yet.
Please feel free to close this issue and I'll re-open if I face any issues when I get a chance to upgrade and test.
Thanks for the help!
Kind Regards, Ayman
Hi, I'm currently using Arm NN v22.02 alongside TensorFlow Lite v2.5.3. I've tried to run ExecuteNetwork on a float32 Blaze Pose model; however, I receive the following error:
Using Netron, it looks as if the model uses the weights and biases as inputs to the Conv2d layer.

![blaze-pose-lite](https://user-images.githubusercontent.com/78209398/170076025-f0259113-7134-4a02-bef2-8ca91fb88a22.png)
I have seen a similar issue reported, which was closed after being marked as a feature request - #506.
Has this feature been implemented, or will it be included in an upcoming Arm NN release?
Model sourced from: https://github.com/google/mediapipe/blob/master/mediapipe/modules/pose_landmark/pose_landmark_lite.tflite
Kind Regards, Ayman