Open finnfi opened 2 years ago
I've experimented a little, and I got it to work for this tflite nn: test_q.zip. Where is the difference?
I fixed it by doing the quantization with nntool rather than in Python with the tf package.
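For anyone hitting the same error: the nntool route means quantizing from nntool's interactive shell instead of from the TF Python converter. A minimal sketch of such a session, assuming the `open`, `adjust`, `fusions`, `aquant`, and `save_state` commands as they appear in GAP SDK examples (exact names and options may differ between SDK versions; the file paths are placeholders):

```
# Hypothetical nntool session (GAP SDK) -- paths are placeholders
open depth_estimation.tflite         # load the float tflite graph
adjust                               # adjust tensor order for the AutoTiler
fusions --scale8                     # fuse operators for the SQ8 kernels
aquant calibration_images/*.png      # post-training quantize using sample inputs
save_state depth_estimation_q.json   # save quantized state for the model build
```

The key difference from quantizing in Python with the tf package is that nntool produces per-tensor scales in the layout the SQ8 code generator expects.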
I then realized that the network input is way too big for the L2 memory, so it has to be redesigned.
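A quick sanity check for this: the GAP8 has 512 KB of shared L2 RAM, so you can compare the byte size of the input tensor against that budget before going further. A minimal sketch in Python, using a hypothetical input shape for illustration (not the actual shape of this model):

```python
# GAP8 has 512 KB of shared L2 RAM
GAP8_L2_BYTES = 512 * 1024

def tensor_bytes(shape, bytes_per_element=1):
    """Size in bytes of a tensor with the given shape (int8 -> 1 byte/element)."""
    n = 1
    for d in shape:
        n *= d
    return n * bytes_per_element

# Hypothetical full-resolution RGB input, for illustration only:
input_shape = (1, 480, 640, 3)
size = tensor_bytes(input_shape)
print(size, "bytes; fits in L2:", size <= GAP8_L2_BYTES)  # 921600 bytes, does not fit
```

Note that activations and weights of the early layers need L2 space too, so in practice the input has to be well under the 512 KB limit, not just below it.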
See #339
Hi! I am currently working on the bitcraze AIDeck together with the crazyflie.
I've got a tflite nn that I want to execute on the gap8, but I get this error when compiling:
Error: For coefficient file ./BUILD_MODEL_SQ8BIT/tensors/S183_Mul_scale.tensor expecting 64 Byte items, got 1
This seems similar to this issue, but I did not understand the fix: #189
The console output is attached here: debug.txt The (prequantized) tflite model is attached here: depth_estimation_q.zip
GAP SDK version: 4.8.0. (Compatible with aideck)
I am very new to TensorFlow, and another person designed the network and converted it to a tflite model. I need to get it running on the GAP8. Is this a problem I can fix with nntool, or must the network be redesigned?
Any help is appreciated!
Finn