Closed oniigiirii closed 1 year ago
It was a mistake on my part. I built the static library with the -DTF_LITE_STATIC_MEMORY
flag but forgot to pass the same flag when building my own project. As a result, the wrong TfLiteTensor
struct layout was used at runtime, which caused the outputs and types to be completely wrong.
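For anyone hitting the same issue: the define has to appear in both the library build and the application build. A minimal sketch of the application side, where the compiler, include path, and file names are placeholders and will differ per project:

```shell
# Illustrative only: the exact compiler and flags depend on your toolchain.
# The tflite-micro Makefile already passes the define when building the
# static library; the key point is that -DTF_LITE_STATIC_MEMORY must also
# appear when compiling every translation unit of the application that
# includes TFLite headers, otherwise the two sides disagree on the
# TfLiteTensor struct layout.
$CXX -DTF_LITE_STATIC_MEMORY -I tflite-micro -c main.cc -o main.o
```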
Hello
I am trying to run tflite micro on a development board with an Xtensa HiFi3 DSP. I compiled tflite micro with the Xtensa optimized kernels using my Xtensa compiler and the following make command:
make -f tensorflow/lite/micro/tools/make/Makefile TARGET=xtensa TARGET_ARCH=hifi4 OPTIMIZED_KERNEL_DIR=xtensa XTENSA_TOOLS_VERSION=RG-2018.9-linux
For testing purposes I trained a very simple model that approximates sine values in the range [0, 2*pi]. Because the DSP I am using does not support floating-point operations, I quantized the model with the tflite converter so that it uses only int8 types for all computations.
Running the tflite analyzer on my model using
tf.lite.experimental.Analyzer.analyze(q_sine_model)
yields the following output, indicating that the quantized model does indeed expect int8 inputs and outputs.
I then used the unix tool xxd to produce the attached tflite micro model (as a header file) model.h
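The conversion step looks like this; the file name is illustrative (use whatever path the converter wrote the .tflite to):

```shell
# xxd -i emits the model as a C byte array plus a length variable,
# which can then be compiled into the firmware image.
xxd -i q_sine_model.tflite > model.h
```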
When I run the model on my microcontroller, however, I get results that do not match the quantized tflite model. Furthermore, inspecting
interpreter->input(0)->type
shows that it is equal to kTfLiteFloat32
and not kTfLiteInt8,
as would be expected. Can someone explain why this is happening? Thanks in advance!