Open ImpulseHu opened 2 weeks ago
Hello @ImpulseHu, tflite-micro is compatible with v1.0 ops as well as 2.x. It is, however, focused on int8-optimised kernels, so you should quantise the model using integer-only quantization and register only the OPs it needs. You would need to make some changes to your conversion code to quantise the model. You may find the relevant code here: https://ai.google.dev/edge/litert/models/post_training_integer_quant#convert_using_integer-only_quantization
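A minimal sketch of that integer-only conversion, assuming TensorFlow is installed; the tiny Dense model and the random representative dataset are placeholders for your real LSTM model and input data:

```python
import numpy as np
import tensorflow as tf

# Stand-in model (hypothetical; substitute your own Keras model).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2),
])

def representative_dataset():
    # Yield samples drawn from your real input distribution so the
    # converter can calibrate the quantization ranges.
    for _ in range(100):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force integer-only kernels; conversion fails if an op has no int8 version.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `model_int8.tflite` contains only int8 tensors at the graph boundary, which is what the tflite-micro int8 kernels expect.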
Once you have quantised the model and embedded it in the program, you need to register the OPs used by the model. To find these OPs, you can inspect the model with the Netron visualiser or run a script that lists them for you.
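One way to script that listing is with `tf.lite.experimental.Analyzer`, which prints one line per operator in the graph. A sketch, assuming TensorFlow is installed; the tiny Dense model here is a hypothetical stand-in for your own `.tflite` file:

```python
import tensorflow as tf

# Stand-in model; in practice pass model_path="your_model.tflite" instead.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Prints the operators in the model, e.g. "Op#0 FULLY_CONNECTED(...)",
# which tells you exactly which Add...() calls the resolver needs.
tf.lite.experimental.Analyzer.analyze(model_content=tflite_model)
```

Each operator name in the report maps directly to one `AddXxx()` registration on the `MicroMutableOpResolver`.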
The model should then run without issues. If you want to do this manually, in your case you will at least need to additionally register the SHAPE OP (shown in the error), for example:
// Pull in only the operation implementations we need.
static tflite::MicroMutableOpResolver<3> resolver;  // 3 = number of OPs to be registered
if (resolver.AddUnidirectionalSequenceLSTM() != kTfLiteOk) {
  return;
}
if (resolver.AddFullyConnected() != kTfLiteOk) {
  return;
}
if (resolver.AddShape() != kTfLiteOk) {
  return;
}
Checklist
How often does this bug occur?
always
Expected behavior
As the title says, I created a project and ran it, but an error occurs.
By the way, I would like to know which version of TensorFlow was used to convert the tflite models in these examples [helloworld/micro_speech/person_detection]?
Actual behavior (suspected bug)
The following is the code snippet on the ESP side:
The following is the Python code snippet that builds the model and saves it to tflite:
Error logs or terminal output
Steps to reproduce the behavior
The current environment is as follows:
esp-idf: 5.2.0
esp-tflite-micro: 1.3.2
chip: ESP32-S3, 16 MB flash and 8 MB PSRAM
python: 3.10.0
tensorflow: 2.16.1
Project release version
1.3.2
System architecture
Intel/AMD 64-bit (modern PC, older Mac)
Operating system
Linux
Operating system version
Windows 11
Shell
ZSH
Additional context
No response