Criminal-9527 opened 1 month ago
@Criminal-9527 tflite-micro models are expected to use quantised weights (the inputs/outputs and filter values); only the scales are supposed to be float. To make sure your model works with the tflite-micro framework, you need to quantise it to int8. This is a simple process and can be done with a small piece of code after you train the model. An example can be found here: https://ai.google.dev/edge/litert/models/post_training_integer_quant#convert_using_integer-only_quantization
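Roughly, the conversion in the linked guide looks like this (a minimal sketch; the saved-model path, the (1, 224, 224, 3) input shape, and the random calibration data are placeholders you would swap for your own trained model and real samples):

```python
import numpy as np
import tensorflow as tf

# Placeholder path -- point this at your own trained SavedModel.
saved_model_dir = "my_saved_model"

def representative_dataset():
    # Yield ~100 samples shaped like real inference inputs so the
    # converter can calibrate the activation ranges. Real training or
    # validation data should be used here; random data only keeps this
    # sketch self-contained.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Enforce integer-only quantization: conversion fails if any op has no
# int8 kernel, instead of silently falling back to float.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```

With TFLITE_BUILTINS_INT8 set, the resulting model carries int8 tensors end to end, which is what the tflite-micro kernels expect.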
Issue or Suggestion Description
We get the following error when running detection with the model: input->type == kTfLiteInt8 || Int16 || Uint8 was not true. Node DEQUANTIZE (number 0f) failed to prepare with status 1. We are using IDF v5.2 with an ESP32-S3-WROOM board, and the model is MediaPipe's hand_landmark_lite.tflite. After inspecting its structure in Netron, I added every op the model requires to the resolver. The input to this DEQUANTIZE op is not something I can decide when I manually feed the model its input, so why does this error occur? Could there be a difference between this op in esp-tflite-micro and the model's op of the same name? The full code is rather long, since we are doing real-time gesture recognition and there is a lot of camera code, so below is only the model-related code. hand_landmark_lite.tflite is confirmed to run on a PC.
Thanks!
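As a cross-check of the diagnosis in the reply above, one quick way to see what the DEQUANTIZE node actually receives is to list every tensor's dtype and quantization parameters with the desktop interpreter (a minimal sketch; assumes the .tflite file is on the local path):

```python
import tensorflow as tf

# Assumption: hand_landmark_lite.tflite sits in the working directory.
interpreter = tf.lite.Interpreter(model_path="hand_landmark_lite.tflite")
interpreter.allocate_tensors()

# Print each tensor's dtype and (scale, zero_point). float16/float32
# tensors feeding DEQUANTIZE would explain the prepare failure: per the
# error message, tflite-micro's DEQUANTIZE only accepts int8, int16, or
# uint8 inputs.
for t in interpreter.get_tensor_details():
    print(t["index"], t["name"], t["dtype"], t["quantization"])
```

If the dtypes come back as float, the model is not integer-quantised and needs the int8 conversion described above before it can run on esp-tflite-micro.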