Hello @cannnnnnnnnnnn, the model I embedded has been present in the tflite-micro repo since the beginning of development (it was previously part of the TensorFlow repo).
How did you quantise the model? The script above only covers the steps up to generating the model in float32 format, whereas the example assumes an int8-quantised model: its input and output tensors are int8 and are converted back to float.
If you didn't convert the float32 model to int8, consider providing the input and interpreting the output as float32 rather than int8. You may find the script here. Hope this helps.
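For reference, full int8 post-training quantisation is done through `tf.lite.TFLiteConverter` with a representative dataset and int8 inference types. Below is a minimal, hedged sketch assuming a small Keras sine-approximation model like the one in hello_world; the model architecture, training setup, and filenames here are illustrative, not the exact script from the repo.

```python
import numpy as np
import tensorflow as tf

# Illustrative stand-in for the hello_world sine model (not the repo's exact script).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Train briefly on x in [0, 2*pi] -> sin(x).
x_train = np.random.uniform(0, 2 * np.pi, (1000, 1)).astype(np.float32)
model.fit(x_train, np.sin(x_train), epochs=1, verbose=0)

def representative_dataset():
    # Sample inputs covering the expected input range, used to calibrate
    # the quantisation parameters (scale / zero point).
    for i in range(100):
        yield [x_train[i:i + 1]]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force all ops, plus the model's input and output tensors, to int8.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

Without `inference_input_type`/`inference_output_type` set to `tf.int8`, the converter keeps float32 input/output tensors even when the weights are quantised, which is exactly the mismatch that makes the int8-expecting example misbehave.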
Update: after running the above script to convert the model to int8 and then converting it to .cc format, I get a model which, when used to replace the existing one, gives correct results as expected. Attached for your reference: model_data.cc.txt
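The "convert to .cc format" step is typically just dumping the .tflite file as a C byte array. A common way to do this is with `xxd`; the filename below assumes the `model_int8.tflite` produced by the quantisation step.

```shell
# Emit the quantised model as a C array plus a length variable
# (identifiers are derived from the input filename).
xxd -i model_int8.tflite > model_data.cc
```

The generated array can then be referenced from the example's `model_data.h` in place of the original model bytes.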
I (293) heap_init: At 4008B444 len 00014BBC (82 KiB): IRAM
I (300) spi_flash: detected chip: generic
I (304) spi_flash: flash io: qio
W (308) spi_flash: Detected size(4096k) larger than the size in the binary image header(2048k). Using the size in the binary image header.
I (321) app_start: Starting scheduler on CPU0
I (326) app_start: Starting scheduler on CPU1
I (326) main_task: Started on CPU0
I (336) main_task: Calling app_main()
x_value: 0.000000, y_value: -0.008100
x_value: 0.314159, y_value: 0.315903
x_value: 0.628319, y_value: 0.607506
x_value: 0.942478, y_value: 0.769507
x_value: 1.256637, y_value: 0.947709
x_value: 1.570796, y_value: 0.980109
x_value: 1.884956, y_value: 0.923409
x_value: 2.199115, y_value: 0.818108
x_value: 2.513274, y_value: 0.542705
x_value: 2.827433, y_value: 0.275403
x_value: 3.141593, y_value: 0.008100
x_value: 3.455752, y_value: -0.283503
Thank you for the explanation. Following the approach you suggested, I have successfully run the model on my device.
Compiling and running the hello_world example works fine, but as soon as I replace the model data with the model data from the official example, the output is all zeros. Could you add the model-training script to the project folder?