michalber opened this issue 2 years ago
Did you set the correct output exponent of each layer? The proper output exponent was given by the toolkit, printed in the terminal. Can you print a comparison between output and expectation?
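For example, the comparison could be printed on the host with a short script like the one below (a sketch only; it assumes ESP-DL's fixed-point convention float ≈ q * 2^exponent, and the raw values, expectations, and exponent are placeholders):

```python
# Sketch: compare dequantized layer output against the float expectation.
# Assumes the fixed-point convention: float_value ≈ q * 2**exponent.

def dequantize(q_values, exponent):
    """Convert raw integer outputs back to floats."""
    scale = 2.0 ** exponent
    return [q * scale for q in q_values]

def print_comparison(q_values, expected, exponent):
    """Print output vs. expectation side by side; return the max abs error."""
    got = dequantize(q_values, exponent)
    max_err = 0.0
    for g, e in zip(got, expected):
        err = abs(g - e)
        max_err = max(max_err, err)
        print(f"output={g:+.6f}  expected={e:+.6f}  abs_err={err:.6f}")
    print(f"max abs error: {max_err:.6f}")
    return max_err

# Placeholder data: raw int16 outputs and expectations from test_data.json.
raw = [16384, -8192, 4096]
expected = [0.5, -0.25, 0.125]
print_comparison(raw, expected, exponent=-15)
```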
I will answer for @michalber (we are on the same team): we did not set the exponents manually. The calibrator from the toolkit generated a .cpp file with output_exponent filled in for each layer, so we assumed those were the correct values.
I have now repeated the quantization procedure, and indeed there are some prints in the console, and the output exponents there differ from the ones in the .cpp file.
It is nice that the .cpp and .hpp files are generated automatically, but there seems to be a bug that writes the wrong output_exponent into them.
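One way to spot such mismatches quickly is to parse the exponents out of the generated file and diff them against the values printed in the terminal (a sketch; the .cpp excerpt, its comment format, and the console values below are made up for illustration, only the field name output_exponent comes from the generated file):

```python
import re

# Hypothetical excerpt from a calibrator-generated .cpp file.
generated_cpp = """
Conv2D<int16_t> l1(..., /*output_exponent=*/ -14, ...);
Conv2D<int16_t> l2(..., /*output_exponent=*/ -13, ...);
"""

# Exponents as written into the generated file.
file_exponents = [int(m) for m in re.findall(r"output_exponent=\*/\s*(-?\d+)", generated_cpp)]

# Exponents as printed to the console by the calibrator (placeholder values).
console_exponents = [-15, -13]

# Report every layer where the two sources disagree.
mismatches = [
    (i, f, c)
    for i, (f, c) in enumerate(zip(file_exponents, console_exponents))
    if f != c
]
for i, f, c in mismatches:
    print(f"layer {i}: .cpp file has {f}, console printed {c}")
```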
Hi all!
I have trouble converting a TensorFlow Lite model to an ESP-DL model. To check whether the conversion is successful, I created a simple 2-layer model with known weights and biases. The code to create this model is in the create_model.py file. After running it I have 2 new files: model.tflite and test_data.json (which contains a list of input and expected output values). Next I convert the TFLite model to an ONNX one. After this I use the convert_model_to_espdl.py file to convert the ONNX model to hpp/cpp ESP-DL files. Finally I test the ESP-DL model on an ESP32 with the test data from the JSON file, and I can't get the proper output values. Am I doing something wrong in the TFLite -> ONNX conversion or in ONNX -> ESP-DL? I attached the TFLite model, the ONNX model and the Python scripts used to create all files: esp_dl_test.zip
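For a 2-layer model with known weights, the fixed-point arithmetic can also be reproduced on the host: the layer's accumulator is rescaled to the declared output exponent, so a wrong exponent skews every output by a power of two. The following is my own host-side simulation under the float ≈ q * 2^exponent convention, not ESP-DL code, with made-up weights and exponents:

```python
def quantize(x, e):
    """Round a float to fixed point with exponent e (float ≈ q * 2**e)."""
    return round(x / 2.0 ** e)

def dense_fixed_point(x_q, w_q, b_q, in_e, w_e, out_e):
    """One dense neuron: accumulate at exponent in_e + w_e, add a bias
    (assumed pre-shifted to that exponent), then rescale to out_e."""
    acc_e = in_e + w_e
    acc = sum(xq * wq for xq, wq in zip(x_q, w_q)) + b_q
    shift = acc_e - out_e          # rescale by 2**(acc_e - out_e)
    return round(acc * 2.0 ** shift)

# Float reference: y = 0.5*1.0 + 0.25*2.0 = 1.0 (zero bias)
x_q = [quantize(0.5, -7), quantize(0.25, -7)]   # input exponent -7
w_q = [quantize(1.0, -7), quantize(2.0, -7)]    # weight exponent -7
y_q = dense_fixed_point(x_q, w_q, 0, -7, -7, out_e=-7)
print(y_q * 2.0 ** -7)   # correct exponent: recovers 1.0

y_wrong = dense_fixed_point(x_q, w_q, 0, -7, -7, out_e=-6)
print(y_wrong * 2.0 ** -7)  # exponent off by one: result off by a factor of two
```

Comparing such a simulation against both the float model and the on-device output narrows down whether the error comes from the conversion or from the exponents.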
Below is the test code from the ESP32. I tried setting the in_tensor exponent value and changing the in_3 data to represent float data with different exponents, but nothing helped. Thanks for all help and replies!
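For reference, the way I understand the input quantization: the float inputs have to be converted with the same exponent the model declares for in_tensor. A sketch under the float ≈ q * 2^exponent convention (the exponent and sample values are placeholders):

```python
def quantize_input(values, exponent, bits=16):
    """Convert float inputs to fixed point: q = round(v / 2**exponent),
    saturated to the signed integer range of the given bit width."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    scale = 2.0 ** exponent
    return [max(lo, min(hi, round(v / scale))) for v in values]

# Placeholder: float inputs from test_data.json, input exponent -15.
in_3 = [0.5, -0.25, 1.5]
q = quantize_input(in_3, exponent=-15)
print(q)  # [16384, -8192, 32767] -- 1.5 saturates at the int16 maximum
```

Note that any float outside the representable range saturates, so inputs larger than (2^15 - 1) * 2^exponent silently clip, which can also produce wrong outputs.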