Farzinkh / AI_EDGE

Deploy deep neural network models on the ESP32 SoC.
Apache License 2.0
8 stars · 0 forks

ESP-DL ONNX Conversion for quantization tool #2

Open MichelBerg opened 5 months ago

MichelBerg commented 5 months ago

Hello Farzinkh,

I am also a student interested in ML and embedded AI, and I found your repo while browsing for ESP-DL material. I want to compare ESP-DL TVM with the quantization tool, but it is harder than I thought. At the moment I am facing some problems with the quantization tool from ESP, and I wonder if you could help me out.

In one of your colab scripts, I found the following comment: `## Second way (not working with ESP_DL optimizer)`. Could you explain to me why the second way does not work?

Maybe we could get in contact. I would be very happy to receive a reply.

Farzinkh commented 5 months ago

Hello, Michel. I would be very happy to help you out with this problem. In my last experience with ESP-DL TVM, I found it extremely buggy and unstable, so I stopped working on it.

Regarding my colab script: when I tried to export the model to ONNX format by calling the tf2onnx library directly, I got several errors caused by the model's architecture. However, using tf2onnx as a command-line tool (the first way) did the trick and avoided those errors.
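For context, the "first way" means invoking tf2onnx as a command-line tool instead of importing it as a library. A minimal sketch of how that invocation can be built from Python (the SavedModel directory, output path, and opset version here are illustrative, not taken from my actual script):

```python
# Sketch: build the tf2onnx command-line invocation (the "first way").
# "saved_model_dir", "model.onnx", and opset 13 are hypothetical placeholders.
import sys

cmd = [
    sys.executable, "-m", "tf2onnx.convert",  # run tf2onnx as a CLI tool
    "--saved-model", "saved_model_dir",       # hypothetical TF SavedModel dir
    "--output", "model.onnx",                 # hypothetical ONNX output path
    "--opset", "13",                          # hypothetical target opset
]
print(" ".join(cmd))
# With tf2onnx installed, execute it with:
#   import subprocess; subprocess.run(cmd, check=True)
```

The CLI path exercises tf2onnx's own graph loading and cleanup, which is why it can succeed where a direct library call on an awkward model architecture fails.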

Yes, we could be in contact. Email me your social media ID.