mit-han-lab / tinyengine

[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256KB Memory
https://mcunet.mit.edu
MIT License

Questions on VWW Inference Tutorial in STM Cube IDE Project #87

Closed: gunjupark closed this 1 year ago

gunjupark commented 1 year ago

Hi, I have a few questions about performing inference on an MCU in the real world (TinyEngine's inference tutorial using the STM Cube IDE project).

First, I found the input data processing code in the STM Cube project's main.cpp. From what I understand, the code passes a signed char 'input' buffer directly to the model.

I have a question regarding the models provided in the model_zoo. It seems that the model zoo's Torch and TFLite models are trained with Torch's ToTensor() and Normalize() preprocessing. However, I couldn't find such preprocessing steps in the tutorial code (the STM Cube project sources), so I'm curious whether the models used in the tutorial (codegen) were trained with a different approach.
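For reference, the preprocessing I mean looks roughly like this (a sketch of a standard torchvision pipeline; the input resolution and mean/std values here are only illustrative, not necessarily what the model zoo actually used):

```python
import torchvision.transforms as T

# Typical float training-time preprocessing: ToTensor() scales uint8
# pixels [0, 255] to float [0, 1], Normalize() then shifts/scales per
# channel. Resolution and statistics below are assumptions for the sketch.
train_transform = T.Compose([
    T.Resize((80, 80)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225]),
])
```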

Secondly, I found the Torch-to-TFLite conversion method in this repository's issues (https://github.com/mit-han-lab/tinyengine/issues/6), so I attempted to convert the Torch model to a TFLite model using Alibaba's TinyNN tool.

However, I noticed some differences in TFLite operators between the model I converted and the one in your model_zoo. Do you use a different approach for converting from Torch to TFLite?
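For reference, this is roughly how I invoked the converter (a sketch based on the TinyNeuralNetwork examples; the placeholder network, input resolution, and output path are mine, not taken from the tutorial):

```python
import torch
import torch.nn as nn
from tinynn.converter import TFLiteConverter

# Placeholder network standing in for the actual VWW torch model.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

# Input resolution is an assumption; use the real model's resolution.
dummy_input = torch.rand(1, 3, 80, 80)

converter = TFLiteConverter(model, dummy_input, tflite_path='vww_converted.tflite')
converter.convert()
```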

Thank you.

gunjupark commented 1 year ago

My first question has been resolved: I confirmed that normalization is fused into the quantization step. See https://github.com/alibaba/TinyNeuralNetwork/blob/main/docs/FAQ.md
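Concretely, the fusion works because normalization and quantization are both affine transforms, so the mean/std can be folded into the input tensor's scale and zero point. A small numeric sketch with made-up values (not the real model's parameters):

```python
import numpy as np

# Illustrative values only.
mean, std = 0.5, 0.25          # per-channel normalization constants
s, z = 0.05, -3                # input quantization scale / zero point

x = np.arange(0, 256, 17, dtype=np.float64)   # raw uint8 pixel values

# Reference path: normalize in float, then quantize (q = round(x_norm / s) + z).
q_ref = np.round((x / 255.0 - mean) / std / s) + z

# Fused path: fold mean/std into a new scale and zero point, so raw
# pixels map straight to int8 with no float preprocessing at runtime.
s_fused = 255.0 * std * s
z_fused = z - mean / (std * s)
q_fused = np.round(x / s_fused) + z_fused

print(np.allclose(q_ref, q_fused))   # True with these illustrative values
```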

However, I still don't understand how to cast uint8 to int8 in Torch's data loader transform step. How can I train with signed RGB values while using Torch's normalization? Thank you.
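In case it helps anyone hitting the same question, my current working assumption (not confirmed by the maintainers) is that training stays in float with ToTensor()/Normalize(), and the signed int8 input only appears after quantization; on the MCU the raw uint8 pixels are then shifted into the int8 range, roughly like this:

```python
import numpy as np

def uint8_to_int8_input(pixels_u8: np.ndarray) -> np.ndarray:
    """Shift raw uint8 pixels [0, 255] into the int8 domain [-128, 127].

    Assumes the fused input quantization ends up with a zero point of
    -128 (common for image inputs); check the converted tflite model's
    input tensor quantization parameters to confirm.
    """
    return (pixels_u8.astype(np.int16) - 128).astype(np.int8)

# Example: a dummy 80x80 RGB frame, as an MCU camera driver might provide.
frame = np.random.randint(0, 256, size=(80, 80, 3), dtype=np.uint8)
signed_input = uint8_to_int8_input(frame)
```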