mit-han-lab / tinyengine

[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256KB Memory
https://mcunet.mit.edu
MIT License

Conversion of FC Layers and Conv Layers #12

Closed nnizh131 closed 1 year ago

nnizh131 commented 1 year ago

Thanks for the great work. Unfortunately, as of now the library only works for the mcunet models; support for custom models is not completely implemented. For example, when converting a model with fully connected layers, the code generates a Conv operation: _convert_FULLY_CONNECTED in TfliteConvertor.py returns the wrong operation,

    def _convert_FULLY_CONNECTED(self, op):
        .......
        op = conv2d.Conv2d(params)
        return op

Also, codegen only generates depthwise convolution header files with floating-point quantization, so the genModel.c file might contain an operation that is not yet defined; for instance, the convolve_x_y_z_fpreq.h file is not generated. To resolve these issues, I think fc.py needs to be implemented inside the operators folder, and code templates need to be added for the convolution as well as the fully connected layers. Are my observations correct? Are you planning to implement the missing files? If I decide to implement it myself, where should I start?

meenchen commented 1 year ago

Hi @nnizh131, we replace fully connected layers with pointwise convolution during code generation, since the two are interchangeable (see the sketch below). That said, it is true that TinyEngine supports a limited set of operators due to our limited bandwidth. To implement a new operator, you will need to:

1. implement the .c kernel in TinyEngine/src/kernels,
2. add the target operator inside code_generator/operators,
3. update code_generator/TfliteConvertor.py to parse the operator's hyperparameters and weights,
4. update _parseTrainable inside code_generator/CodeGenerator.py for the weights, if any.

You can take conv2d as an example of the process.
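
To make the "interchangeable" point concrete, here is a minimal NumPy sketch (not TinyEngine code) showing that a fully connected layer applied to a vector gives the same result as a 1x1 pointwise convolution applied to a 1x1 feature map carrying the same weights:

```python
# Minimal NumPy sketch (not TinyEngine code): a fully connected layer on a
# vector is equivalent to a pointwise (1x1) convolution on a 1x1 feature map.
import numpy as np

c_in, c_out = 8, 4
x = np.random.randn(c_in)            # FC input vector
w = np.random.randn(c_out, c_in)     # FC weight matrix
b = np.random.randn(c_out)           # bias

# Fully connected: y = W @ x + b
y_fc = w @ x + b

# Pointwise conv: x becomes a 1x1 feature map with c_in channels,
# w becomes c_out filters of shape (c_in, 1, 1); only channels are reduced.
x_map = x.reshape(c_in, 1, 1)
w_conv = w.reshape(c_out, c_in, 1, 1)
y_conv = np.einsum('oikl,ikl->o', w_conv, x_map) + b

assert np.allclose(y_fc, y_conv)     # identical up to float rounding
```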
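As a rough orientation for step 2 of the list above, a hypothetical skeleton of what an fc.py under code_generator/operators could look like; the base class, constructor arguments, and code-emission hook shown here are assumptions, not the repo's actual interface, so conv2d.py in the repo remains the authoritative reference:

```python
# Hypothetical sketch only -- the real operator interface in
# code_generator/operators may differ; mirror conv2d.py from the repo.
class FC:
    def __init__(self, params):
        # params would be filled in by _convert_FULLY_CONNECTED in
        # TfliteConvertor.py: input/output dimensions, quantization
        # parameters, and the weight/bias tensors from the TFLite model.
        self.params = params

    def generate_code(self):
        # Emit the call to the matching C kernel from
        # TinyEngine/src/kernels (step 1) into the generated genModel.c.
        raise NotImplementedError
```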

meenchen commented 1 year ago

Closing due to inactivity. Feel free to reopen.