mit-han-lab / tinyengine

[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256KB Memory
https://mcunet.mit.edu
MIT License

No implementation of convolve_1x1_s8_oddch_fpreq() #57

Open leaf82318 opened 1 year ago

leaf82318 commented 1 year ago

Hello team.

I found that there is no implementation of the convolve_1x1_s8_oddch_fpreq() kernel.

Would you mind uploading this kernel to TinyEngine?

leaf82318 commented 1 year ago

I was trying to deploy a customized model by myself. When I ran the code-generation .py file, it generated code that calls the convolve_1x1_s8_oddch_fpreq() kernel, but I cannot find that function within TinyEngine. Is the convolve_1x1_s8_oddch_fpreq() kernel ready?

csjimmylu commented 1 year ago

Hi @leaf82318 ,

Were you able to run the code-generation .py file with your own customized model for on-device training?

If yes, did the 3 files you used to compile your customized model come from:

  1. the graph.json file under the .model/testproj folder, after you ran compilation/ir2json.py from MIT's tiny-training repo?
  2. the params.pkl file under the .model/testproj folder, after you ran compilation/ir2json.py from MIT's tiny-training repo?
  3. the scale.json file under the _ir_zoos/proxylessquantize folder, after you ran _compilation/mcu_irgen.py from MIT's tiny-training repo?

Thanks!

leaf82318 commented 1 year ago

> Hi @leaf82318 ,
>
> Were you able to run the code-generation .py file with your own customized model for on-device training?
>
> If yes, did the 3 files you used to compile your customized model come from:
>
>   1. the graph.json file under the .model/testproj folder, after you ran compilation/ir2json.py from MIT's tiny-training repo?
>   2. the params.pkl file under the .model/testproj folder, after you ran compilation/ir2json.py from MIT's tiny-training repo?
>   3. the scale.json file under the _ir_zoos/proxylessquantize folder, after you ran _compilation/mcu_irgen.py from MIT's tiny-training repo?
>
> Thanks!

Thanks for your reply.

Recently, I checked out MIT's tinyengine repo and ran vww.py under the examples folder with my own customized model. I have not tried MIT's tiny-training repo yet and know little about it. I'll give it a try.

meenchen commented 1 year ago

> I was trying to deploy a customized model by myself. When I ran the code-generation .py file, it generated code that calls the convolve_1x1_s8_oddch_fpreq() kernel, but I cannot find that function within TinyEngine. Is the convolve_1x1_s8_oddch_fpreq() kernel ready?

Hi @leaf82318,

Thanks for reaching out. The convolve_1x1_s8_oddch_fpreq kernel is not implemented yet, but we will add support for it soon.
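
Until the optimized kernel lands, a plain C reference of what a 1x1 int8 convolution with floating-point requantization computes may be a useful stand-in for testing. The sketch below is only an assumption about the operator's semantics based on its name (the function name, argument order, and requantization formula are all hypothetical, not TinyEngine's actual API):

```c
/*
 * Hypothetical reference sketch, NOT TinyEngine's actual kernel: a plain
 * (unoptimized) 1x1 int8 convolution with per-output-channel floating-point
 * requantization, which is what convolve_1x1_s8_oddch_fpreq presumably
 * performs. All names and the parameter layout here are assumptions.
 */
#include <stdint.h>

/* Scale a 32-bit accumulator back to int8 with a float multiplier. */
static int8_t requantize_fp(int32_t acc, float scale, int32_t out_zp)
{
    float scaled = (float)acc * scale;
    /* Round to nearest, add the output zero point, then saturate to int8. */
    int32_t v = (int32_t)(scaled + (scaled >= 0.0f ? 0.5f : -0.5f)) + out_zp;
    if (v > 127) v = 127;
    if (v < -128) v = -128;
    return (int8_t)v;
}

/*
 * A 1x1 convolution is just a per-pixel matrix multiply over channels:
 *   out[p][oc] = requant(sum_ic (in[p][ic] + in_offset) * w[oc][ic] + bias[oc])
 * An "odd channel" variant would simply drop the even-channel-count
 * assumption that a SIMD-unrolled kernel typically relies on.
 */
void convolve_1x1_s8_fpreq_ref(const int8_t *input, int num_pixels,
                               int in_ch, int out_ch,
                               const int8_t *weights,  /* [out_ch][in_ch] */
                               const int32_t *bias,    /* [out_ch], may be NULL */
                               const float *scales,    /* per-channel multipliers */
                               int32_t in_offset,      /* -input_zero_point */
                               int32_t out_zp,
                               int8_t *output)         /* [num_pixels][out_ch] */
{
    for (int p = 0; p < num_pixels; ++p) {
        for (int oc = 0; oc < out_ch; ++oc) {
            int32_t acc = bias ? bias[oc] : 0;
            for (int ic = 0; ic < in_ch; ++ic)
                acc += ((int32_t)input[p * in_ch + ic] + in_offset)
                     * (int32_t)weights[oc * in_ch + ic];
            output[p * out_ch + oc] = requantize_fp(acc, scales[oc], out_zp);
        }
    }
}
```

A scalar fallback like this is slow on an MCU but channel-count agnostic, so it can unblock code generation for odd-channel models while the optimized kernel is pending, assuming the generated call site is adapted to match.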