mit-han-lab / tinyengine

[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256KB Memory
https://mcunet.mit.edu
MIT License

Platform-independent operation #61

Open lzlwakeup opened 1 year ago

lzlwakeup commented 1 year ago

Hi team, some questions are bothering me. When I use code generation, ARM-dependent code is generated automatically. For example, "depthwise_kernel3x3_stride1_inplace_CHW_fpreq.c" contains `#include "arm_nnsupportfunctions.h" //TODO: remove this in the future for self-contained`. This header belongs to the NN component of CMSIS, so the generated code carries these ARM dependencies. If I want to test a demo on Windows or Linux, this is a problem, because I first have to set up a simulation environment. I understand that using third-party libraries speeds up the computation, but I want to keep things simple. Is there an implementation that is platform-independent, or at least does not require third-party libraries?
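
For concreteness, below is a minimal plain-C sketch of what I mean by a platform-independent kernel, loosely following the naming of the generated depthwise 3x3, stride-1, CHW, fpreq kernel. The function name, signature, and layout assumptions are mine for illustration and do not correspond to TinyEngine's actual generated code; it only assumes an int8, per-channel-quantized depthwise convolution with floating-point requantization and no CMSIS-NN headers.

```c
/*
 * Hypothetical plain-C fallback for a 3x3, stride-1, depthwise convolution in
 * CHW layout with per-channel floating-point requantization ("fpreq").
 * Illustrative only; the name and signature are not TinyEngine's API.
 */
#include <stdint.h>

static inline int8_t clamp_to_int8(int32_t v, int32_t act_min, int32_t act_max)
{
    if (v < act_min) v = act_min;
    if (v > act_max) v = act_max;
    return (int8_t)v;
}

void depthwise_3x3_stride1_CHW_fpreq_ref(
    const int8_t *input,    /* [channels][in_h][in_w], already zero-padded by 1 */
    int8_t *output,         /* [channels][out_h][out_w] */
    const int8_t *weights,  /* [channels][3][3] */
    const int32_t *bias,    /* per-channel bias */
    const float *scales,    /* per-channel requantization scale */
    int32_t in_offset, int32_t out_offset,
    int32_t act_min, int32_t act_max,
    int channels, int in_h, int in_w)
{
    const int out_h = in_h - 2;   /* "valid" output size over the padded input */
    const int out_w = in_w - 2;

    for (int c = 0; c < channels; c++) {
        const int8_t *in_c = input + c * in_h * in_w;
        const int8_t *w_c  = weights + c * 9;
        int8_t *out_c      = output + c * out_h * out_w;

        for (int oy = 0; oy < out_h; oy++) {
            for (int ox = 0; ox < out_w; ox++) {
                int32_t acc = bias[c];
                for (int ky = 0; ky < 3; ky++) {
                    for (int kx = 0; kx < 3; kx++) {
                        int32_t in_val = in_c[(oy + ky) * in_w + (ox + kx)] + in_offset;
                        acc += in_val * w_c[ky * 3 + kx];
                    }
                }
                /* floating-point requantization, then saturate to int8 */
                int32_t q = (int32_t)((float)acc * scales[c]) + out_offset;
                out_c[oy * out_w + ox] = clamp_to_int8(q, act_min, act_max);
            }
        }
    }
}
```

Something like this compiles with any standard C compiler, so it could run on Windows or Linux without CMSIS, at the cost of the SIMD speedups the ARM kernels provide.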

I am looking forward to your reply.

meenchen commented 1 year ago

Hi @lzlwakeup,

TinyEngine originally targets the ARMv7E-M ISA, but we are aware of how important a platform-independent implementation is for potentially supporting other ISAs. This is already on our long-term roadmap, so please stay tuned!