Closed — Black3rror closed this issue 7 months ago
Have a look at https://github.com/tensorflow/tflite-micro/issues/2444. In essence, compile a static TFLite Micro library and link it to your application. Some documentation: https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/cortex_m_generic/README.md https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/kernels/cmsis_nn/README.md
Thank you @mansnils for answering. I've looked at the links, but my questions remain. Mainly, my questions are:
- I want to create source files, not a static library. How can I do that? Probably the answer is `create_tflm_tree.py`. If so, why is this process incomplete? It requires adding `tensorflow/lite/array.h` and `tensorflow/lite/array.cc` manually, and the include paths also look a bit more complicated than they should be (which is fine).
- What if I want to have the default implementations of kernels (for example, when I have a specific MCU that is not among TFLM's supported MCUs — let's say Renesas RX)? What should be the values of `TARGET` and `TARGET_ARCH`?
- I want to create source files, not a static library. How can I do that? Probably the answer is `create_tflm_tree.py`. If so, why is this process incomplete? It requires adding `tensorflow/lite/array.h` and `tensorflow/lite/array.cc` manually, and the include paths also look a bit more complicated than they should be (which is fine).
`array.h` and `array.cc` are not actually needed by TFLM. The `create_tflm_tree` script invokes the TFLM make build to determine which sources are needed, and since those files aren't, they are not copied. If you are running into compilation issues after that, you are likely missing a `TF_LITE_STATIC_MEMORY` define.
- What if I want to have the default implementations of kernels (for example, when I have a specific MCU that is not among TFLM's supported MCUs — let's say Renesas RX)? What should be the values of `TARGET` and `TARGET_ARCH`?
Omit the `OPTIMIZED_KERNEL_DIR` make flag and you'll get the default implementation of the kernels.
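Concretely, the difference is one entry in the options passed to the build. A sketch, assuming the `--makefile_options` string documented for `create_tflm_tree.py`; the target values are only examples:

```shell
# Default reference kernels (portable C++; what you want for an
# unsupported MCU): no OPTIMIZED_KERNEL_DIR in the options.
MAKEFILE_OPTIONS="TARGET=cortex_m_generic TARGET_ARCH=cortex-m4"

# Optimized CMSIS-NN kernels: same options plus OPTIMIZED_KERNEL_DIR.
MAKEFILE_OPTIONS="TARGET=cortex_m_generic TARGET_ARCH=cortex-m4 OPTIMIZED_KERNEL_DIR=cmsis_nn"
```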
First, I'd like to discuss my way of using TFLM and make sure I'm following the correct path. Then, I will describe the problem that shows up when using this approach.
Goal
To use TFLM on an arbitrary platform with a specific microcontroller. For example, let's say I have an Arm Cortex-M4 microcontroller (on the NUCLEO-L4R5ZI board) that I want to program using STM32CubeIDE, PlatformIO, or any other programming platform.
How
Through some investigation, I've found tensorflow/lite/micro/tools/make to be the project's main Makefile, with the following among its important parameters:
- `TARGET`: defaults to the HOST_OS. A list of available targets can be found by following this pattern: `tensorflow/lite/micro/tools/make/targets/<TARGET>_makefile.inc`
- `TARGET_ARCH`: defaults to HOST_ARCH. It seems to be important with some targets like `cortex_m_generic` and doesn't matter with some others like `bluepill`. `tensorflow/lite/micro/tools/make/targets/cortex_m_generic_makefile.inc` lists the available architectures for `cortex_m_generic`.
- `OPTIMIZED_KERNEL_DIR`: empty by default. `cmsis_nn` is a possible value (which should probably work with all Cortex-M microcontrollers). Other values can be found by following this pattern: `tensorflow/lite/micro/kernels/<OPTIMIZED_KERNEL_DIR>`
- `BUILD_TYPE`: defaults to `default`. It can be one of {`default`, `debug`, `release`, `release_with_logs`, `no_tf_lite_static_memory`}. For more info check this.
- `TOOLCHAIN`: defaults to `gcc`. It can be one of {`gcc`, `armclang`}. A specific version of the respective compiler will be downloaded, for example into `tensorflow/lite/micro/tools/make/downloads/gcc_embedded`.
- Other parameters: `OPTIMIZED_KERNEL_DIR_PREFIX`, `CO_PROCESSOR`, `EXTERNAL_DIR`, `DOWNLOADS_DIR`, `KERNEL_OPTIMIZATION_LEVEL`, `THIRD_PARTY_KERNEL_OPTIMIZATION_LEVEL`
Still, to get a bag of C++ files that can be compiled and used in our C++ project, we need to use `tensorflow/lite/micro/tools/project_generation/create_tflm_tree.py` as described in this document. In short, this Python script uses the above-mentioned Makefile to generate a file tree for our specific hardware. We can then copy the generated folder into our C++ project (created by STM32CubeIDE, PIO, or any other platform), add TFLM-related code to our main function (a basic example can be seen in the generated hello_world example), and compile it along with the rest of our code.
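The steps above can be sketched end-to-end as follows. The `--makefile_options` and `--examples` flags are the ones documented for `create_tflm_tree.py`; the target values, output directory, and the IDE project layout (`my_stm32_project/...`) are assumptions for illustration:

```shell
# 1. Get the TFLM sources and generate a tree for the target hardware.
git clone --depth 1 https://github.com/tensorflow/tflite-micro.git
cd tflite-micro
python3 tensorflow/lite/micro/tools/project_generation/create_tflm_tree.py \
  --makefile_options="TARGET=cortex_m_generic TARGET_ARCH=cortex-m4 OPTIMIZED_KERNEL_DIR=cmsis_nn" \
  --examples=hello_world \
  /tmp/tflm-tree

# 2. Copy the generated tree into the IDE project (directory name is arbitrary),
#    then build it there together with the rest of the application sources.
cp -r /tmp/tflm-tree ../my_stm32_project/Middlewares/tflm
```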
Question 1
Am I doing it right? Is there a better and simpler way of using TFLM? Is TFLM supposed to be used this way?
Question 2
What should be the `TARGET`, `TARGET_ARCH`, etc., when I want to use this project for other hardware, like:
- ESP32: It probably belongs to the `xtensa` family, but I don't think any available `TARGET_ARCH` supports ESP32, and it also seems like some other parameters (e.g., `XTENSA_BASE`) should be defined that I don't know how to set.
- Renesas RX: It probably doesn't fit into any of the available `TARGET` categories (maybe `arc_custom`?). Generating the tflm-tree by leaving the `TARGET` parameter empty (the command was executed on Google Colab, so I guess it ended up being `riscv32_generic`) and using the resulting tflm-tree in my project, I was surprisingly able to compile it and run TFLM on my Renesas board, but I'm pretty sure this isn't the correct way. So, what should I do instead?

Question/Problem 3
The process of putting the generated TFLM library into our existing C++ project looks more problematic than it should be. For example, a number of include paths have to be added manually, with additional ones if we are using CMSIS_NN (and based on our target, we might need to include even more paths). We might also need to define `CMSIS_NN` if we are using it. Is there any solution to this hassle? At least some documentation?
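To make the hassle concrete, the extra flags can at least be collected in one place. A sketch of GCC flags for a generated tree vendored under `third_party/tflm` — the location and the exact `third_party/` subdirectories are assumptions that depend on how the tree was generated, so verify them against your own tree:

```shell
# Hypothetical flags for a project that copied a tflm-tree to third_party/tflm.
TFLM=third_party/tflm
CXXFLAGS="-DTF_LITE_STATIC_MEMORY \
  -I$TFLM \
  -I$TFLM/third_party/flatbuffers/include \
  -I$TFLM/third_party/gemmlowp \
  -I$TFLM/third_party/ruy \
  -I$TFLM/third_party/kissfft"

# Additionally, if the tree was generated with OPTIMIZED_KERNEL_DIR=cmsis_nn:
CXXFLAGS="$CXXFLAGS -DCMSIS_NN \
  -I$TFLM/third_party/cmsis_nn \
  -I$TFLM/third_party/cmsis_nn/Include"
```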