tensorflow / tflite-micro

Infrastructure to enable deployment of ML models to low-power resource-constrained embedded targets (including microcontrollers and digital signal processors).
Apache License 2.0

Generated tflm-tree is not sufficient #2472

Closed Black3rror closed 7 months ago

Black3rror commented 8 months ago

First, I'd like to describe my way of using TFLM and make sure I'm following the correct path. Then I will describe the problem that shows up with this approach.

Goal

To use TFLM on an arbitrary development platform for a specific microcontroller. For example, let's say I have an ARM Cortex-M4 microcontroller (on the NUCLEO-L4R5ZI board) that I want to program using STM32CubeIDE, PlatformIO, or any other programming platform.

How

By some investigation, I've found the Makefile under tensorflow/lite/micro/tools/make to be the main Makefile of the project, with TARGET, TARGET_ARCH, and OPTIMIZED_KERNEL_DIR among its important parameters.
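For illustration, a typical invocation with these parameters might look roughly like the following sketch. The flag names and the `microlite` target come from the TFLM build documentation; the Cortex-M4/CMSIS-NN values are only an assumption for the NUCLEO-L4R5ZI and should be checked against the cortex_m_generic README.

```shell
# Sketch: build the static TFLM library for a Cortex-M4 with CMSIS-NN kernels.
# The flag values below are assumptions for this particular board.
make -f tensorflow/lite/micro/tools/make/Makefile \
  TARGET=cortex_m_generic \
  TARGET_ARCH=cortex-m4 \
  OPTIMIZED_KERNEL_DIR=cmsis_nn \
  microlite
```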

Still, to get a set of C++ source files that can be compiled and used in our C++ project, we need to use tensorflow/lite/micro/tools/project_generation/create_tflm_tree.py as described in this document. In short, this Python script uses the above-mentioned Makefile to generate a file tree for our specific hardware. We can then copy the generated folder into our C++ project (created by STM32CubeIDE, PIO, or any other platform), add TFLM-related code to our main function (a basic example can be seen in the generated hello_world example), and compile it along with the rest of our code.
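A sketch of that flow, assuming the `--makefile_options` and `-e` (examples) flags behave as described in the project generation documentation, and with placeholder paths:

```shell
# Sketch: generate a TFLM source tree for a Cortex-M4 with CMSIS-NN kernels,
# including the hello_world example. Paths and flag values are placeholders.
python3 tensorflow/lite/micro/tools/project_generation/create_tflm_tree.py \
  --makefile_options="TARGET=cortex_m_generic TARGET_ARCH=cortex-m4 OPTIMIZED_KERNEL_DIR=cmsis_nn" \
  -e hello_world \
  /tmp/tflm-tree

# Copy the generated tree into the IDE project (hypothetical destination).
cp -r /tmp/tflm-tree my_stm32_project/tflite-micro
```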

Question 1

Am I doing it right? Is there a better and simpler way of using TFLM? Is TFLM supposed to be used this way?

Question 2

What should TARGET, TARGET_ARCH, etc. be when I want to use this project for other hardware, such as:

Question/Problem 3

The process of integrating the generated TFLM tree into our existing C++ project looks more troublesome than it should be. For example:

Is there any solution to this hassle? At least some documentation?

mansnils commented 8 months ago

Have a look at https://github.com/tensorflow/tflite-micro/issues/2444. In essence, compile a static TFLite Micro library and link it to your application. Some documentation:

https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/cortex_m_generic/README.md
https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/kernels/cmsis_nn/README.md
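For illustration, that approach might look roughly like the sketch below. The `microlite` target and library name follow the cortex_m_generic README; the gen/ output path and the link flags are assumptions and will differ per toolchain and TFLM version.

```shell
# Sketch: build the static library, then link the application against it.
# The gen/ directory layout is a placeholder; check where your build put
# libtensorflow-microlite.a.
make -f tensorflow/lite/micro/tools/make/Makefile \
  TARGET=cortex_m_generic TARGET_ARCH=cortex-m4 OPTIMIZED_KERNEL_DIR=cmsis_nn \
  microlite

arm-none-eabi-g++ main.o \
  -L gen/cortex_m_generic_cortex-m4_default/lib \
  -ltensorflow-microlite \
  -o app.elf  # plus your usual MCU-specific compiler/linker flags
```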

github-actions[bot] commented 7 months ago

"This issue is being marked as stale due to inactivity. Remove label or comment to prevent closure in 5 days."

github-actions[bot] commented 7 months ago

"This issue is being closed because it has been marked as stale for 5 days with no further activity."

Black3rror commented 5 months ago

Thank you @mansnils for answering. I've looked at the links, but my questions remain. Mainly:

  1. I want to create source files, not a static library. How can I do that? Probably the answer is create_tflm_tree.py. If so, why is this process incomplete? It requires adding tensorflow/lite/array.h and tensorflow/lite/array.cc manually, and the include paths look a bit more complicated than they should be (which is fine).

  2. What if I want to use the default implementations of the kernels (for example, when I have a specific MCU that is not among TFLM's supported targets, say Renesas RX)? What should be the values of TARGET and TARGET_ARCH?

rascani commented 5 months ago

  1. I want to create source files, not a static library. How can I do that? Probably the answer is create_tflm_tree.py. If so, why is this process incomplete? It requires adding tensorflow/lite/array.h and tensorflow/lite/array.cc manually, and the include paths look a bit more complicated than they should be (which is fine).

array.h and array.cc are not actually needed by TFLM. The create_tflm_tree script invokes the TFLM make build to determine which sources are needed, and since those aren't, they are not copied. If you are running into compilation issues after that, it is likely that you're missing a TF_LITE_STATIC_MEMORY define.
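For example, when compiling the generated sources in an IDE or a custom build, the define would typically be added to the C++ compiler flags. A hedged sketch for a bare GCC command line (the include paths are placeholders that depend on where the tree was copied; in STM32CubeIDE/PlatformIO the equivalent is adding TF_LITE_STATIC_MEMORY to the preprocessor symbols):

```shell
# Sketch: compile one of the generated TFLM sources with the expected define.
# The -I paths assume the tree was copied to ./tflite-micro and are placeholders.
arm-none-eabi-g++ -DTF_LITE_STATIC_MEMORY \
  -I tflite-micro \
  -I tflite-micro/third_party/flatbuffers/include \
  -c tflite-micro/tensorflow/lite/micro/micro_interpreter.cc \
  -o micro_interpreter.o
```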

  2. What if I want to use the default implementations of the kernels (for example, when I have a specific MCU that is not among TFLM's supported targets, say Renesas RX)? What should be the values of TARGET and TARGET_ARCH?

Omit the OPTIMIZED_KERNEL_DIR make flag and you'll get the default implementation of kernels.
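As a concrete (assumed) example of how this applies to the project generation flow, not passing any of those options should yield a tree containing only the portable reference kernels:

```shell
# Sketch: no TARGET/TARGET_ARCH/OPTIMIZED_KERNEL_DIR, so the generated tree
# contains only the reference kernel implementations, which is what you'd use
# for an MCU (e.g. Renesas RX) that has no dedicated TFLM target.
python3 tensorflow/lite/micro/tools/project_generation/create_tflm_tree.py \
  /tmp/tflm-tree-reference
```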