mocleiri / tensorflow-micropython-examples

A custom micropython firmware integrating tensorflow lite for microcontrollers and ulab to implement the tensorflow micro examples.
MIT License

Questions about the microlite.interpreter API #101

Open mocleiri opened 1 year ago

mocleiri commented 1 year ago

I'm splitting this question from surajkumarpandey in #99 out into a separate issue: Is there documentation for the functions/objects used in the Hello_world example? I am trying to build a custom network that requires a different configuration for running. For instance, I wanted to know what the second parameter of microlite.interpreter() is the size of, and whether the callbacks are optional for the interpreter.

mocleiri commented 1 year ago

The only documentation is the examples themselves and the microlite module code.

The parameters for the microlite.interpreter are:

  1. Tensorflow lite for microcontrollers model loaded into a byte array. The size of the array should match the file size of the model.
  2. Arena size. TFLM needs a memory area to run inference within. This number varies depending on the model being used.
  3. Callback function to set up the input tensor.
  4. Callback function to extract data from the output tensor.
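To make the call flow concrete, here is a host-side sketch of that contract. The real microlite module only runs on-device, so this uses a pure-Python stand-in for the interpreter; the getInputTensor/getOutputTensor/setValue/getValue access pattern follows the hello-world example but should be treated as an assumption, and the arena size and model bytes are placeholders:

```python
# Stand-in for microlite.interpreter(model, arena_size, input_cb, output_cb),
# mirroring the four parameters listed above so the flow can be run anywhere.

class FakeTensor:
    def __init__(self, size):
        self.values = [0.0] * size

    def setValue(self, index, value):
        self.values[index] = value

    def getValue(self, index):
        return self.values[index]

class FakeInterpreter:
    def __init__(self, model_bytes, arena_size, input_cb, output_cb):
        self.model = model_bytes      # byte array sized to the .tflite file
        self.arena_size = arena_size  # scratch memory TFLM would use on-device
        self.input_cb = input_cb
        self.output_cb = output_cb
        self._input = FakeTensor(1)
        self._output = FakeTensor(1)

    def getInputTensor(self, index):
        return self._input

    def getOutputTensor(self, index):
        return self._output

    def invoke(self):
        # Mirrors the described order: input callback, inference, output callback.
        self.input_cb(self)
        # Stand-in "model": doubles the input (a real interpreter runs TFLM here).
        self._output.setValue(0, self._input.getValue(0) * 2.0)
        self.output_cb(self)

results = []

def input_callback(interp):
    # Fill the input tensor before inference runs.
    interp.getInputTensor(0).setValue(0, 1.5)

def output_callback(interp):
    # Read the result out after inference.
    results.append(interp.getOutputTensor(0).getValue(0))

interp = FakeInterpreter(bytearray(b"\x00" * 16), 2048, input_callback, output_callback)
interp.invoke()
print(results)
```

On-device the only change is constructing microlite.interpreter(...) instead of the stub, with the model bytearray read from the uploaded .tflite file.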

The callbacks are called when the microlite.interpreter.invoke() method is called.

They can be stubbed with no-op functions if you want to do the data setup and data extraction in your main loop before and after invoking the interpreter.

I would suggest starting from the callback approach. Use netron to understand the shape of the input and output tensors of your custom model.

Then adjust each callback to work with the shape of your specific model.
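As a sketch of that adjustment, suppose Netron shows a hypothetical model with a [1, 3] input tensor and a [1, 2] output tensor (these shapes are invented for illustration). The callbacks then loop over every element instead of a single index; again, the setValue/getValue element access follows the hello-world example and is an assumption. A minimal tensor stand-in is included so the callbacks can be exercised off-device:

```python
# Minimal stand-ins so the shape-aware callbacks can run off-device.
class Tensor:
    def __init__(self, size):
        self.values = [0.0] * size

    def setValue(self, index, value):
        self.values[index] = value

    def getValue(self, index):
        return self.values[index]

class Interp:
    def __init__(self):
        self._input = Tensor(3)   # pretend input shape [1, 3]
        self._output = Tensor(2)  # pretend output shape [1, 2]

    def getInputTensor(self, index):
        return self._input

    def getOutputTensor(self, index):
        return self._output

sample = [0.1, 0.2, 0.3]  # one value per input element

def input_callback(interp):
    tensor = interp.getInputTensor(0)
    for i, value in enumerate(sample):  # write all 3 input elements
        tensor.setValue(i, value)

def output_callback(interp):
    tensor = interp.getOutputTensor(0)
    scores = [tensor.getValue(i) for i in range(2)]  # read both output elements
    print("scores:", scores)

interp = Interp()
input_callback(interp)
output_callback(interp)
```

The same two functions, unchanged, are what you would pass as the third and fourth arguments to microlite.interpreter on the device.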

tejalbarnwal commented 1 year ago

Hey, just to confirm: to build an example like hello-world from scratch for a custom network, is referring to the following files enough: openmv-libtf.cpp and tensorflow-microlite.c?

Are there any other files I may refer to for the custom network?

mocleiri commented 1 year ago

The purpose of the micropython firmware is that plugging in and experimenting with a new model can all be done from the micropython side.

It's true that for additional context you can look at those C++ and C files, but in general you just need to upload the model into the file system, then adjust the input callback to set up the input tensor and the output callback to read out the inference result for that input.