mocleiri opened 1 year ago
The only documentation is the examples themselves and the microlite module code.
The parameters for the microlite.interpreter are:
The callbacks are invoked when the microlite.interpreter.invoke() method is called.
They can be stubbed with no-op functions if you want to do the data setup and data extraction in your main loop before and after invoking the interpreter.
I would suggest starting from the callback approach. Use netron to understand the shape of the input and output tensors of your custom model.
Then adjust each callback to work with the shape of your specific model.
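A minimal sketch of the callback contract described above. The real on-device module is `microlite`, and the exact constructor signature should be checked against the hello_world example itself; here a small stand-in class mimics the described flow (invoke() runs the input callback, then inference, then the output callback) so the pattern is clear. The arena size and model bytes are placeholders, not values from this thread.

```python
class FakeInterpreter:
    """Stand-in mirroring the behaviour described in this thread:
    invoke() runs the input callback, would run inference, then
    runs the output callback. Not the real microlite module."""

    def __init__(self, model, arena_size, input_callback, output_callback):
        self.model = model
        self.arena_size = arena_size
        self.input_callback = input_callback
        self.output_callback = output_callback

    def invoke(self):
        self.input_callback(self)   # populate the input tensor(s)
        # ... real inference would happen here on-device ...
        self.output_callback(self)  # read back the result tensor(s)


calls = []

def input_callback(interp):
    # In a real model you would write each element of the input tensor
    # here, using the shapes you discovered with netron.
    calls.append("input")

def output_callback(interp):
    # In a real model you would read the inference result here.
    calls.append("output")

# Placeholder model bytes and arena size, for illustration only.
interp = FakeInterpreter(b"model-bytes", 2048, input_callback, output_callback)
interp.invoke()
print(calls)  # ['input', 'output']
```

Stubbing both callbacks with no-op functions and doing the setup/extraction around invoke() in your main loop, as mentioned above, is the same pattern with the bodies moved out of the callbacks.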
Hey, just to confirm: to build an example like hello-world from scratch for a custom network, is referring to the following files enough: openmv-libtf.cpp and tensorflow-microlite.c?
Are there any other files I should refer to for the custom network?
The purpose of the micropython firmware is that plugging in and experimenting with a new model can all be done from the micropython side.
It's true that for additional context you can look at those C++ and C files, but in general you just need to upload the model into the file system, then adjust the input callback to set up the input tensor and adjust the output callback to read out the inference result for that input.
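A hedged sketch of the workflow just described: read the model file that was uploaded to the board's filesystem into a bytearray, which you would then pass to the interpreter along with callbacks tuned to your model. The file name, placeholder bytes, and the temp-directory setup are illustrative only (on-device you would open the uploaded file directly); the read-into-bytearray idiom is the part that carries over.

```python
import os
import tempfile

# Simulate a model file "uploaded to the file system" (a temp file here;
# on the board this would be the .tflite file you copied over).
model_path = os.path.join(tempfile.mkdtemp(), "model.tflite")
with open(model_path, "wb") as f:
    f.write(b"\x1c\x00\x00\x00TFL3")  # placeholder bytes, not a real model

# Read the model into a pre-sized bytearray. os.stat(...)[6] is the file
# size, an indexing style that also works with MicroPython's stat tuple.
model = bytearray(os.stat(model_path)[6])
with open(model_path, "rb") as f:
    f.readinto(model)

# On-device, `model` would now be handed to the microlite interpreter
# together with your input/output callbacks.
print(len(model))  # 8
```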
I'm splitting this question from surajkumarpandey in #99 out into a separate issue.
Is there documentation for the functions/objects used in the hello_world example? I am trying to build a custom network that requires a different configuration to run. For instance, I wanted to know about the second parameter of microlite.interpreter(): what size does it refer to? And are the callbacks optional for the interpreter?