Origami-Cloudless-AI / TinyMLaaS-2023-winter

Run Hello World of TensorFlow Lite for micro automatically in Docker
https://Origami-TinyML.github.io/tflm_hello_world
Apache License 2.0

WebApp compiles to a TinyML model #38

Open doyu opened 1 year ago

doyu commented 1 year ago


Steps

  1. You'll convert a model in a Jupyter notebook named "compiling.ipynb", as instructed on the TensorFlow Lite for Microcontrollers site.
  2. "compiling.ipynb" should include some tests and documentation, which will be processed by the nbdev_* commands, https://nbdev.fast.ai/getting_started.html
  3. nbdev will convert "compiling.ipynb" to "compiling.py" automatically.
  4. You'll import "compiling.py" as a library in "tflm_hello_world/pages/5_Compiling.py".
  5. You'll implement a GUI for this compiling step with the Streamlit toolkit: list, search, and choose a trained model from a CSV file via a Pandas DataFrame, in a simple Streamlit UI.
    • Trigger compiling
    • Show the result of the TinyML model
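Step 5 above (list, search, and choose a trained model from a CSV via a Pandas DataFrame) could be sketched roughly as below. The CSV layout, column names, and function name here are my assumptions, not anything specified in the issue:

```python
import pandas as pd

# Hypothetical model registry; in the app this would be loaded from a CSV,
# e.g. df = pd.read_csv("models.csv"). Column names are assumptions.
df = pd.DataFrame({
    "name": ["hello_sine", "human_detect"],
    "path": ["models/hello_sine.tflite", "models/human_detect.tflite"],
})

def search_models(df: pd.DataFrame, query: str) -> pd.DataFrame:
    """Filter the model list by a case-insensitive substring match on name."""
    return df[df["name"].str.contains(query, case=False)]

hits = search_models(df, "human")  # one row: human_detect
```

In the Streamlit page, the filtered DataFrame would be shown with `st.dataframe`, the model picked with `st.selectbox`, and compiling triggered from an `st.button` handler.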

Hello is just a dummy use case. You won't need any fancy UI for now; this is just preparation for the coming real use case.

This is the destination page, https://doyu-tflm-hello-world-tinymlaas-yaz40u.streamlit.app/Compiling

This may be similar to https://github.com/doyu/uoh-software-project-time-report/

ArttuLe commented 1 year ago

Is the goal of this task to convert a normal TensorFlow model into TFLite and/or a C source for TFLite Micro? I didn't really understand what is meant by "compiling" in this context :)

doyu commented 1 year ago

I think this is the same as https://github.com/Origami-TinyML/tflm_hello_world/issues/54: just generating a flatbuffer model binary. The rest of the process should be handled by https://github.com/Origami-TinyML/tflm_hello_world/issues/55, since the application logic and the model should be independent. You should work on human detection now, without Hello. Did I answer your question here?

ArttuLe commented 1 year ago

Yes, I'm working on human detection now. Thanks for the clarification :)

ArttuLe commented 1 year ago

How should we go about implementing this in the WebApp, since the conversion to a .cc model is done via the command line? https://www.tensorflow.org/lite/microcontrollers/build_convert#convert_to_a_c_array
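Since `xxd` is a shell tool, one option would be for the WebApp to replicate its `-i` output in pure Python instead of shelling out. A minimal sketch (the function name is mine, and the 12-bytes-per-line wrapping just mimics `xxd -i` formatting):

```python
def bytes_to_c_array(data: bytes, var_name: str = "g_model") -> str:
    """Render raw bytes as a C source snippet, similar to `xxd -i` output."""
    lines = []
    for i in range(0, len(data), 12):  # xxd -i wraps at 12 bytes per line
        chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        lines.append(f"  {chunk}")
    body = ",\n".join(lines)
    return (
        f"unsigned char {var_name}[] = {{\n{body}\n}};\n"
        f"unsigned int {var_name}_len = {len(data)};\n"
    )

# Example: render a dummy payload (a real call would read the .tflite file,
# e.g. bytes_to_c_array(open("model.tflite", "rb").read()))
cc_source = bytes_to_c_array(b"\x1c\x00\x00\x00TFL3", "g_model")
```

This keeps the whole pipeline inside the Streamlit process, with no dependency on `xxd` being installed on the host.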

doyu commented 1 year ago

Instead of dumping with xxd, can't the C program read the flatbuffer .tflite binary file directly?

Alternatively, you could use a linker file to embed the binary in it, too.

ArttuLe commented 1 year ago

Just got xxd to work :) Should we still keep xxd, or skip it and use the .tflite files directly?

doyu commented 1 year ago

xxd is OK for now ;)

ArttuLe commented 1 year ago

Should I make a new PR for this, or push it to #67, since it's just one additional method on top of the existing model-training code?

doyu commented 1 year ago

Let's leave them separated.