mbz4 opened 7 months ago
From HarvardX course: TinyML Arduino Lib
It may also be possible to extend support to the TF Federated Learning framework, but I recommend focusing on TF Lite Micro.
Would probably also be good to focus on one particular example like the gesture recognition of the wand - however, as this might be an input device, using ESP-NOW to reduce the latency of triggering the action could become a pre-dependency. Still, focusing on a specific example will help us see potential problems like the one described earlier.
Yes, here are ready-made examples, with hello_world highlighted: https://github.com/tinyMLx/arduino-library/tree/main/examples/hello_world This example predicts a sine wave for built-in LED breathing.
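Roughly, the idea of that example is this (a simplified stand-in sketch, not the actual example code; the pin and step constants are assumptions):

```cpp
// Simplified stand-in for the hello_world example: instead of running the
// TFLM model, we compute sin(x) directly and map it to LED brightness,
// just to illustrate what the example's output stage does.
const int LED_PIN = 2;       // assumed PWM-capable pin; often LED_BUILTIN
const float STEP = 0.05f;    // how far along the sine wave we move per loop
float x = 0.0f;

void setup() {
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  // In the real example, y would come from the model output after Invoke().
  float y = sin(x);

  // Map y in [-1, 1] to a PWM duty cycle in [0, 255] -> "breathing" LED.
  int brightness = (int)((y + 1.0f) * 127.5f);
  analogWrite(LED_PIN, brightness);

  x += STEP;
  if (x > 2.0f * PI) x = 0.0f;   // wrap around one full period
  delay(10);
}
```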
Even more resources:
Doodled potential (tangible) objectives:
Basically, the entire MLOps pipeline could fit here.
Node-RED supports TensorFlow.js out of the box; you just need a few utilities installed first.
Google Cloud Vision API | LABEL_DETECTION on a Raspberry Pi with a webcam ... an example of integrating an online service:
node-red-contrib-tensorflow ... works with pretrained models (local inference):
Meanwhile, with TF Lite Micro we can deploy models for inferencing on an ESP8266 (instead of using online services or inferencing on a local gateway). Hence, we can pipe data to the node (ESP) and get data out of it (i.e., labels if running classification).
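On the "labels out" side, the output of a classifier is just a score per class; a minimal sketch of turning that into a label and piping it out over Serial (the class names and the score array here are hypothetical placeholders):

```cpp
#include <Arduino.h>

// Hypothetical class labels for a gesture classifier; the real set depends on
// whatever model ends up deployed.
const char* kLabels[] = {"wing", "ring", "slope", "unknown"};
const int kNumLabels = sizeof(kLabels) / sizeof(kLabels[0]);

// Turn a vector of per-class scores (e.g. the model's output tensor contents)
// into the index of the best-scoring class.
int argmax(const float* scores, int n) {
  int best = 0;
  for (int i = 1; i < n; i++) {
    if (scores[i] > scores[best]) best = i;
  }
  return best;
}

void reportLabel(const float* scores) {
  int idx = argmax(scores, kNumLabels);
  // "Piping data out" is just Serial here; MQTT/ESP-NOW would slot in the same way.
  Serial.print(kLabels[idx]);
  Serial.print(" (");
  Serial.print(scores[idx], 3);
  Serial.println(")");
}

void setup() { Serial.begin(115200); }

void loop() {
  float fakeScores[] = {0.1f, 0.7f, 0.15f, 0.05f};  // stand-in for a real model output
  reportLabel(fakeScores);
  delay(1000);
}
```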
To deploy a model to a node we need to have the model binary available - the model should be trained first and optimised for microcontroller inferencing.
> focus on one example like the gesture recognition of wand
It makes sense to test-run TFLM one example at a time; the M5StickC would be an ideal candidate:
Potentially even without ESP-NOW, we can try piping outputs through to a dashboard chart... or servo actuation? Ideas welcome.
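For the servo idea, a rough sketch of mapping a classification result to a servo angle (using the standard Arduino Servo library; the pin, angle mapping, and cycling "fake" class are assumptions, and an ESP32 target would use the ESP32Servo library instead):

```cpp
#include <Servo.h>   // on ESP32, ESP32Servo provides the same interface

Servo actuator;
const int SERVO_PIN = 9;   // assumed pin

// Swing the servo to a fixed angle per recognised gesture class (0..2),
// just to demonstrate wiring an inference result to physical actuation.
void actuate(int classIndex) {
  const int angles[] = {0, 90, 180};
  if (classIndex >= 0 && classIndex < 3) {
    actuator.write(angles[classIndex]);
  }
}

void setup() {
  actuator.attach(SERVO_PIN);
}

void loop() {
  // Stand-in for a real inference result; cycle through the classes.
  static int fakeClass = 0;
  actuate(fakeClass);
  fakeClass = (fakeClass + 1) % 3;
  delay(1000);
}
```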
This one seems more focused on the M5StickC, but it appears to use TinyML rather than TFLite: https://github.com/kjwu/M5StickCPlus_TFLite_Gesture
Actually, I think we should measure the IMU output on the stick and then compute with Node-RED. Maybe we should first try to implement gesture recognition this way (a rough IMU-streaming sketch follows the reference below):
P. Asteriou, J. Diephuis and P. Wintersberger, "MagicMoves: A Gesture Creation Framework for Virtual Reality Applications," 2023 International Conference on Intelligent Metaverse Technologies & Applications (iMETA), Tartu, Estonia, 2023, pp. 1-6, doi: 10.1109/iMETA59369.2023.10294473.
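As a first step toward "measure the IMU output on the stick", a minimal sketch of streaming raw accelerometer data for Node-RED to pick up (assuming the M5StickC Arduino library's IMU API; the Plus variant would include M5StickCPlus.h, and the sample rate and CSV format are just placeholders):

```cpp
#include <M5StickC.h>

// Stream raw accelerometer readings over Serial as CSV so they can be bridged
// (e.g. via a serial-in or MQTT node) and processed in Node-RED.
float ax, ay, az;

void setup() {
  M5.begin();
  M5.IMU.Init();           // initialise the built-in MPU6886
  Serial.begin(115200);
}

void loop() {
  M5.IMU.getAccelData(&ax, &ay, &az);
  Serial.print(ax, 4); Serial.print(',');
  Serial.print(ay, 4); Serial.print(',');
  Serial.println(az, 4);
  delay(20);               // ~50 Hz, plenty for coarse gesture data
}
```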
TF Lite Micro (link - supported platforms) makes local node ML inferencing possible, enabling powerful example applications like:
… and other cool demos. Inferencing locally on a node is less energy demanding than transmitting and inferencing on a cloud service.
TF Lite Micro (git) works by including several dependencies on a supported node (an ESP32 DevKit, for example), piping sensor data in for inferencing, and piping the results back out over the network.
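A skeleton of what that looks like on the device, following the pattern used by the hello_world example (a sketch, not a drop-in implementation: header names and the interpreter constructor differ slightly between TFLM versions, the arena size is model dependent, and g_model is the byte array from model.cpp described below):

```cpp
#include <TensorFlowLite.h>   // Arduino_TensorFlowLite library wrapper
#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "model.h"            // declares g_model / g_model_len (the byte array)

namespace {
tflite::MicroErrorReporter error_reporter;
tflite::AllOpsResolver resolver;       // registers every op; MicroMutableOpResolver is leaner
constexpr int kArenaSize = 8 * 1024;   // working memory for tensors; size is model dependent
uint8_t tensor_arena[kArenaSize];

const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
}

void setup() {
  Serial.begin(115200);

  model = tflite::GetModel(g_model);   // wrap the flatbuffer byte array
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kArenaSize, &error_reporter);
  interpreter = &static_interpreter;
  interpreter->AllocateTensors();      // carve input/output tensors out of the arena
}

void loop() {
  // Pipe data in: fill the input tensor (a single float here, as in hello_world).
  interpreter->input(0)->data.f[0] = 0.5f;

  if (interpreter->Invoke() == kTfLiteOk) {
    // Pipe data out: read the result and ship it wherever (Serial, MQTT, ESP-NOW...).
    Serial.println(interpreter->output(0)->data.f[0], 4);
  }
  delay(100);
}
```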
The model loaded to the device is a byte array compiled into model.cpp, which in turn is produced from the training output, where it can also be pre-tested using the TFLite Interpreter, for example in a cloud service like Edge Impulse, your own Jupyter environment, or a Google Colab.
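For reference, that byte array is just the converted .tflite flatbuffer dumped as C source (commonly with `xxd -i model.tflite`), along these lines (the symbol names follow the hello_world example and the byte contents are placeholders):

```cpp
// model.h
extern const unsigned char g_model[];
extern const int g_model_len;

// model.cpp -- the trained, converted .tflite model embedded as a C array.
// The flatbuffer starts with a root-table offset followed by the "TFL3"
// file identifier; the actual bytes below are placeholders.
#include "model.h"

alignas(8) const unsigned char g_model[] = {
    0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33,
    /* ...thousands more bytes of the flatbuffer... */
};
const int g_model_len = sizeof(g_model);
```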