Origami-Cloudless-AI / TinyMLaaS-2023-winter

Run Hello World of TensorFlow Lite for micro automatically in Docker
https://Origami-TinyML.github.io/tflm_hello_world
Apache License 2.0

WebApp builds docker image to run TinyML (Compiling) #55

Open doyu opened 1 year ago

doyu commented 1 year ago

Parent issue: #45. User stories: https://miro.com/app/board/uXjVPwQdIjc=/

Acceptance test (RF)

  1. WebApp reads a CSV file which holds a list of compiled TinyML binaries.
  2. WebApp shows a list of them.
  3. User chooses one of them.
  4. User chooses a target device.
  5. User pushes a "Build" button.
  6. WebApp generates a runnable Docker image, which is supposed to run human detection(?)
FexbYk23 commented 1 year ago

@doyu could you clarify where the Docker container should be created and run? Should Docker be installed and used inside the webapp container, or should the container be built on the host system, outside of any container?

doyu commented 1 year ago

@FexbYk23 The Docker image referred to here is a device simulator which will predict human detection in an image and send its prediction back to the WebApp. What I had in mind is two Docker containers running under docker-compose: one for the WebApp and another for the device simulator above. Does this make sense here?

doyu commented 1 year ago

JFYI for listing, https://discord.com/channels/1062449235729068212/1062449236177854526/1076434355842457630

FexbYk23 commented 1 year ago

> @FexbYk23 The Docker image referred to here is a device simulator which will predict human detection in an image and send its prediction back to the WebApp. What I had in mind is two Docker containers running under docker-compose: one for the WebApp and another for the device simulator above. Does this make sense here?

Yes, that would be the latter option in my question. In that case, the thing I don't understand is how the webapp is supposed to build the other container.

My understanding was that the containers defined in the docker-compose file are only built before the app starts.

doyu commented 1 year ago

IIUC your question, docker build and docker run are totally different things, and each runs independently. You could even pull the image from Docker Hub once it has been built and registered there. In this case, once you docker-build a container image it is stored locally, and you can reference and run it from docker-compose locally without Docker Hub. So, answering your question, the docker build could be done from a Makefile, for example (or from a workflow); the rest is independent. Did I answer your question correctly here?
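
For illustration, a minimal sketch of the build-vs-run separation using the docker-py SDK; the `./device_sim` path and the `device-sim` tag are placeholders, not something fixed in this repo.

```python
# Minimal sketch: building and running are independent steps. The built image
# stays in the local daemon's image store, so docker-compose can later refer
# to it by tag (image: device-sim:latest) without pulling from Docker Hub.
import docker

client = docker.from_env()

# Build step -- could equally be driven from a Makefile target or a CI workflow.
client.images.build(path="./device_sim", tag="device-sim:latest")

# Run step -- entirely separate; can happen any time later.
client.containers.run("device-sim:latest", detach=True)
```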

ArttuLe commented 1 year ago

I think the question is: how do we build the image for the "edge device" inside the Streamlit container, so we can then run it alongside the Streamlit container for prediction? @doyu https://stackoverflow.com/questions/54818485/create-docker-container-from-within-a-container According to this, it is possible but not considered very good practice.

doyu commented 1 year ago

Ok, finally I got your question correctly ;). IIUC, I've run docker-in-docker successfully before: it just spawns a new Docker container instance once you set the correct permissions on the Docker socket file. https://shisho.dev/blog/posts/docker-in-docker/

That was somewhat different from what we need here, though. Here we need to *build* docker-in-docker. It would be fine if the generated Docker image ended up in host storage, but if it is stored in the container's own storage it won't work, since that is isolated from the host. Even in that case, you could register the new Docker image on Docker Hub, https://hub.docker.com/, and docker-pull it later from anywhere.

So, as a first step, you could try building docker-in-docker and see where the new Docker image is generated. If it ends up only in the container-local Docker storage, you would just need to register it on Docker Hub before pulling. Theoretically, "building docker-in-docker" should work in addition to "running docker-in-docker", but practically this needs to be verified first ;)
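
As a sketch of the socket-sharing approach (not a final implementation): if the WebApp container is started with the host's Docker socket bind-mounted (e.g. `/var/run/docker.sock:/var/run/docker.sock` in docker-compose), then "docker-in-docker" builds actually go to the host daemon, so the resulting image lands in host storage and can be run as a sibling container without Docker Hub. The paths, tags, and the use of the docker-py SDK below are assumptions.

```python
# Sketch of building an image "docker-in-docker" from the WebApp container,
# assuming the host's /var/run/docker.sock is bind-mounted into it. Because the
# socket belongs to the host daemon, the built image ends up in *host* storage,
# not in this container's isolated filesystem.
import docker

# Talk to the host daemon through the mounted socket.
client = docker.DockerClient(base_url="unix://var/run/docker.sock")

# Build the device-simulator image; path and tag are placeholders.
image, build_logs = client.images.build(path="/app/device_sim", tag="device-sim:ci")

# The image is now visible to the host daemon and can be started as a sibling
# container next to the WebApp, no Docker Hub round-trip needed.
sim = client.containers.run("device-sim:ci", detach=True)
print("device simulator running:", sim.short_id)
```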

ArttuLe commented 1 year ago

Should I just use some example code in the inferencing docker image or build a proper inferencing app with tflite micro? Would something like this be ok for the demo case? https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/raspberry_pi

doyu commented 1 year ago

My original thought for this item was to treat docker_x86 as a simulated device that serves a trained tflite model, in the context of CI. With this, we could verify the whole CI/CD path without any external dependency. So I suppose you'd just take the "human detection model" you trained & squeezed and install it in docker_x86 (Ubuntu).
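
Below is a minimal sketch of what the simulated device's inference side could look like, assuming a person-detection model file (the name `person_detect.tflite` is just a placeholder) and the `tflite_runtime` package installed in the docker_x86 image; the actual model, preprocessing, and the HTTP glue that sends the prediction back to the WebApp are not pinned down here.

```python
# Sketch of the simulated device's inference step with tflite_runtime.
# "person_detect.tflite" is a placeholder name; preprocessing and the reply
# back to the WebApp are left out.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="person_detect.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def predict(image: np.ndarray) -> np.ndarray:
    """Run one inference on an image already resized/quantized to the model's input spec."""
    interpreter.set_tensor(input_details[0]["index"],
                           image.astype(input_details[0]["dtype"]))
    interpreter.invoke()
    return interpreter.get_tensor(output_details[0]["index"])
```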

In the final / real use case, I don't think we should consider Docker beyond CI. We'll probably have external IoT devices with an ARM Cortex-M(?), connected via WiFi directly or through a bridging device (e.g. an RPi). This really depends on how it's configured; we probably won't care about Docker on the RPi at all, and a direct setup on the RPi would be enough? I don't consider the RPi a TinyML device, since Linux runs on it.