lifeofborna opened this issue 1 year ago
When I looked into using Streamlit as a backend in the last sprint, it seemed that its backend is only meant to be used by the Streamlit frontend, not by the developer. I'm not sure there is even any documentation for the backend API.
Regarding st.file_uploader, I don't think it's possible to use it without a GUI. Even if you somehow manage to send a file to the backend API, Streamlit will only store it in RAM. You would need a way to access that memory from another session, or a way to run "frontend" code on upload to store it as a file.
We could probably postpone this item for now and get by with the current REST server, at least for Sprint 3.
> Even if you manage to somehow send a file to the backend API, streamlit will only store it in RAM. You would need a way to access that memory from another session, or a way to run "frontend" code on upload to store it as a file.
For file storage: once Streamlit has the upload in RAM, the file could be stored in external storage (e.g. Google Drive, https://www.projectpro.io/recipes/upload-files-to-google-drive-using-python). Streamlit can be thought of as a Python script with a web GUI. I'm also wondering whether Streamlit could technically access local storage if the user grants permission (were it implemented in the Streamlit API), similar to how st.camera_input gains access to a local camera device via the HTML5 API.
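The step above — getting the in-RAM upload onto durable storage — can be sketched with a small helper. `persist_upload` is a hypothetical name, not part of the Streamlit API; it only assumes the uploaded object behaves like Streamlit's `UploadedFile` (has `.name` and `.getbuffer()`/`.read()`). Pushing the resulting local file on to Google Drive or similar would be a separate step.

```python
import pathlib

def persist_upload(uploaded, dest_dir="uploads"):
    """Write an UploadedFile-like object (anything with .name and
    .getbuffer() or .read()) to local disk, from where it could be
    pushed on to external storage such as Google Drive."""
    dest = pathlib.Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    data = uploaded.getbuffer() if hasattr(uploaded, "getbuffer") else uploaded.read()
    # Strip any directory components from the client-supplied name.
    path = dest / pathlib.Path(uploaded.name).name
    path.write_bytes(bytes(data))
    return path

# Hypothetical wiring inside a Streamlit app:
#   import streamlit as st
#   f = st.file_uploader("Upload an image")
#   if f is not None:
#       persist_upload(f)
```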
> When I looked into using streamlit for backend in the last sprint, it seemed that its backend is only meant to be used by the streamlit frontend and not by the developer. I'm unsure if there's even any documentation for the backend API.
As for the backend API, Streamlit seems to use protobuf between its frontend and backend. I may want to dig a bit deeper into those interfaces, but probably another time. I originally picked Streamlit for how little coding it requires :)
So streamlit-webrtc might be the easiest out-of-the-box option? This topic could be revisited in a later sprint.
Originally I suggested OpenAPI for a REST server to @FexbYk23 when I was asked, but now it doesn't seem like a good idea to push forward. A REST API server mainly implements backend logic so that a flexible (fancy?) frontend can be implemented independently. This frontend/backend split doesn't fit well with Streamlit, since Streamlit itself is a full-stack webapp, especially for a PoC. Including a REST API server in Streamlit may lead to an unnecessarily complicated architecture.

The REST server was originally proposed only for passing image files over the network. I'm wondering whether this could instead be done with the existing st.file_uploader interface without a GUI(?): https://docs.streamlit.io/library/api-reference/widgets/st.file_uploader, as explained here: https://discord.com/channels/1062449235729068212/1062449236177854526/1076883920248901743

So please first investigate whether we could send files to st.file_uploader without a GUI (e.g. with curl). If this works, we don't need the TCP server; we could send files to the existing st.file_uploader instead.
For longer-term development, especially for low-latency requirements, we may want to investigate WebRTC, mentioned here: https://discord.com/channels/1062449235729068212/1062449236177854526/1076816520002408519 . Most real-time object detection examples directly use CV (OpenCV) interfaces. That requires direct access to a local camera device, which is not our case. Our IoT sensors (cameras) are remotely located; CV has no direct access to them, so captured images have to be sent over the network to TinyMLaaS running on a cloud VM. streamlit-webrtc seems to handle this quite well.
So for the Sprint 3 demo, we could get by with the CV interface by demonstrating TinyMLaaS locally. As a first step, we could consider replacing the current TCP server with something that makes use of st.file_uploader without a GUI. Then, probably in Sprint 4 or 5, we could start using streamlit-webrtc in TinyMLaaS with remote IoT camera devices. Those remote IoT camera devices don't need to run WebRTC themselves; instead there should be a bridging device (a laptop or RPi) between TinyMLaaS and the IoT camera devices. That bridging device would send captured images to TinyMLaaS via WebRTC.
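If the headless st.file_uploader route turns out not to work, the interim "TCP server" for passing image files could be as small as a stdlib HTTP endpoint. This is a minimal sketch, not the project's actual server; the `/upload` path, in-RAM storage, and port choice are all assumptions for illustration:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class ImageReceiver(BaseHTTPRequestHandler):
    """Accept raw image bytes via POST and keep the latest one in RAM."""
    latest = None  # class-level slot holding the most recent upload

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        ImageReceiver.latest = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

    def log_message(self, *args):  # keep the demo quiet
        pass

def serve(port=0):
    """Start the receiver on a background thread; returns (server, actual_port).
    port=0 lets the OS pick a free port."""
    srv = HTTPServer(("127.0.0.1", port), ImageReceiver)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv, srv.server_address[1]
```

A sensor-side (or bridging-device) client could then post a capture with something like `curl --data-binary @img.jpg http://host:port/upload`.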
One note: we may get by with st.file_uploader even without streamlit-webrtc if the IoT camera sensor sends image files only rarely (i.e. higher latency is acceptable).
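For that low-rate case, the sensor side only needs to build an ordinary `multipart/form-data` upload, which is what a file-upload endpoint (whether Streamlit's or our own) would typically expect. A stdlib-only encoder sketch follows; the field name and any endpoint URL it would be posted to are assumptions, not confirmed Streamlit internals:

```python
import uuid

def encode_multipart(field, filename, data, content_type="application/octet-stream"):
    """Build a multipart/form-data body for a single file using only the stdlib.
    Returns (body_bytes, content_type_header_value)."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: {content_type}\r\n\r\n"
    ).encode() + data + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

# Hypothetical use: pass the body to urllib.request with the returned
# Content-Type header set, targeting whatever upload endpoint we settle on.
```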
Any comments would be really appreciated. (Commented by @doyu.)
https://github.com/Origami-TinyML/tflm_hello_world/pull/90#issuecomment-1436315560