ezio-melotti opened this issue 2 years ago
@ezio-melotti Looks good! So we want the live backend to plug into the docker-compose setup in the same way the frontend plugs into it. Essentially the "docker-compose" container will be responsible for managing the network, database, and sim version of simoc. That means we'll need to spin up three different containers -- simoc-web, simoc-sam, and simoc -- for live mode?
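To make that concrete, a compose file along those lines might look like the sketch below. Service names, image names, and ports are guesses for illustration, not the actual SIMOC configuration:

```yaml
# docker-compose.yml -- hypothetical sketch, not the actual SIMOC config
version: "3"
services:
  simoc:             # regular backend (Flask), serves the app on 8000
    image: simoc
    ports:
      - "8000:8000"
    depends_on:
      - db
  simoc-sam:         # live/socketio backend
    image: simoc-sam
    ports:
      - "5000:5000"
  simoc-web:         # dev frontend, only needed when developing the frontend
    image: simoc-web
    ports:
      - "8080:8080"
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
```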
> We also have at least two different scenarios we want to cover:
> - Regular users using the live mode from ngs.simoc.space;
> - People living inside SAM and/or mission control
I was wondering if maybe these would be three different scenarios:
Each would have access to different parts of simoc, have different privileges, and access SAM differently. Or maybe the pairing should be:
> We might also want to support a standalone installation. In that case we could reuse the same socketio container and use docker-compose to bundle it with e.g. the nginx and MySQL containers...
If you do want a standalone installation, maybe we should work on building this first. It'll be easier to test everything on this setup. Once this is built, maybe we can find a way to plug it into the existing backend and use this container as the socketio backend container in your first diagram.
> That means we'll need to spin up three different containers, simoc-web, simoc-sam, and simoc for live mode?
The container of the simoc-web repo is required only if you are developing on the frontend. In that case you will need to start the regular backend, the socketio container, and the simoc-web container. If you are just working on the backend and you are not using the dev frontend for testing (e.g. if you are using a standalone script), then you only need the backend and socketio containers.
> I was wondering if maybe these would be three different scenarios:
They could be three, depending on how we handle mission control. I'm assuming both mission control and the people inside SAM have access to the same data -- the only difference will be in where they are accessing the data from.
> If you do want a standalone installation, maybe we should work on building this first. It'll be easier to test everything on this setup.
I think a good sequence would be:
1. standalone socketio server, no containers
2. add a standalone socketio client for testing, still no containers
3. put the socketio server in a container, test it with the socketio client
4. add the socketio server to the same docker network of the regular backend
5. connect to the socketio server through the dev frontend

I'm doing some testing following this approach.
> The container of the simoc-web repo is required only if you are developing on the frontend. In that case you will need to start the regular backend, the socketio container, and the simoc-web container. If you are just working on the backend and you are not using the dev frontend for testing (e.g. if you are using a standalone script), then you only need the backend and socketio containers.
That's right, the app is served on 8080 and 8000. So the socketio container would be served on its own port, but would require 8000 to be up like the frontend does during development.
> They could be three, depending on how we handle mission control. I'm assuming both mission control and the people inside SAM have access to the same data -- the only difference will be in where they are accessing the data from.
To me MC seems to be more in the middle: they can access sam remotely but, unlike regular users, can view all the data the inhabitants can view. Maybe Kai can clear up some of the ambiguity.
> I think a good sequence would be:
> 1. standalone socketio server, no containers
> 2. add a standalone socketio client for testing, still no containers
> 3. put the socketio server in a container, test it with the socketio client
> 4. add the socketio server to the same docker network of the regular backend
> 5. connect to the socketio server through the dev frontend
For number 2, is the client going to be the frontend (8080), so we'll be connecting the socketio server directly to the frontend? Then it looks like by number 5 we'll be indirectly connecting the socketio container (<Port?>) and the frontend (8080) through the backend (8000).
The socketio server will listen on some port (e.g. 5000), and will accept connections from whoever tries to connect. Once the socketio server is inside the container, that port will need to be mapped so that it is accessible from outside the container.
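Assuming the server inside the container listens on 5000, publishing that port might look like this (the image name here is a guess):

```shell
# Build the image and publish port 5000 so clients outside the container
# can reach the socketio server (image/tag name hypothetical).
docker build -t simoc-sam .
docker run --rm -p 5000:5000 simoc-sam
```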
For 5. above, I think the frontend will still connect directly to the socketio server/container, so there shouldn't be any "connection" between the socketio server and the backend, except for the fact that all the containers need to be in the same docker network.
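As a sketch of step 1 of the sequence (standalone socketio server, no containers): assuming the python-socketio and eventlet packages, a minimal server could look roughly like this. The event names, port, and sample payload are placeholders, not an agreed protocol:

```python
# server.py -- minimal standalone Socket.IO server (sketch).
# Assumes `pip install python-socketio eventlet`; event names are placeholders.
import eventlet
import socketio

sio = socketio.Server(cors_allowed_origins='*')
app = socketio.WSGIApp(sio)

@sio.event
def connect(sid, environ):
    print('client connected:', sid)

@sio.event
def get_sensor_data(sid, data):
    # Placeholder: in the real container this would come from the sensors/DB.
    sio.emit('sensor-reading', {'co2': 409.8}, to=sid)

@sio.event
def disconnect(sid):
    print('client disconnected:', sid)

if __name__ == '__main__':
    # Listen on port 5000 on all interfaces.
    eventlet.wsgi.server(eventlet.listen(('', 5000)), app)
```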
I wrote a sample server and two sample clients (in JS and Python) and pushed them to the repo. I also added a Dockerfile and a README.md with some instructions about building and running the container and the clients/server.
I was able to run the server both inside and outside the container, and to use the standalone Python client from outside the container to connect to the server.
This means that by specifying the correct address/port from the frontend code (and possibly by adding the container to the Docker network), you should be able to connect to the socketio server.
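For reference, a standalone Python test client for this kind of setup might look roughly like the sketch below (again assuming python-socketio, with placeholder event names and address):

```python
# client.py -- standalone Socket.IO test client (sketch).
# Assumes `pip install "python-socketio[client]"` and a server on localhost:5000.
import socketio

sio = socketio.Client()

@sio.on('sensor-reading')
def on_reading(data):
    print('reading:', data)

sio.connect('http://localhost:5000')
sio.emit('get_sensor_data', {})
sio.wait()  # keep receiving pushed data until disconnected
```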
These are some changes that we should implement next:
- a `Sensor` base class (#6)
- a `main` that imports and runs the sensors
- a `utils` module with some shared utils (e.g. to process/convert readings)
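For the `utils` module, the shared helpers might look something like this sketch. The function names, units, and reading shape are entirely hypothetical -- the actual sensors and formats are still TBD:

```python
def c_to_f(celsius):
    """Convert a Celsius reading to Fahrenheit."""
    return celsius * 9 / 5 + 32

def format_reading(sensor_id, value, unit):
    """Normalize a raw reading into the dict shape pushed to clients.

    Hypothetical shape -- the real payload format is not decided yet.
    """
    return {'sensor': sensor_id, 'value': round(value, 2), 'unit': unit}
```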
This issue proposes a possible architecture for the Docker containers setup and for the communication between the frontend and backend(s) in different scenarios.
The simplest architecture is to add a new standalone socketio container that can be plugged into the existing backend, in the same way the dev frontend does. Both the dev frontend and the socketio backend have their own repo, their own container, and just connect to the current backend without using docker-compose:
Ideally this new socketio container would be standalone, meaning that it will launch a socketio server that will wait for connections without having to rely on the other containers to function (similarly to how you can start the dev frontend without the other containers). The actual clients connecting to it could then be the regular frontend (which lives in the `Flask` container), the dev frontend (which has its own standalone container), a standalone Python script, a CLI tool, etc.:

(Both the dev frontend and a standalone Python script could connect to the socketio container and request the data. The socketio container won't know nor discriminate, and will send data to whoever requests them.)
We also have at least two different scenarios we want to cover:
- Regular users using the live mode from ngs.simoc.space;
- People living inside SAM and/or mission control
I think that for both these scenarios we can spin up both backends: the regular backend serving the frontend, dealing with user login, and possibly with simulations/predictions; and the sam backend sending live data whenever the user selects the live mode from the main menu.
So the sequence diagram for a user opening the frontend, logging in, selecting the live mode, and disconnecting will look like:
(Note: the diagram is not meant to be an accurate representation of all the steps, but just to outline the general idea. When the user selects the live mode, the frontend will connect to the sam backend via socketio. After the frontend requests sensor data, the sam backend will keep pushing data to the frontend until it disconnects.)
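The "keep pushing until the client disconnects" loop can be sketched without any library; the stand-in below uses plain callables (all names hypothetical) just to illustrate the flow:

```python
def push_until_disconnect(readings, send, is_connected):
    """Push readings to a client until it disconnects.

    `readings` is any iterable of sensor readings, `send` delivers one
    reading to the client, and `is_connected` reports whether the client
    is still there. Returns the number of readings actually sent.
    """
    sent = 0
    for reading in readings:
        if not is_connected():
            break
        send(reading)
        sent += 1
    return sent

# Simulated client that disconnects after receiving 3 readings.
received = []
def send(reading):
    received.append(reading)
def is_connected():
    return len(received) < 3

n = push_until_disconnect(range(10), send, is_connected)
```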
We might also want to support a standalone installation. In that case we could reuse the same socketio container and use docker-compose to bundle it with e.g. the nginx and MySQL containers:
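A standalone bundle along those lines might look like the sketch below; the service and image names are guesses for illustration only:

```yaml
# docker-compose.yml for a standalone installation -- hypothetical sketch
version: "3"
services:
  socketio:
    image: simoc-sam      # image name is a guess
    ports:
      - "5000:5000"
  nginx:
    image: nginx
    ports:
      - "80:80"
    depends_on:
      - socketio
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
```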
This could be accessed directly, or if the SIMOC frontend is needed it could be added too and served statically (with the login and simulation mode disabled, since those require the regular backend). The sequence diagram will then look like:
This architecture doesn't discuss how the socketio container obtains the sensor data. It could read them from a DB or Redis (that could be in the same container, in a new separate container, or in the same MySQL DB/Redis used by the regular backend). The process that actually reads data from the sensor might also be part of the same container, or a separate container/repo, or even standalone scripts that don't use Docker.
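As a stand-in for the "read them from a DB" option, here is a minimal stdlib sketch. It uses sqlite3 purely for illustration (the regular backend uses MySQL/Redis), and the table/column names are hypothetical:

```python
import sqlite3

# In-memory DB standing in for wherever the sensor process writes readings.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE readings (sensor TEXT, ts INTEGER, value REAL)')
conn.executemany('INSERT INTO readings VALUES (?, ?, ?)', [
    ('co2', 1, 409.8), ('co2', 2, 410.1), ('temp', 1, 21.5),
])

def latest_reading(conn, sensor):
    """Return the most recent value for the given sensor, or None."""
    row = conn.execute(
        'SELECT value FROM readings WHERE sensor = ? ORDER BY ts DESC LIMIT 1',
        (sensor,)).fetchone()
    return row[0] if row else None
```

The socketio container could run a query like this each time it needs to push fresh data, regardless of which process actually wrote the readings.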