e-caste opened this issue 4 years ago
Useful docs:
So, at this point we have:
What's missing? Why, the communications! In particular, two steps:
With respect to the first point, what I've found looking at docker-py's Container docs, Xterm.js's attach docs, FastAPI's WebSockets docs, this Xterm.js issue, python-socketio's AsyncServer docs, and this FastAPI issue is that:

- the relevant methods are `Container.attach` and `Container.attach_socket` for docker-py
- calling the `attach_socket` docker-py method with `ws=True` unexpectedly throws an exception (`http+docker protocol is invalid`). It may be fixed with this command, but I wouldn't really rely on an external tool for a production server running other Docker containers. So the obvious choice was to fall back to `ws=False`, which returns a `socket.SocketIO` object.

I've then read for a few hours about the Socket.IO protocol. It is basically a wrapper around the WebSocket protocol with some added features and many client- and server-side libraries. Reading the FastAPI and Xterm.js issues linked above, it just may work for this use case.

I have yet to implement the whole Docker <-> user communication layer, so that will be fun. The two viable options at this point are:

- … the `websocket` library. By just commenting out the line that raises a `ValueError` for the scheme, I can see that I then get an `OSError: [Errno 49] Can't assign requested address`.

Let's see...
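For reference, here's a minimal sketch of the `ws=False` path (the image, command, and `params` values are illustrative placeholders, not final choices):

```python
import docker

client = docker.from_env()  # assumes the default local Docker socket

# An interactive shell container; "alpine" is just an example image.
container = client.containers.run(
    "alpine", "/bin/sh",
    stdin_open=True, tty=True, detach=True,
)

# With ws=False (the default) this returns a socket-like object --
# the socket.SocketIO instance mentioned above.
sock = container.attach_socket(
    params={"stdin": 1, "stdout": 1, "stderr": 1, "stream": 1},
)

sock.write(b"echo hello\n")  # if the wrapper is opened read-only (a known
                             # docker-py quirk), sock._sock.sendall() is the
                             # usual fallback
print(sock.read(4096))       # raw bytes from the container's tty
```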
Small update: reading the code of `socket.SocketIO`, I've discovered that it has some interesting methods, like `readinto(self, b)` ("Read up to len(b) bytes into the writable buffer") and `write(self, b)` ("Write the given bytes or bytearray object *b* to the socket").

These are symmetric to the ones we use between the backend and the frontend!
So the "simple" solution is to make a "proxy" Socket.IO handler in the API server, whose behaviour goes like:

- `POST /api/container` …
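A rough sketch of that proxy with python-socketio's `AsyncServer` (the event names, the attach helper, and the buffer size are all assumptions for illustration, not a settled design):

```python
import asyncio

import docker
import socketio

# Socket.IO server mounted as an ASGI app (it could sit next to the FastAPI app).
sio = socketio.AsyncServer(async_mode="asgi")
app = socketio.ASGIApp(sio)

docker_client = docker.from_env()
sessions = {}  # sid -> attached container socket (illustrative bookkeeping)


def attach_for(sid):
    # Hypothetical helper: in the real flow, POST /api/container would have
    # created the container already; here we just spin one up to attach to.
    container = docker_client.containers.run(
        "alpine", "/bin/sh", stdin_open=True, tty=True, detach=True,
    )
    return container.attach_socket(
        params={"stdin": 1, "stdout": 1, "stderr": 1, "stream": 1},
    )


@sio.event
async def connect(sid, environ):
    sock = attach_for(sid)
    sessions[sid] = sock
    asyncio.create_task(pump_output(sid, sock))


@sio.on("input")
async def handle_input(sid, data):
    # Frontend keystrokes -> container stdin.
    sessions[sid].write(data.encode())


@sio.event
async def disconnect(sid):
    sessions.pop(sid, None)


async def pump_output(sid, sock):
    # Container stdout/stderr -> frontend, relayed chunk by chunk.
    loop = asyncio.get_running_loop()
    while sid in sessions:
        chunk = await loop.run_in_executor(None, sock.read, 4096)
        if not chunk:
            break
        await sio.emit("output", chunk.decode(errors="replace"), to=sid)
```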
After a careful analysis, a full server rewrite (Python/FastAPI -> Node.js/Express) is needed for the following reasons:

- session handling comes built in via `express-session`
- the `dockerode` library can manage containers (similar to `docker-py`, but with added streams capabilities)

I could keep the REST API server written with FastAPI, but it has the following disadvantages:

- authentication would require an external library (`fastapi_users`) with many capabilities that I currently don't need (email + password + isAdmin + MongoDB + etc.)

This approach also aims to reduce the number of needed containers, from 4 (REST server + socket.io server + MongoDB + nginx for the frontend) to 1 (Express with socket.io attached, statically serving the frontend files).
For this we'll need:
Server
Docker exposes some APIs which can be exploited with the official Go or Python SDKs. I will choose the latter since I'm more familiar with it.
Here are some usage examples of the pip `docker` package.

We will also need a REST API server exposed to the web. For this we can use any API framework listed in this webpage, but since this won't likely be a huge project, a micro-framework such as Flask or FastAPI would be better than a "heavy" framework like Django.
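To give an idea of those SDK calls, a minimal sketch (the image and command are arbitrary examples):

```python
import docker

client = docker.from_env()

# Pull an image and start a long-running, detached container.
client.images.pull("alpine")
container = client.containers.run("alpine", "sleep 300", detach=True)

# Inspect and manage running containers.
print(container.short_id, container.status)
print([c.name for c in client.containers.list()])

# Tear it down again.
container.stop()
container.remove()
```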
Client
The front-end framework needs to be compatible with ReactJS. A combination of `react-bash` and `react-iframe` could be used, like in this similar project.

Proxmox allows connecting to a VM/container through noVNC, xterm.js, and SPICE, and it's written in JavaScript, so those are valid options in case `react-bash` turns out not to fit our needs.

Authentication
This website should be as open as possible, so I won't implement any authentication (for the moment). It may be needed in the future, since our hardware resources are limited.
Possible solutions to over-usage:
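One candidate (just a sketch of the idea, with illustrative limit values) is capping each container's resources at creation time, which docker-py supports directly:

```python
import docker

client = docker.from_env()

# Illustrative caps -- the actual values would have to be tuned to our hardware.
container = client.containers.run(
    "alpine", "/bin/sh",
    detach=True, stdin_open=True, tty=True,
    mem_limit="256m",       # hard RAM cap
    nano_cpus=500_000_000,  # roughly half a CPU
    pids_limit=64,          # guards against fork bombs
    auto_remove=True,       # the container cleans itself up when it exits
)
```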
How it should work
- When a command is typed into `react-bash`, the client sends it to the API server with the hash …
- `less file.txt` should scroll correctly.

Home-made vs Katacoda
Katacoda allows implementing a markdown step-by-step guide for each scenario, which can include clickable terminal commands. This would have to be re-implemented with a markdown viewer, such as `react-markdown`; the commands would have to be dropped, copy-pasted, or a new method would have to be implemented to detect them from the markdown text.

Katacoda also allows connecting to a different host port, such as 80, to see e.g. a webserver hosted in the Docker container. This would have to be implemented through the APIs, maybe using a reverse proxy for all the containers of this project.
And now the pro: instead of having to sit through a scripted package installation every time you want to run a repository, a Docker image can be pulled with all software already installed.
This would require a Docker Hub account and a GitHub Action for Continuous Deployment at every push to the master branch of the related repositories.
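As a sketch of what that workflow could look like (the file name, image name, and secret names are placeholders, not an existing setup):

```yaml
# .github/workflows/docker-cd.yml
name: Build and push Docker image
on:
  push:
    branches: [master]
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin
      - name: Build and push
        run: |
          docker build -t myuser/myimage:latest .
          docker push myuser/myimage:latest
```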