Closed: manulera closed this 4 months ago
Hello Manu.
I think you're taking the wrong approach here. Adding a reverse proxy in front of your services doesn't make it a single service, it makes it a three-part service.
One of these parts is Traefik, which I personally do not want to use because it requires mounting the docker socket, which is widely regarded as a dangerous move (whoever has access to the docker socket can basically be root on the host). (see https://github.com/traefik/traefik/issues/4174)
Reverse proxy configuration must not be the concern of an app. You should ideally expose a single http endpoint: the frontend, and possibly have a second container on another network (a network internal to syc) so they can communicate.
Ideally, you only have one container exposing one http service on one port. Did you explore simply adding the react part to the python part (or the other way around)? Maybe there is a threshold to cross, after which this change will simplify a lot of aspects. Maybe not; you know your app best, you tell me.
Here you're adding another big dependency to your project, Traefik, which is definitely something that doesn't belong in the distribution of a webapp. The command argument insecure=true should make you tick ;)
What we want is a container running as a non-root user on an unprivileged port, which is what you have already. It's just that you have two, and that is not great, not terrible.
The CORS issues can be worked around; it's just a matter of configuring the server response headers on an OPTIONS http request.
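As a minimal sketch of that approach, assuming the backend stays on FastAPI (the origin and route below are placeholders, not the project's actual values): FastAPI's CORSMiddleware answers the OPTIONS preflight and adds the Access-Control-* headers to responses.

```python
# Minimal sketch: handle CORS in the app itself, no reverse proxy needed.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],  # placeholder: the frontend's origin
    allow_methods=["*"],
    allow_headers=["*"],
)

@app.get("/version")  # illustrative endpoint; responses now carry CORS headers
def version():
    return {"version": "0.1"}
```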
ughh I guess you are right... A shame, because it took me a while to figure out the Traefik thing, but it makes sense. I will probably just serve the frontend from the backend; that's probably the easiest.
Given that you have nginx already running on the frontend, you can probably use it as a reverse proxy for a special endpoint (/backend) that will redirect to the other container. This way the frontend doesn't need to know about the backend, it just calls the /backend URL on the same address as the frontend.
I changed it so that the frontend can be served from the fastapi endpoint. Maybe this was a bit of an act of stubbornness, just because I had set out to make it work from a single url.
On the other hand, now you can specify the backend url when you run the frontend container instead of when you build the image, and moving forward other config should be possible to set similarly, and not rely on env variables.
I will update how CORS is handled soon and document the change.
It's working when I build it in a codespace now. I'm also wondering if this would fix the firewall issue I was having with the hosted version (shareyourcloning.org) as well; previously the issue was just with https://shareyourcloning.api.genestorian.org.
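For reference, serving the built frontend from FastAPI can look roughly like this; the frontend/build directory and the /api/status route are assumptions for the sketch, not the project's actual layout.

```python
# Sketch: one container, one http endpoint. The FastAPI app serves both
# the API routes and the compiled React assets.
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

app = FastAPI()

# API routes are declared first so they take precedence over the static mount.
@app.get("/api/status")
def status():
    return {"ok": True}

# html=True makes "/" return index.html, which is what the browser expects.
app.mount("/", StaticFiles(directory="frontend/build", html=True), name="frontend")
```

Declaring the API routes before the "/" mount matters: routes are matched in order, so the static mount only catches requests the API doesn't handle.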
> when you build the image, [...], and not rely on env variables.
Configuration should only happen with env vars, unless it's really not possible and you have to rely on building the image yourself, but that's really not something anyone wants to do unless they are trying specific things. You must not require end users to actually clone your repos and build the images themselves; it complicates deployments for no good reason.
For instance, for elabftw/elabimg, you can use a build_arg to set the tag or branch of elabftw that will be imported. But end users are never expected to do so; they just use the tagged public image, and all configuration is done at runtime through env.
I'd recommend having a docker-entrypoint.sh as ENTRYPOINT in your image. This script will pick up env vars and do some config adjustments before starting the real service.
See https://www.12factor.net/config for reference.
And https://github.com/elabftw/elabimg/blob/master/src/init/prepare.sh for an example of such a script, or https://github.com/docker-library/mysql/blob/master/8.0/docker-entrypoint.sh for another example from a major service.
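The linked examples are shell scripts; sketched in Python, the same pattern could look like the following, where BACKEND_URL and the rendered config.json are purely illustrative names.

```python
#!/usr/bin/env python3
# Entrypoint sketch: read config from the environment at container start,
# adjust files accordingly, then hand over to the real service.
import json
import os
import sys

# Render a config file from the environment before the app starts.
cfg = {"backendUrl": os.environ.get("BACKEND_URL", "/")}
with open("config.json", "w") as fh:
    json.dump(cfg, fh)

# Replace this process with the real server (e.g. "uvicorn main:app")
# so it runs as PID 1 and receives signals directly.
os.execvp(sys.argv[1], sys.argv[1:])
```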
Hi @NicolasCARPi, this is what I have ended up doing (see the docker-compose file). Let me know if you think it makes sense.
You can run the frontend and backend in different containers; in that case you have to configure the backend via the env variable ALLOWED_ORIGINS (a comma-separated list of origins allowed via CORS).
https://github.com/manulera/ShareYourCloning_backend?tab=readme-ov-file#connecting-to-the-frontend
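A sketch of how such a comma-separated variable might be split into the origin list handed to the CORS middleware shown earlier; an unset or empty variable yields an empty list.

```python
import os

# e.g. ALLOWED_ORIGINS="https://shareyourcloning.org,http://localhost:3000"
raw = os.environ.get("ALLOWED_ORIGINS", "")
allowed_origins = [o.strip() for o in raw.split(",") if o.strip()]
```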
For the frontend, you can set the backend url to which requests will be made via a config file config.json (and I guess further configuration in the future).
https://github.com/manulera/ShareYourCloning_frontend?tab=readme-ov-file#configuration
The default configuration is enough to run the site from one container only:
Backend:
- ALLOWED_ORIGINS env var is now empty (only requests from the root url are allowed, but you can still set more via ALLOWED_ORIGINS).
- SERVE_FRONTEND env var is 1, which makes the container serve the frontend from the / url instead of the usual "welcome to the backend API" message: https://shareyourcloning.api.genestorian.org/

Frontend:
- backendUrl set to / so requests are made to the same url.

Well, if you managed to get everything running in one container, just drop the possibility to run it as two containers. There is no need for the added complexity and maintenance. The next step is to make the config.json disappear in favor of env vars, especially given that it seems to only contain one key with one value. Just simplify it: drop the need for a config.json, drop the possibility to use separate containers. You'll thank yourself later for keeping it simple (https://www.interaction-design.org/literature/topics/keep-it-simple-stupid).
Honestly, this is much better than before: you went from having 3 containers with an exposed docker socket to just one container with a very clean Dockerfile :clap:
Everything is served from the same url, which can be more convenient and does not have CORS issues.
cc @NicolasCARPi @JamesBagley
Still not completely finished, but it should be soon.