JumpLink opened 1 month ago
Thanks for documenting this. As you say, it's a confusing setup!
It would be great to unify the development and production setup; I think I agree with most of the things you've said. Let's have a look at it together tomorrow.
One thing to note though: I don't think we can remove `hive-deploy-stack`, unfortunately. Portainer checks this repo out per stack and you can't tell it to only check out certain files, so if we changed it to point at the monorepo it would check out the entire repo for each client.
After much trial and error, I think I now know why I failed to test the docker image of the frontend locally.
It took me a long time to understand how the frontend docker image works. Here is a summary:
When the frontend is started locally without docker:

- A local development server is started using `vite`.
- Within this development server, a proxy is started which forwards all requests matching the regular expression `^/(api|login|upload|uploads|favicon.png)` to the URL `API_PROXY_URL` (which is `http://localhost:3001` by default). This is the router of the backend docker compose stack, which is accessible from outside via port `3001`.

vite.config.js:
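The proxy part of that configuration boils down to something like the following sketch (not the actual file; the regex and default URL are taken from the description above, and the option names follow Vite's `server.proxy` API):

```typescript
// Sketch of the relevant part of vite.config.js.
// Vite treats proxy keys that start with "^" as regular expressions.
const API_PROXY_URL = "http://localhost:3001"; // default, normally set via env

export const proxyPattern = "^/(api|login|upload|uploads|favicon.png)";

export const serverConfig = {
  server: {
    proxy: {
      [proxyPattern]: {
        target: API_PROXY_URL,
        changeOrigin: true,
      },
    },
  },
};

// e.g. GET /api/1.0/content is forwarded to http://localhost:3001/api/1.0/content,
// while all other paths are served by the vite dev server itself.
```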
The `API_PROXY_URL` is `http://localhost:3001` by default.

docker-compose.yml (for local dev):

`MAIN_PORT` is `3001` by default; the backend can be accessed locally via this port.

The production stack has a different configuration for this: it has another router which forwards the requests to the frontend via nginx:
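Conceptually, that router does something like the following (illustrative only, not the actual `beabee-router` configuration; upstream name and paths are assumptions):

```nginx
# Sketch: API-style paths go to the backend, everything else serves
# the static frontend build.
server {
    listen 80;

    location ~ ^/(api|login|upload|uploads|favicon.png) {
        proxy_pass http://backend:3001;
    }

    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }
}
```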
The development stack does not do this and does not provide the new frontend via nginx (I have since changed this), but at least forwards the API requests to:
The `vite` configuration also replaces some placeholder strings, including `__appUrl__` with `http://localhost:3000`, which is the local address of the vite webserver itself:

vite.config.js:

Something similar is done by a script which is executed in the docker image and is applied to all built *.js files:

docker-entrypoint.sh:
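The effect of that entrypoint script can be sketched in TypeScript as a plain string replacement (the real script uses shell tooling over the built files; `__appUrl__` is mentioned in this issue, while `__apiUrl__` is an assumed second placeholder used here for illustration):

```typescript
// Sketch: replace placeholder strings in built *.js content.
export function replacePlaceholders(
  source: string,
  values: Record<string, string>
): string {
  let result = source;
  for (const [placeholder, value] of Object.entries(values)) {
    // split/join replaces every occurrence without regex-escaping concerns
    result = result.split(placeholder).join(value);
  }
  return result;
}

// Example with the same values vite would use for local development:
export const example = replacePlaceholders(
  'fetch("__appUrl____apiUrl__/content")',
  { __appUrl__: "http://localhost:3000", __apiUrl__: "/api/1.0" }
);
// example === 'fetch("http://localhost:3000/api/1.0/content")'
```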
These placeholders are used in the frontend in `env.ts`:

This `env.appUrl` (which is the host) is not used in the frontend code, but `env.apiUrl` (which is the path `/api/1.0`) is used to make requests to the backend API:

This complex setup is used to be able to provide the frontend build via Nginx instead of a webserver like `vite`.
When a new release is made, the `hive-deploy-stack` is checked out via a GitHub Action and a bash script (this and this) is executed in it which checks whether the version of the backend matches the version of the frontend. If this is the case, the new versions are published in the `hive-deploy-stack`. The Portainer instances check this regularly and use the new version.

## Problems
`env.appUrl` is not used for the requests, so I can set whatever I want here; it won't change the fact that the backend is not accessible locally, since the same host is always assumed. The way the frontend is currently implemented, everything must be accessible via the same domain, which is the domain via which the frontend is accessible. This is worked around in the production stack by an Nginx configuration, just as it is on the development stack with the proxy in `vite`.

All API requests use the frontend host itself, which also means that no request errors occur; everything returns 200 even though the backend is not accessible:
If I want to work on the frontend code to fix this problem, I can mount my local frontend build into the docker image, but then the necessary placeholders will not be replaced. So I have to build the container from scratch every time.

The router used in the production stack has its own repository and I had not yet dealt with it, so it was completely off my radar. The development environment uses the `vite` proxy for this, which is included in the repository, but not the production router with its nginx config.

Checking out the `hive-deploy-stack`, executing a script in it and then making automatic commits in it also increases the complexity and is not documented anywhere.

The release scripts are bash scripts, which means that another language is used.
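For context, the core of what those bash scripts check is just a version comparison between the two packages. Sketched in TypeScript (file paths and function names here are assumptions, not the actual scripts):

```typescript
// Sketch: compare frontend and backend versions before publishing.
import { readFileSync } from "fs";

export function readVersion(packageJsonPath: string): string {
  // package.json files carry the release version in their "version" field
  return JSON.parse(readFileSync(packageJsonPath, "utf8")).version;
}

export function versionsMatch(frontendPkg: string, backendPkg: string): boolean {
  // Only publish to hive-deploy-stack when frontend and backend agree
  return readVersion(frontendPkg) === readVersion(backendPkg);
}
```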
## Proposed solution
Due to this complex setup, the problems described and the differences between the development stack and the production stack, it was very difficult to identify the problem here. New developers are likely to run into the same problems, which makes it very difficult to get started. I also asked @wpf500 how I could test the frontend docker image locally and he didn't seem to have that in mind either. Instead, I would rethink the entire setup; I don't think the development stack should be too different from the production stack.

The routers for development and for production hosting should be merged and made usable both locally and in production via environment variables.
The `hive-deploy-stack` should also not differ too much from the development environment and could instead be added to the monorepo and also be used for local development.

I would suggest the following steps:

- The `hive-deploy-stack` repo will become part of the monorepo
- The `beabee-router` repo will become part of the monorepo
- The `nginx.conf` files can and should be merged and simplified
- `vite` and the docker container should use the same `.env` file for the string replacement, so `vite` would exchange the placeholders with the same values. This would allow the local build files to be loaded into the docker image during the development process.
- The proxy in `vite` would then no longer be necessary, as the backend can also be accessed via a different host
- Instead of checking out the `hive-deploy-stack` repo and committing something in it, we should summarise the release process in a JavaScript or TypeScript script. This would also eliminate another language (bash). This could still be automated via GitHub Actions (like now) and/or with tools such as `release-it`.

With `release-it` we could automate these release steps. An advantage of `release-it` is that it is platform-independent, also works with GitLab and everything can be managed centrally rather than being scattered. `release-it` itself could also be triggered by a GitHub Action.

## Temporary solution
I have now adapted the `docker-compose.yml` and `nginx.conf` so that I can at least test the frontend docker image locally. I have checked the whole thing into the repo so that we can continue to test locally, even though this is not a clean, permanent solution and the image needs to be rebuilt each time new changes to the frontend are to be tested.
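Purely as an illustration of the kind of change meant here (this is not the committed file; service names, paths and ports are assumptions), such a local setup looks roughly like:

```yaml
# Sketch: build the frontend image locally and put an nginx router in
# front of it, mirroring the production routing.
services:
  frontend:
    build: ./frontend          # must be rebuilt after each frontend change
    environment:
      APP_URL: http://localhost:3000
      API_URL: /api/1.0
  router:
    image: nginx:alpine
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    ports:
      - "3000:80"              # frontend and API reachable on one host/port
```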