Closed: joe-eklund closed this issue 1 year ago
Looking at the current config implementation I think it could be improved.
Now to the issue... The configuration is generated on every startup of the proxy container. This was done to support running multiple LP stacks on one machine. It is useful mainly for development, where a regular version of the stack runs side by side with an instance of the stack dedicated to e2e testing.
To solve your issue you could either:
- set BACKEND_NAME and FRONTEND_NAME either in the docker-compose.yml proxy service or in your shell (see the sketch after this list), or
- make your own copy of proxy/nginx.tpl and map that file in docker-compose.yml instead of the file located in proxy/nginx.tpl.
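As a sketch of the first option (the service name and the container names used as values are placeholders, not taken from the repo), the proxy service in docker-compose.yml could set the variables like this:

version: '3.7'
services:
  proxy:
    image: reallibrephotos/librephotos-proxy:latest
    environment:
      # Hostnames substituted into the generated nginx.conf; the values are
      # assumed to match whatever you named your backend and frontend containers.
      - BACKEND_NAME=my-backend
      - FRONTEND_NAME=my-frontend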
Those both seem like reasonable solutions, I will give it a shot this weekend and let you know how it goes. Thanks!
I don't know if it is strictly related but I have a similar problem with the proxy container on Kubernetes.
The issue is that, with the exact configuration from the repo, there is a permission issue when using envsubst to generate the nginx.conf. As a result, the nginx configuration is the default "Welcome to nginx". In the proxy container log I can see "sh: 1 permission denied...", and inside the container the nginx.conf is the default one.
As a workaround, I've generated the nginx.conf with envsubst, saved it in a config-map, and mounted it as a volume. Maybe something changed at the permission level with the official nginx image? As I can see from the Dockerfile, there is no version pinned in the "FROM nginx" line, so that could happen.
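For reference, a minimal sketch of that workaround, with assumed resource names (librephotos-proxy-conf and the mount layout are illustrative, not taken from the repo manifests): the nginx.conf rendered offline with envsubst is stored in a ConfigMap and mounted over /etc/nginx/nginx.conf via subPath.

# Hypothetical ConfigMap holding the pre-rendered nginx.conf
apiVersion: v1
kind: ConfigMap
metadata:
  name: librephotos-proxy-conf
data:
  nginx.conf: |
    # paste the output of: envsubst '$BACKEND_NAME $FRONTEND_NAME' < nginx.tpl
    ...
---
# Fragment of the proxy Deployment's pod spec (field names are standard Kubernetes)
spec:
  containers:
    - name: proxy
      image: reallibrephotos/librephotos-proxy
      volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
  volumes:
    - name: nginx-conf
      configMap:
        name: librephotos-proxy-conf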
I wonder what the rest of the log shows for “permission denied”.
I'm sorry, I couldn't check the old log and I tried to recall what it was... I was wrong about the message. I've restarted the pod and here is the full log:
/bin/sh: 1: cannot create /etc/nginx/nginx.conf: Read-only file system
2023/02/11 20:01:53 [warn] 6#6: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
2023/02/11 20:01:53 [notice] 6#6: using the "epoll" event method
2023/02/11 20:01:53 [notice] 6#6: nginx/1.23.3
2023/02/11 20:01:53 [notice] 6#6: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2023/02/11 20:01:53 [notice] 6#6: OS: Linux 5.10.0-21-amd64
2023/02/11 20:01:53 [notice] 6#6: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/02/11 20:01:53 [notice] 6#6: start worker processes
2023/02/11 20:01:53 [notice] 6#6: start worker process 7
Could it be that you are attaching the config-map as a folder/file onto /etc/nginx/nginx.conf? That would explain the read-only file system error. The nginx template change is my doing, and I haven't tried to run LP in K8s. Will give it a go when I have time.
My first try was with the k8s config from the repo, which does not use a config-map (that's my workaround). The issue seems to be that the proxy Docker image has an entrypoint that creates the nginx.conf on the fly, but the filesystem is read-only so it fails.
Tested and I see what the problem is… Until the problem is solved, the quick solution is to disable security features for the proxy container. These lines should be removed/disabled:
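(The exact lines are not quoted in this thread. As an illustration of the kind of settings involved, and only as an assumption about what the repo's manifest contains, a securityContext like the following on the proxy container is what would produce the read-only filesystem error seen above.)

# Hypothetical fragment of the proxy container spec; readOnlyRootFilesystem
# is the setting that prevents the entrypoint from writing /etc/nginx/nginx.conf.
securityContext:
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  runAsNonRoot: true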
No need to disable security features. Linked PR should fix the issue in k8s as we no longer need to generate nginx.conf on the fly.
To be clear, the fix for my original issue was to no longer mount the nginx conf file and instead set the BACKEND_NAME and FRONTEND_NAME environment variables in my docker-compose file for my proxy container.
This issue has cropped up again, @sickelap. My fix was working until today; it looks like there was a recent release that broke it. As I stated above, I had been setting the BACKEND_NAME and FRONTEND_NAME environment variables for my proxy container and it was working.
Then I pulled the latest image and started getting these errors:
10-listen-on-ipv6-by-default.sh: info: IPv6 listen already enabled
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/02/26 06:13:26 [emerg] 1#1: host not found in upstream "frontend" in /etc/nginx/nginx.conf:20
nginx: [emerg] host not found in upstream "frontend" in /etc/nginx/nginx.conf:20
In the meantime, I rolled back to tag 2022w50 for the proxy container and the environment variable fix works.
After the merge of #88, the environment variables FRONTEND_NAME and BACKEND_NAME are no longer taken into account.
I would like to know what is your setup to be able to reproduce the issue. Can you provide more details, please?
Sure. I have a deployment compose file I deploy and manage through Portainer. This is why I want to have names for my containers other than frontend, etc., since I have many other containers running on the same machine. Traefik serves as a reverse proxy to all my user-facing services; in the case of LibrePhotos, that is the proxy container. Here is my compose file:
version: '3.7'
services:
  librephotos-proxy:
    #image: reallibrephotos/librephotos-proxy:latest
    image: reallibrephotos/librephotos-proxy:2022w50
    container_name: librephotos-proxy
    restart: unless-stopped
    environment:
      - TZ=America/Los_Angeles
      - BACKEND_NAME=librephotos-backend
      - FRONTEND_NAME=librephotos-frontend
    volumes:
      - /mnt/pool/librephotos/data:/data
      - /mnt/pool/librephotos/protected_media:/protected_media
      #- /home/me/librephotos/proxy/nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - librephotos-backend
      - librephotos-frontend
    networks:
      - traefik_v2_proxy
      - librephotos_internal
    labels:
      - traefik.enable=true
      - traefik.docker.network=traefik_v2_proxy
      - traefik.http.routers.librephotos.entrypoints=https
      - traefik.http.routers.librephotos.rule=Host(`librephotos.domain.tld`)
      - traefik.http.routers.librephotos.tls=true
      - traefik.http.routers.librephotos.tls.certResolver=default
      - traefik.http.services.librephotos-service.loadbalancer.server.port=80
      - traefik.http.middlewares.librephotos-auth.forwardauth.address=http://organizr_v2/api/v2/auth?group=1
      - "traefik.http.routers.librephotos.middlewares=librephotos-auth, hsts"
    healthcheck:
      test: curl --fail http://localhost || exit 1
      interval: 60s
      retries: 3
      start_period: 20s
      timeout: 15s
  librephotos-db:
    image: postgres:13
    container_name: librephotos-db
    restart: unless-stopped
    environment:
      - POSTGRES_USER=someuser
      - POSTGRES_PASSWORD=somepassword
      - POSTGRES_DB=somedb
      - TZ=America/Los_Angeles
    volumes:
      - /home/me/librephotos/db:/var/lib/postgresql/data
    command: postgres -c fsync=off -c synchronous_commit=off -c full_page_writes=off -c random_page_cost=1.0
    networks:
      - librephotos_internal
    labels:
      - traefik.enable=false
    healthcheck:
      test: psql -U docker -d librephotos -c "SELECT 1;"
      interval: 60s
      retries: 3
      start_period: 20s
      timeout: 15s
  librephotos-frontend:
    image: reallibrephotos/librephotos-frontend:latest
    container_name: librephotos-frontend
    restart: unless-stopped
    environment:
      - TZ=America/Los_Angeles
    depends_on:
      - librephotos-backend
    networks:
      - librephotos_internal
    labels:
      - traefik.enable=false
  librephotos-backend:
    image: reallibrephotos/librephotos:latest
    container_name: librephotos-backend
    restart: unless-stopped
    volumes:
      - /mnt/pool/librephotos/data:/data
      - /mnt/pool/librephotos/protected_media:/protected_media
      - /mnt/pool/librephotos/logs:/logs
      - /home/me/librephotos/cache:/root/.cache
    environment:
      - SECRET_KEY=somesecret
      - BACKEND_HOST=librephotos-backend
      - ADMIN_EMAIL=me@me.com
      - ADMIN_USERNAME=me
      - ADMIN_PASSWORD=somepassword
      - DB_BACKEND=postgresql
      - DB_NAME=somedb
      - DB_USER=someuser
      - DB_PASS=somepassword
      - DB_HOST=librephotos-db
      - DB_PORT=5432
      - REDIS_HOST=librephotos-redis
      - REDIS_PORT=6379
      #- MAPBOX_API_KEY=
      - TIME_ZONE=America/Los_Angeles
      - WEB_CONCURRENCY=2
      #- SKIP_PATTERNS=${skipPatterns}
      - ALLOW_UPLOAD=1
      - DEBUG=0
      - HEAVYWEIGHT_PROCESS=2
    depends_on:
      librephotos-db:
        condition: service_healthy
    networks:
      - librephotos_internal
    labels:
      - traefik.enable=false
    healthcheck:
      test: curl -u $$ADMIN_USERNAME:$$ADMIN_PASSWORD http://localhost:8001 || exit 1
      interval: 60s
      retries: 3
      start_period: 20s
      timeout: 15s
  librephotos-redis:
    image: redis:6
    container_name: librephotos-redis
    restart: unless-stopped
    networks:
      - librephotos_internal
    labels:
      - traefik.enable=false
    healthcheck:
      test: redis-cli ping | grep PONG
      interval: 60s
      retries: 3
      start_period: 20s
      timeout: 15s
networks:
  traefik_v2_proxy:
    external: true
  librephotos_internal:
Note the above compose file should work since it is locked to 2022w50 for the proxy (though I have sanitized any passwords or actual domains). If I swap to latest it breaks because it can't find frontend (my container is called librephotos-frontend).
If you have any other specific questions or need more info please let me know.
If I understand correctly, when you switch to the latest proxy AND uncomment the mapping for nginx.conf it does not work? Did you rename frontend to librephotos-frontend in your version of nginx.conf?
That's right. I can no longer map that file because it gets overwritten on startup now; that's why I started using the env variables. Is there another way I can set the container names?
Hey @sickelap, any progress on this issue? Thanks!
I have verified that I can now again pass in my custom nginx.conf file and everything works again. Thank you!
Greetings!
I have run into an issue when I recently attempted to upgrade my docker compose stack. I started getting errors where the proxy container couldn't find "frontend". This immediately seemed like an issue with the renaming of container names (which is required for me).
I had accomplished this previously by mounting the nginx conf file that the proxy container uses, like
- /home/me/librephotos/proxy/nginx.conf:/etc/nginx/nginx.conf
and editing the names in there. But it seems there was an update last month that changed the Dockerfile to overwrite this conf file on startup every time, which is the exact behavior I am seeing: when I edit that file on my host and spin up my stack, my changes get overwritten.
I am not sure if this is related to #80, maybe?
For now I have rolled back to a release in November, which has allowed me to spin my stack back up.
Any thoughts on how I can fix my deployment? Seems my current way of saving that nginx file to fix my named containers is incompatible with the way this project has moved forward.