Closed: merlinscholz closed this 2 weeks ago
I suspect this is an issue with Docker on Synology and not caused by the Nextcloud image.
I was mainly wondering whether terminating on SIGWINCH is behaviour that should be enabled in the Nextcloud image, since I do not see any scenario in which a SIGWINCH should cause Nextcloud to stop. Apache2 mainly implemented this feature to allow graceful shutdowns (https://bz.apache.org/bugzilla/show_bug.cgi?id=50669). Other projects have hit the same issue and worked around it: https://github.com/GrahamDumpleton/mod_wsgi/issues/105#issuecomment-359783948
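To illustrate the mechanism (not the image's actual code; apache2 installs its handler in C via the MPM), here is a minimal bash sketch of a "server" that treats SIGWINCH as a graceful-stop signal, the way Apache's prefork MPM does:

```shell
#!/usr/bin/env bash
# Illustrative only: mimic Apache's graceful-stop handler with a bash trap.
# A SIGWINCH delivered to the process triggers a clean shutdown.
bash -c '
  sleep 10 & child=$!            # stand-in for serving requests
  trap "echo caught SIGWINCH, shutting down gracefully; kill $child; exit 0" WINCH
  wait $child
' &
server=$!
sleep 0.5                        # give the trap time to be installed
kill -WINCH "$server"            # what some log viewers/runtimes deliver to PID 1
wait "$server"
```

The same effect can be reproduced against a running container with `docker kill --signal=WINCH <container>`, which is why any stray SIGWINCH aimed at PID 1 takes the whole container down.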
[mpm_prefork:notice] [pid 1] AH00170: caught SIGWINCH, shutting down gracefully
Also getting a SIGWINCH error running the apache container
Also appears to be an issue when running within kubernetes if the system is using containerd. And apache uses SIGWINCH as a graceful termination signal (https://httpd.apache.org/docs/2.4/en/stopping.html#gracefulstop)
Do we have a workaround for this? We see this behaviour with k8s and containerd too. This is kind of a showstopper :(
Maybe https://github.com/docker-library/php/issues/64#issuecomment-71124775 has a workaround for you. We consume Apache from https://github.com/docker-library/php.
Same behaviour here k8s and containerd. showstopper.
Also having this issue with K8s and using the helm chart
the same here, you need to use nextcloud:fpm (with fpm tag) + nginx enabled
still no fix?
Also broken here on k8s + crio.
I am also facing that issue.
Just wanted to comment here. I was running into this problem as well, and fixed it after properly setting up my config.php files. The default configuration that ships with the Docker image didn't quite work for me, so I had to mount additional configuration files into the right spot (with Docker: use -v to volume-mount them), or, with the Kubernetes helm chart, set the proper config settings in nextcloud.configs, nextcloud.defaultConfigs, or nextcloud.phpConfigs. Once I had the right configuration files in place it started working.
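For the plain-Docker case, a hypothetical sketch of what "volume mount additional configuration files into the right spot" can look like. The filename custom.config.php and the volume names are my own examples; Nextcloud merges extra files matching *.config.php from its config directory:

```shell
# Hypothetical example: mount an extra config file into the container's
# Nextcloud config directory, where *.config.php files are merged in.
docker run -d --name nextcloud \
  -v nextcloud-data:/var/www/html \
  -v "$PWD/custom.config.php:/var/www/html/config/custom.config.php:ro" \
  nextcloud:26
```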
I was able to work around this with Nextcloud on k3s, without rebuilding the official Docker image, by mounting a wrapper script that traps the SIGWINCH signal so it is never passed on to Apache.
nextcloud deployment.yml:
[...]
spec:
  volumes:
    - name: nextcloud-trap
      configMap:
        name: nextcloud-trap
        defaultMode: 0777 # < to ensure the script can be executed
  containers:
    - image: nextcloud:26
      name: nextcloud
      command: ["/bin/bash", "-c"]
      args:
        - /nextcloud_trap.sh
      [...]
      volumeMounts:
        - name: nextcloud-trap
          mountPath: /nextcloud_trap.sh
          subPath: nextcloud_trap.sh
nextcloud trap configmap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nextcloud-trap
  namespace: default
data:
  nextcloud_trap.sh: |
    trap "" SIGWINCH
    /entrypoint.sh apache2-foreground
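This works because the runtime delivers the termination-style signal only to PID 1; with SIGWINCH ignored in the wrapper, nothing is forwarded to the Apache child. A local sketch of the same pattern (with `sleep` standing in for `/entrypoint.sh apache2-foreground`, purely for illustration):

```shell
#!/usr/bin/env bash
# Same shape as nextcloud_trap.sh: the wrapper ignores SIGWINCH, so a
# SIGWINCH aimed at it (PID 1 in the pod) never disturbs the child.
cat > /tmp/nextcloud_trap_demo.sh <<'EOF'
trap "" WINCH
sleep 1 &          # stand-in for: /entrypoint.sh apache2-foreground
wait $!
echo "child finished normally"
EOF
bash /tmp/nextcloud_trap_demo.sh &
wrapper=$!
sleep 0.3
kill -WINCH "$wrapper"   # signal the wrapper, as containerd would signal PID 1
wait "$wrapper"
```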
Workaround to prevent SIGWINCH from reaching the Apache process:
setsid COMMAND
Replace "COMMAND" with the actual command.
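For context on why this helps: setsid(1) runs the command in a new session with no controlling terminal, so terminal-generated signals such as SIGWINCH (sent on a window resize to the terminal's foreground process group) cannot reach it. A quick check that the session really changes:

```shell
#!/usr/bin/env bash
# Compare our session ID with that of a command launched under setsid.
parent_sid=$(ps -o sid= -p $$ | tr -d ' ')
child_sid=$(setsid bash -c 'ps -o sid= -p $$' | tr -d ' ')
echo "parent sid: $parent_sid, child sid: $child_sid"
if [ "$parent_sid" != "$child_sid" ]; then
  echo "detached into a new session"
fi
```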
Workaround keeping the process in foreground and forwarding SIGINT (ctrl-c):
setsid --wait COMMAND &
child_pid=$!
trap "kill -SIGINT $child_pid" SIGINT
wait $child_pid
For "setsid" see also: https://en.wikipedia.org/wiki/Process_group
Definitely there's an issue with the image/helm configuration.
This one worked for me:
flavor: apache
version: 4.5.12
This issue collects several different pathways to SIGWINCH-related behaviour. Not everyone's situation is the same, so the solutions will differ a bit.
Start-up errors:
When opening a shell into a container:
I'm going to close this, since several different matters are ending up here and there isn't a clear bug in the image itself.
If you believe strongly you're experiencing a bug in the image itself, please create a dedicated Issue with precise reproduction steps.
Hi, when using the docker:stable container (and all other non-fpm tags), viewing the Docker logs on some systems (for example on Synology DiskStations) causes a SIGWINCH to be sent to the Apache2 process. This causes it to shut down gracefully. If auto-restarting is enabled, this repeats for as long as the corresponding log window is left open.
A workaround on those systems is to only use the docker command over SSH.