Closed: tklovett closed this issue 4 years ago
This may be why @KongZ created this custom pre-stop hook for the pods in Kubernetes:

```yaml
lifecycle:
  preStop:
    exec:
      command:
        - bash
        - -ec
        - |
          ROOT_PASSWORD=`/k8s/kubectl get secret graylog -o "jsonpath={.data['graylog-password-secret']}" | base64 -d`
          curl -XPOST -sS -u "admin:${ROOT_PASSWORD}" "localhost:9000/api/system/shutdown/shutdown"
```
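For context on the first line of that hook: Kubernetes stores secret values base64-encoded, which is why the `jsonpath` output is piped through `base64 -d` before being handed to curl. A minimal local check of that decode step (the password value here is made up):

```shell
# Kubernetes secret data is base64-encoded; the hook decodes it with
# `base64 -d` before using it as the admin password. A hypothetical value:
encoded=$(printf '%s' 'hunter2' | base64)
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"   # prints the original value, hunter2
```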
@tklovett that's correct. The `POST /api/system/shutdown` endpoint calls `System.exit(0)`. Moreover, the shutdown will also flush all messages on inputs and stop receiving messages.

This is fixed in #88
I've been playing around with Graylog running in Docker on Kubernetes (via the official Helm chart), and I've found that Graylog pods take a long time to terminate, then eventually terminate without logging any graceful-shutdown messages. That seemed suspicious to me, and I wondered if the eventual termination was only due to `terminationGracePeriodSeconds: 120`.

If I'm not mistaken, the problem boils down to Kubernetes being unable to get its SIGTERM all the way to the Java process in the pod. Kubernetes sends a SIGTERM to the process with PID 1, which is `docker-entrypoint.sh`, but the script does not forward it to the java process. When the grace period expires, Kubernetes sends a SIGKILL to all processes in the pod, achieving the ungraceful termination.

Steps to Reproduce:
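The PID 1 signal problem itself can be demonstrated with a small stand-alone sketch (this is my illustration, not the issue's original repro steps; `sleep 30` stands in for the java process, and all paths and timings are made up):

```shell
#!/bin/bash
# Illustrative repro of the signal problem: a parent script launches a
# child without exec-ing it or trapping signals, the same way
# docker-entrypoint.sh launches java.
pidfile=$(mktemp)
bash -c "sleep 30 & echo \$! > '$pidfile'; wait" &   # "entrypoint" + "java"
parent=$!
sleep 1                     # give the inner shell time to write the pid
child=$(cat "$pidfile")

kill -TERM "$parent"        # what Kubernetes sends to PID 1
wait "$parent" 2>/dev/null  # the parent exits on SIGTERM...

child_alive=no
kill -0 "$child" 2>/dev/null && child_alive=yes   # ...but the child survives
echo "child alive after parent's SIGTERM: $child_alive"
kill "$child" 2>/dev/null   # clean up the orphan
rm -f "$pidfile"
```

In a pod, that surviving child is the java process: it never sees the SIGTERM and only dies when the SIGKILL arrives after the grace period.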
Solutions

- Modify `docker-entrypoint.sh` (PID 1) to trap SIGTERM and forward it to the `java` child process
- Use an init process such as `tini` as the container's entrypoint, so that signals are forwarded to the child

References:
https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-terminating-with-grace
https://cloud.google.com/solutions/best-practices-for-building-containers#signal-handling
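The trap-and-forward approach can be sketched as follows (illustrative only, not the actual Graylog `docker-entrypoint.sh`; `sleep 60` stands in for the java process, and the entrypoint is written to a temp file purely so the sketch can exercise it end to end):

```shell
#!/bin/bash
# Sketch: an entrypoint that traps SIGTERM and relays it to its child,
# then waits so the child can shut down before the grace period expires.
entrypoint=$(mktemp)
cat > "$entrypoint" <<'EOF'
#!/bin/bash
sleep 60 &                 # stand-in for launching the java process
child=$!
trap 'kill -TERM "$child" 2>/dev/null' TERM INT
wait "$child"              # interrupted when the trap fires...
trap - TERM INT
wait "$child"              # ...so wait again until the child has exited
EOF

bash "$entrypoint" &
pid=$!
sleep 1
kill -TERM "$pid"          # simulate Kubernetes' SIGTERM to PID 1
wait "$pid" 2>/dev/null
status=$?                  # 143 (128+15): the child exited on SIGTERM,
                           # i.e. the signal was forwarded successfully
echo "entrypoint exit status: $status"
rm -f "$entrypoint"
```

The double `wait` is deliberate: a trapped signal interrupts the first `wait` before the child is actually gone, so the entrypoint waits a second time to collect the child's real exit status.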