Closed — quorak closed this issue 5 years ago
Bad cookie in table definition — the Erlang cookie has to be identical on all replicas, not randomly generated per pod. Verify it is the same both in the logs and on the container filesystem.
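To illustrate, the cookie can be compared across replicas with commands along these lines (the pod names and the cookie path are assumptions based on the chart's defaults; adjust them to your release):

```shell
# Inspect the cookie on each replica's filesystem
# (default location in the rabbitmq user's home directory).
kubectl exec rabbitmq-ha-0 -- cat /var/lib/rabbitmq/.erlang.cookie
kubectl exec rabbitmq-ha-1 -- cat /var/lib/rabbitmq/.erlang.cookie

# The startup logs also mention the cookie; the output should
# be consistent across all pods.
kubectl logs rabbitmq-ha-0 | grep -i cookie
```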
Are you using a persistent volume claim? Persistent volumes can also be topology aware and may restrict rescheduling. Local volumes, for example, don't allow pods to be rescheduled.
thanks for the reply. I use the chart templates with their default values. The node went down and did not come back up, so rescheduling to a different node was necessary. Is this supported by the chart?
You should set rabbitmqErlangCookie yourself and set persistentVolume.enabled=true if you don't want trouble reconnecting. You can also set podAntiAffinity=hard to ensure the pods get scheduled on different nodes.
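For example, those overrides could go in a values file like this (a sketch; the cookie value is a placeholder, and the keys match the ones named above for the rabbitmq-ha chart):

```yaml
# values-override.yaml
rabbitmqErlangCookie: "REPLACE-WITH-A-FIXED-SECRET-VALUE"  # must be identical on all replicas

persistentVolume:
  enabled: true        # keep node data across pod rescheduling

podAntiAffinity: hard  # schedule each replica on a different node
```

Applied with something like `helm upgrade <release> stable/rabbitmq-ha -f values-override.yaml`.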
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
This issue is being automatically closed due to inactivity.
Is this a request for help?: would be great to understand why this happened

BUG REPORT:
Master Logs after node rescheduled
Slave output (no rescheduling of pod)
Version of Helm and Kubernetes: Helm v2.10.0, Kubernetes 1.9
Which chart: rabbitmq-ha-1.9.1
What happened: It looks like one node went down and Kubernetes scheduled the master pod to a new node. After successful startup of the master pod, the slaves could not reconnect. When I deleted the slave pods, reconnection worked.
What you expected to happen: rescheduling would work out of the box
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know: