spuiuk closed this pull request 2 years ago
To test, I disabled the CTDB node on one of the running pods:
$ kubectl exec -it smbshare3-0 -c ctdb -- /bin/bash
[root@smbshare3-0 /]# ctdb nodestatus
pnn:0 10.244.1.14 OK (THIS NODE)
[root@smbshare3-0 /]# ctdb disable
[root@smbshare3-0 /]# ctdb nodestatus
pnn:0 10.244.1.14 DISABLED (THIS NODE)
I can confirm that the readiness probe fails by watching the pods:
$ kubectl get pods -w
NAME                               READY   STATUS    RESTARTS   AGE
samba-ad-server-86b7dd9856-8wj9q   1/1     Running   0          10m
smbshare3-0                        3/3     Running   0          2m54s
smbshare3-1                        3/3     Running   0          111s
smbshare3-0                        2/3     Running   0          3m52s
and by checking the pod events:
$ kubectl describe pod smbshare3-0
...
Warning Unhealthy 2m50s (x12 over 4m30s) kubelet Readiness probe failed: pnn:0 10.244.1.14 DISABLED (THIS NODE)
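The kubelet treats any nonzero exit status from an exec probe as a failure and records the command's output in the event message, which is why the nodestatus line appears above. A minimal sketch of the kind of check the probe runs, assuming it simply invokes ctdb nodestatus and propagates its exit status (ctdb nodestatus exits nonzero when the local node is unhealthy or disabled):

package main

import (
	"os"
	"os/exec"
)

// Sketch: run `ctdb nodestatus`, echo its output (this is what surfaces
// in the pod events), and propagate its exit status so that a DISABLED
// node fails the readiness probe.
func main() {
	cmd := exec.Command("ctdb", "nodestatus")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			os.Exit(exitErr.ExitCode())
		}
		os.Exit(1)
	}
}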
Checking the spec of the StatefulSet that was created:
$ kubectl edit statefulset smbshare3
...
        - name: SAMBACC_CTDB
          value: ctdb-is-experimental
        image: quay.io/samba.org/samba-server:latest
        imagePullPolicy: Always
        name: ctdb
        readinessProbe:
          exec:
            command:
            - samba-container
            - check
            - ctdb-nodestatus
          failureThreshold: 3
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
...
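For reference, a sketch of how such a probe could be constructed in the operator code using the k8s.io/api/core/v1 types; the function name is illustrative, and in client-go releases before v0.23 the embedded field is named Handler rather than ProbeHandler:

package resources

import (
	corev1 "k8s.io/api/core/v1"
)

// ctdbReadinessProbe (illustrative name) builds the exec readiness
// probe shown in the statefulset spec above.
func ctdbReadinessProbe() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"samba-container", "check", "ctdb-nodestatus"},
			},
		},
		FailureThreshold: 3,
		PeriodSeconds:    10,
		SuccessThreshold: 1,
		TimeoutSeconds:   1,
	}
}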
/test all
Use samba-container check ctdb-nodestatus as readiness probe for the ctdb container.
Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>