sergey-safarov opened 5 years ago
I can do this by adding a custom readiness script and the jq utility to the Docker image, but I think this would be a useful feature for the project and the community.
I know it isn't what you're asking for, but: have you seen the Helm chart written by @kocolosk that includes the sidecar for this purpose?
Actually, if you use the seedlist functionality implemented in #1658, then /_up turns into a readiness probe; i.e., it will return 404 until the internal system databases are replicated to the new Pod, and then it will flip to 200, indicating it's ready to handle requests.
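In shell terms, the probe described above amounts to checking the status code of /_up. A minimal sketch of that decision (the canned `code` value stands in for a live cluster response, since no node is assumed to be running here):

```shell
# Sketch of the readiness decision a kubelet httpGet probe makes on /_up.
# Against a live node you would fetch the code with:
#   code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:5984/_up)
code=404   # canned value standing in for a not-yet-seeded node

if [ "$code" = "200" ]; then
  READY=true    # seedlist replication finished; pod can serve traffic
else
  READY=false   # still syncing internal databases; pod stays out of the Service
fi
echo "$READY"
```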
That was an accidental close on my part. @sergey-safarov , does what I described match what you were thinking of?
Yes, I can see the feature exists. I updated my yaml files to:

Service

```yaml
# file contains database headless service
# creates kubernetes dns records for database daemons
# required for database nodes discovery
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  type: ClusterIP
  clusterIP: None
  publishNotReadyAddresses: true
  selector:
    app: db
```
StatefulSet

```yaml
# file contains database daemons
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
  labels:
    app: db
spec:
  podManagementPolicy: Parallel
  serviceName: db
  replicas: 5
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      restartPolicy: Always
      containers:
        - name: node
          image: couchdb:2.3.1
          imagePullPolicy: IfNotPresent
          env:
            - name: NODE_NETBIOS_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NODENAME
              value: $(NODE_NETBIOS_NAME).db
            - name: COUCHDB_SECRET
              value: monster
            # duplicate env entries with the same name collapse to the last
            # one, so both Erlang flags must live in a single ERL_FLAGS value
            - name: ERL_FLAGS
              value: "-name couchdb -setcookie monster"
          command:
            - /bin/sh
          args:
            - "-c"
            - |
              sed -i -E -e '/^\s+exec gosu couchdb/i printf "\\n[cluster]\\nseedlist = couchdb@db-0.db,couchdb@db-1.db,couchdb@db-2.db,couchdb@db-3.db,couchdb@db-4.db\\n" >> /opt/couchdb/etc/local.d/docker.ini' /docker-entrypoint.sh
              tini -- /docker-entrypoint.sh /opt/couchdb/bin/couchdb
          volumeMounts:
            - name: pvc
              mountPath: /opt/couchdb/data
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /
              port: 5984
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /_up
              port: 5984
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
  volumeClaimTemplates:
    - metadata:
        name: pvc
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 128Gi
        # note: a fixed volumeName binds every replica's claim to the same PV
        volumeName: db
```
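As an aside, the sed patch of the entrypoint could likely be avoided by shipping the seedlist in a ConfigMap mounted into /opt/couchdb/etc/local.d. A rough sketch, assuming a ConfigMap name of my own invention (`seedlist-config` is not from the manifests above):

```yaml
# Hypothetical alternative: deliver the seedlist as a config file instead
# of rewriting /docker-entrypoint.sh at container start.
apiVersion: v1
kind: ConfigMap
metadata:
  name: seedlist-config   # made-up name for this sketch
data:
  seedlist.ini: |
    [cluster]
    seedlist = couchdb@db-0.db,couchdb@db-1.db,couchdb@db-2.db,couchdb@db-3.db,couchdb@db-4.db
```

The StatefulSet would then mount the `seedlist.ini` key (e.g. via a `configMap` volume with `subPath`) alongside docker.ini, and the container command could stay at the stock entrypoint.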
And now the content of /opt/couchdb/etc/local.d/docker.ini:
```ini
[couch_httpd_auth]
secret = monster

[cluster]
seedlist = couchdb@db-0.db,couchdb@db-1.db,couchdb@db-2.db,couchdb@db-3.db,couchdb@db-4.db

[couchdb]
uuid = 75ea5bf7bab92649b9bb53fde29ea091
```
Until quorum is reached, the pods are not ready:
```
[safarov@safarov-dell yaml]$ kubectl get pods -l app=db
NAME   READY   STATUS    RESTARTS   AGE
db-0   0/1     Running   0          22m
db-1   0/1     Running   0          22m
db-2   0/1     Running   0          22m
db-3   0/1     Running   0          22m
db-4   0/1     Running   0          22m
```
When quorum is reached:
```
[safarov@safarov-dell yaml]$ kubectl get pods -l app=db
NAME   READY   STATUS    RESTARTS   AGE
db-0   1/1     Running   0          22m
db-1   1/1     Running   0          22m
db-2   1/1     Running   0          22m
db-3   1/1     Running   0          22m
db-4   1/1     Running   0          22m
```
But some nodes cannot see each other:
```
[safarov@safarov-dell yaml]$ kubectl exec -it db-0 -- /bin/bash
root@db-0:/# curl http://db-0.db:5984/_membership
{"all_nodes":["couchdb@db-0.db","couchdb@db-3.db","couchdb@db-4.db"],"cluster_nodes":["couchdb@db-0.db","couchdb@db-1.db","couchdb@db-2.db","couchdb@db-3.db","couchdb@db-4.db"]}
root@db-0:/# curl http://db-1.db:5984/_membership
{"all_nodes":["couchdb@db-1.db","couchdb@db-2.db","couchdb@db-4.db"],"cluster_nodes":["couchdb@db-0.db","couchdb@db-1.db","couchdb@db-2.db","couchdb@db-3.db","couchdb@db-4.db"]}
root@db-0:/# curl http://db-2.db:5984/_membership
{"all_nodes":["couchdb@db-1.db","couchdb@db-2.db","couchdb@db-4.db"],"cluster_nodes":["couchdb@db-0.db","couchdb@db-1.db","couchdb@db-2.db","couchdb@db-3.db","couchdb@db-4.db"]}
root@db-0:/# curl http://db-3.db:5984/_membership
{"all_nodes":["couchdb@db-0.db","couchdb@db-3.db","couchdb@db-4.db"],"cluster_nodes":["couchdb@db-0.db","couchdb@db-1.db","couchdb@db-2.db","couchdb@db-3.db","couchdb@db-4.db"]}
root@db-0:/# curl http://db-4.db:5984/_membership
{"all_nodes":["couchdb@db-0.db","couchdb@db-1.db","couchdb@db-2.db","couchdb@db-3.db","couchdb@db-4.db"],"cluster_nodes":["couchdb@db-0.db","couchdb@db-1.db","couchdb@db-2.db","couchdb@db-3.db","couchdb@db-4.db"]}
```
Only db-4.db can see all peers; the other nodes cannot. I think this case is related to https://github.com/apache/couchdb/issues/2102.
Summary
For the node readiness check, a simple test that cluster quorum exists is required.
Desired Behaviour
A readiness request will return a response that contains the `quorum_present` string. Then this node may be used to service client requests, and Kubernetes will route client requests to it.
Possible Solution
I do not have one.
Additional context
Kubernetes database and client pods start in random order. We need to be sure that quorum exists before a client connects to the CouchDB cluster.
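Until such an endpoint exists, the custom readiness script mentioned at the top could be approximated in the probe container. This is only a sketch: it counts nodes in a canned /_membership response copied from the db-0 output above (a live probe would fill `MEMBERSHIP` from `curl -s http://localhost:5984/_membership`), and it uses sed/grep instead of jq so there are no extra dependencies; with jq the counts would simply be `.all_nodes | length` and `.cluster_nodes | length`:

```shell
# Canned response copied from the db-0 output above; a live probe would use:
#   MEMBERSHIP=$(curl -s http://localhost:5984/_membership)
MEMBERSHIP='{"all_nodes":["couchdb@db-0.db","couchdb@db-3.db","couchdb@db-4.db"],"cluster_nodes":["couchdb@db-0.db","couchdb@db-1.db","couchdb@db-2.db","couchdb@db-3.db","couchdb@db-4.db"]}'

# Count entries in each list by extracting the bracketed node arrays.
ALL=$(printf '%s' "$MEMBERSHIP" | sed 's/.*"all_nodes":\[\([^]]*\)\].*/\1/' | tr ',' '\n' | grep -c couchdb)
TOTAL=$(printf '%s' "$MEMBERSHIP" | sed 's/.*"cluster_nodes":\[\([^]]*\)\].*/\1/' | tr ',' '\n' | grep -c couchdb)

# Majority quorum: more than half of the configured cluster nodes.
QUORUM=$(( TOTAL / 2 + 1 ))
if [ "$ALL" -ge "$QUORUM" ]; then
  STATUS=quorum_present
else
  STATUS=quorum_absent
fi
echo "$STATUS"
```

With the canned sample (3 of 5 nodes connected), a majority is present, so the script prints quorum_present; an exec-style readiness probe could then grep for that string.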