combor / k8s-mongo-labeler-sidecar

Kubernetes MongoDB sidecar that watches the replica set members and sets a label on the primary node.
BSD 3-Clause "New" or "Revised" License

Error on labeling pod #7

Closed: Peyoz closed this issue 2 years ago

Peyoz commented 2 years ago

Hi, first of all thanks for your work. I'm hitting a weird issue when using your sidecar inside a Bitnami replica set; have a look below:

```
time="2021-11-11T10:49:56Z" level=debug msg="Hosts bson.Array[mongodb-0.mongodb-headless.default.svc.cluster.local:27017, mongodb-1.mongodb-headless.default.svc.cluster.local:27017]"
time="2021-11-11T10:49:56Z" level=debug msg="Found 2 pods"
time="2021-11-11T10:49:56Z" level=debug msg="Setting labels map[app.kubernetes.io/component:mongodb app.kubernetes.io/instance:mongodb app.kubernetes.io/managed-by:Helm app.kubernetes.io/name:mongodb controller-revision-hash:mongodb-6795b9bfcf helm.sh/chart:mongodb-10.29.0 primary:true statefulset.kubernetes.io/pod-name:mongodb-0]"
time="2021-11-11T10:49:56Z" level=error msg="Pod \"mongodb-0\" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)\n\u00a0\u00a0core.PodSpec{\n\u00a0\u00a0\tVolumes:        []core.Volume{{Name: \"datadir\", VolumeSource: core.VolumeSource{PersistentVolumeClaim: &core.PersistentVolumeClaimVolumeSource{ClaimName: \"datadir-mongodb-0\"}}}, {Name: \"scripts\", VolumeSource: core.VolumeSource{ConfigMap: &core.ConfigMapVolumeSource{LocalObjectReference: core.LocalObjectReference{Name: \"mongodb-scripts\"}, DefaultMode: &493}}}, {Name: \"mongodb-token-5q5ms\", VolumeSource: core.VolumeSource{Secret: &core.SecretVolumeSource{SecretName: \"mongodb-token-5q5ms\", DefaultMode: &420}}}},\n\u00a0\u00a0\tInitContainers: nil,\n\u00a0\u00a0\tContainers: []core.Container{\n\u00a0\u00a0\t\t{\n\u00a0\u00a0\t\t\t... // 11 identical fields\n\u00a0\u00a0\t\t\tLivenessProbe:  &core.Probe{Handler: core.Handler{Exec: &core.ExecAction{Command: []string{\"mongo\", \"--disableImplicitSessions\", \"--eval\", \"db.adminCommand('ping')\"}}}, InitialDelaySeconds: 30, TimeoutSeconds: 5, PeriodSeconds: 10, SuccessThreshold: 1, FailureThreshold: 6},\n\u00a0\u00a0\t\t\tReadinessProbe: &core.Probe{Handler: core.Handler{Exec: &core.ExecAction{Command: []string{\"bash\", \"-ec\", \"# Run the proper check depending on the version\\n[[ $(mongo --version | grep \\\"MongoDB shell\\\") =~ ([0-9]+\\\\.[0-9]+\\\\.[0-9]+) ]] && VERSION=${BASH_REMATCH[1]}\\n. /opt/bitnami/scripts/libversion.sh\\nVERSION_MAJOR=\\\"$(get_sematic_version \\\"$VERSION\\\" 1)\\\"\\nVERSION_MINOR=\\\"$(get_sematic_version \\\"$VERSION\\\" 2)\\\"\\nVERSION_PATCH=\\\"$(get_sematic_version \\\"$VERSION\\\" 3)\\\"\\nif [[ \\\"$VERSION_MAJOR\\\" -ge 4 ]] && [[ \\\"$VERSION_MINOR\\\" -ge 4 ]] && [[ \\\"$VERSION_PATCH\\\" -ge 2 ]]; then\\n    mongo --disableImplicitSessions $TLS_OPTIONS --eval 'db.hello().isWritablePrimary || db.hello().secondary' | grep -q 'true'\\nelse\\n    mongo --disableImplicitSessions $TLS_OPTIONS --eval 'db.isMaster().ismaster || db.isMaster().secondary' | grep -q 'true'\\nfi\\n\"}}}, InitialDelaySeconds: 15, TimeoutSeconds: 5, PeriodSeconds: 10, SuccessThreshold: 1, FailureThreshold: 6},\n-\u00a0\t\t\tStartupProbe:   nil,\n+\u00a0\t\t\tStartupProbe: &core.Probe{\n+\u00a0\t\t\t\tHandler: core.Handler{\n+\u00a0\t\t\t\t\tExec: &core.ExecAction{\n+\u00a0\t\t\t\t\t\tCommand: []string{\n+\u00a0\t\t\t\t\t\t\t\"bash\",\n+\u00a0\t\t\t\t\t\t\t\"-ec\",\n+\u00a0\t\t\t\t\t\t\t\"mongo --disableImplicitSessions $TLS_OPTIONS --eval 'db.hello().isWritablePrimary || db.hello().secondary' | grep -q 'true'\\n\",\n+\u00a0\t\t\t\t\t\t},\n+\u00a0\t\t\t\t\t},\n+\u00a0\t\t\t\t},\n+\u00a0\t\t\t\tInitialDelaySeconds: 15,\n+\u00a0\t\t\t\tTimeoutSeconds:      5,\n+\u00a0\t\t\t\tPeriodSeconds:       10,\n+\u00a0\t\t\t\tSuccessThreshold:    1,\n+\u00a0\t\t\t\tFailureThreshold:    30,\n+\u00a0\t\t\t},\n\u00a0\u00a0\t\t\tLifecycle:              nil,\n\u00a0\u00a0\t\t\tTerminationMessagePath: \"/dev/termination-log\",\n\u00a0\u00a0\t\t\t... // 6 identical fields\n\u00a0\u00a0\t\t},\n\u00a0\u00a0\t\t{Name: \"mongo-labeler\", Image: \"korenlev/k8s-mongo-labeler-sidecar\", Env: []core.EnvVar{{Name: \"LABEL_SELECTOR\", Value: \"app.kubernetes.io/component=mongodb,app.kubernetes.io/instance=mongodb,app.kubernetes.io/name=mongodb\"}, {Name: \"NAMESPACE\", Value: \"default\"}, {Name: \"DEBUG\", Value: \"true\"}}, VolumeMounts: []core.VolumeMount{{Name: \"mongodb-token-5q5ms\", ReadOnly: true, MountPath: \"/var/run/secrets/kubernetes.io/serviceaccount\"}}, TerminationMessagePath: \"/dev/termination-log\", TerminationMessagePolicy: \"File\", ImagePullPolicy: \"Always\"},\n\u00a0\u00a0\t},\n\u00a0\u00a0\tEphemeralContainers: nil,\n\u00a0\u00a0\tRestartPolicy:       \"Always\",\n\u00a0\u00a0\t... // 25 identical fields\n\u00a0\u00a0}\n"
```

It is correctly discovering the pods and making the right decision, but unfortunately it is unable to label the pod. I can still apply the label manually through kubectl.
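For reference, the manual workaround mentioned above would look something like this (pod name, namespace, and label value taken from the log output; adjust to your setup):

```shell
# Manually set the label the sidecar failed to apply.
kubectl label pod mongodb-0 primary=true --overwrite -n default

# Verify which pod currently carries the label.
kubectl get pods -n default -L primary
```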

Can you please advise on this issue?

Best regards, Alex

combor commented 2 years ago

It's hard to read this error, but either the k8s schema has changed or you have some RBAC problems.
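If RBAC is the culprit, the service account the sidecar runs under needs read access plus `patch`/`update` on pods. A hypothetical Role/RoleBinding sketch (the names here are placeholders, not taken from this thread):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mongo-labeler
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mongo-labeler
  namespace: default
subjects:
  - kind: ServiceAccount
    name: mongodb        # service account used by the MongoDB pods
    namespace: default
roleRef:
  kind: Role
  name: mongo-labeler
  apiGroup: rbac.authorization.k8s.io
```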

Peyoz commented 2 years ago

I suppose it is something related to the schema; see below:

```
time="2021-11-11T10:49:56Z" level=error msg="Pod \"mongodb-0\" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
```

It looks like there is something missing from (or extra in) the spec.
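For what it's worth, the error suggests the labeler is sending a full pod update, which the API server rejects because the pod spec it submits differs from the live one (here, the StartupProbe added by a newer Bitnami chart). Patching only `metadata.labels` sidesteps pod-spec immutability entirely. A minimal sketch, assuming the official `kubernetes` Python client; the pod name, namespace, and label come from the log above, and the API call is commented out so the snippet stays self-contained:

```python
# Build a merge-patch body that touches only metadata.labels, never spec.
# Pod updates may not change spec fields, but label-only patches are allowed.
def build_label_patch(labels):
    """Return a patch body that sets the given labels and nothing else."""
    return {"metadata": {"labels": labels}}

patch = build_label_patch({"primary": "true"})
print(patch)  # contains no "spec" key, so the API server cannot see a spec diff

# Applying it needs a live cluster, so it is left commented out:
# from kubernetes import client, config
# config.load_incluster_config()
# client.CoreV1Api().patch_namespaced_pod(
#     name="mongodb-0", namespace="default", body=patch)
```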

presidenten commented 2 years ago

Did anyone manage to solve this? We have the same problem.

combor commented 2 years ago

This is fixed now.