linkdd opened this issue 4 years ago
What sort of storage are you using? If it supports ReadWriteMany, you may be in the situation where the old pod will only be terminated when the new pod becomes ready.
In this case, you want to change the `spec.strategy.type` to `Recreate`, which will terminate the old pod before creating the new one.
If you're using the stable helm chart, take a look at https://github.com/helm/charts/blob/master/stable/openldap/values.yaml#L17
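For illustration, this is roughly what the rendered Deployment looks like with that change (a sketch only; the name and image tag are assumptions, the real values come from the chart):

```yaml
# A Recreate strategy tears the old Pod down first, so its volume is released
# before the replacement Pod tries to mount it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openldap                       # assumed release name
spec:
  replicas: 1
  strategy:
    type: Recreate                     # default is RollingUpdate
  selector:
    matchLabels:
      app: openldap
  template:
    metadata:
      labels:
        app: openldap
    spec:
      containers:
        - name: openldap
          image: osixia/openldap:1.3.0   # image/tag assumed for illustration
```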
The storage class for my persistent volume is `csi-cinder-high-speed` (the default for OVH managed Kubernetes clusters). But it is still `ReadWriteOnce` in the `values.yaml`.

I don't mind some unavailability as it is not a critical service in my architecture, so I'll go with `spec.strategy.type` set to `Recreate`, thanks!
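For reference, the claim in play is roughly of this shape (a sketch; the claim name and size are assumptions on my part):

```yaml
# ReadWriteOnce volumes can only be attached to a single node at a time,
# which is why a rolling update cannot bring the replacement Pod up while
# the old Pod still holds the volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openldap-data            # assumed claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-cinder-high-speed
  resources:
    requests:
      storage: 8Gi               # assumed size
```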
Does this work?
Yes it does, but I'll leave the issue open as it may not be a reliable solution for everyone.
I'll also look later into transforming the `Deployment` into a `StatefulSet` with replication always enabled. That way each Pod will have its own `PersistentVolumeClaim`, and new Pods during an upgrade will just be new replicated nodes of the LDAP cluster (thus avoiding the issue completely).
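Something like the following is what I have in mind (only a sketch; names, paths and sizes are placeholders, and the actual LDAP replication would still need to be configured by the image/chart):

```yaml
# volumeClaimTemplates give each replica its own PVC, so an upgrade never
# needs two Pods sharing the same volume.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: openldap                     # assumed name
spec:
  serviceName: openldap-headless     # assumed headless Service
  replicas: 2                        # replication always enabled
  selector:
    matchLabels:
      app: openldap
  template:
    metadata:
      labels:
        app: openldap
    spec:
      containers:
        - name: openldap
          image: osixia/openldap:1.3.0   # image/tag assumed for illustration
          volumeMounts:
            - name: data
              mountPath: /var/lib/ldap   # assumed data path
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: csi-cinder-high-speed
        resources:
          requests:
            storage: 8Gi               # assumed size
```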
Yeah, but is it an issue with this Docker image per se? I think it is more an issue with how the image is used. OpenLDAP itself does not seem to be OK with two instances using the same files at the same time, which is why that upstream failsafe was put in place...
I'm using the official Helm chart to deploy LDAP on Kubernetes: https://github.com/helm/charts/tree/master/stable/openldap

The Pod is managed by a Deployment, which means that during an upgrade a new Pod is created and the old one is only terminated once the new one becomes ready (the default RollingUpdate strategy).

But when using a PersistentVolume to store the data, the new Pod fails with the following error: