osixia / docker-openldap

OpenLDAP container image 🐳🌴
MIT License

Database already in use with Kubernetes Deployment and PV #423

Open linkdd opened 4 years ago

linkdd commented 4 years ago

I'm using the official Helm chart to deploy LDAP on Kubernetes: https://github.com/helm/charts/tree/master/stable/openldap

The Pod is managed by a Deployment, which means a rolling update creates the new Pod before terminating the old one.

But when using a PersistentVolume to store the data, the new Pod fails with the following error:

2020-05-02T15:27:09.793803326Z Start OpenLDAP...
2020-05-02T15:27:09.801137848Z Waiting for OpenLDAP to start...
2020-05-02T15:27:09.817523631Z 5ead914d @(#) $OpenLDAP: slapd  (Feb  9 2019 17:02:42) $
2020-05-02T15:27:09.817547052Z  Debian OpenLDAP Maintainers <pkg-openldap-devel@lists.alioth.debian.org>
2020-05-02T15:27:09.81779976Z 5ead914d daemon: bind(8) failed errno=99 (Cannot assign requested address)
2020-05-02T15:27:09.832858681Z 5ead914d hdb_db_open: database "dc=***,dc=***": database already in use.
2020-05-02T15:27:09.832881798Z 5ead914d backend_startup_one (type=hdb, suffix="dc=***,dc=***"): bi_db_open failed! (-1)
2020-05-02T15:27:09.833524212Z 5ead914d slapd stopped.
funkypenguin commented 4 years ago

What sort of storage are you using? If it supports ReadWriteMany, you may be in the situation where the old pod will only be terminated when the new pod becomes ready.
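As a quick check, the access modes of the claims can be listed with kubectl (assuming the PVC lives in the release's namespace):

```
# Show each PersistentVolumeClaim with its access modes (RWO vs RWX)
kubectl get pvc -o custom-columns=NAME:.metadata.name,ACCESS:.spec.accessModes
```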

In this case, you want to change the spec.strategy.type to Recreate, which will terminate the old pod before creating the new one.
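For reference, the change looks like this in a raw Deployment manifest (a minimal sketch; only the relevant stanza is shown and the name is a placeholder):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openldap        # placeholder name
spec:
  strategy:
    type: Recreate      # terminate the old Pod before creating the new one
  # ... rest of the Deployment spec unchanged ...
```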

If you're using the stable helm chart, take a look at https://github.com/helm/charts/blob/master/stable/openldap/values.yaml#L17


linkdd commented 4 years ago

The storage class for my persistent volume is csi-cinder-high-speed (the default for OVH managed Kubernetes clusters). But it is still ReadWriteOnce in the values.yaml.

I don't mind some unavailability as it is not a critical service in my architecture, so I'll set spec.strategy.type to Recreate, thanks!

Azleal commented 4 years ago

> The storage class for my persistent volume is csi-cinder-high-speed (the default for OVH managed Kubernetes clusters). But it is still ReadWriteOnce in the values.yaml.
>
> I don't mind some unavailability as it is not a critical service in my architecture, so I'll set spec.strategy.type to Recreate, thanks!

does this work?

linkdd commented 4 years ago

Yes it does, but I'll leave the issue open since it may not be a reliable solution for everyone.

I'll also look later into transforming the Deployment into a StatefulSet with replication always enabled. That way each Pod will have its own PersistentVolumeClaim, and new Pods created during an upgrade will just be new replica nodes of the LDAP cluster (thus avoiding the issue completely).
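A rough sketch of what that StatefulSet could look like (names, image tag, and storage size are placeholders; the actual LDAP replication setup, e.g. syncrepl in cn=config, is omitted here):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: openldap            # placeholder name
spec:
  serviceName: openldap     # headless Service giving each Pod a stable DNS name
  replicas: 2
  selector:
    matchLabels:
      app: openldap
  template:
    metadata:
      labels:
        app: openldap
    spec:
      containers:
        - name: openldap
          image: osixia/openldap
          volumeMounts:
            - name: data
              mountPath: /var/lib/ldap
  # Each replica gets its own PVC, so no two Pods ever share database files
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 8Gi    # placeholder size
```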

gijoe88 commented 4 years ago

Yeah, but is it an issue with this Docker image per se? I think it is more an issue with how the image is used. OpenLDAP itself does not seem to be OK with two instances using the same database files at the same time; that is why that upstream failsafe was put in place...