osixia / docker-openldap

OpenLDAP container image 🐳🌴
MIT License

Multi Master Replication not working #213

Open bsakweson opened 6 years ago

bsakweson commented 6 years ago

Team,

I am in the process of implementing a Helm chart to deploy LDAP in Kubernetes. I used a StatefulSet rather than multiple deployments. Here is some information about my environment:

I have successfully created the Helm chart and everything looks good, except that I am not able to get replication to work.

Below is my values.yaml

replicaCount: 3

image:
  repository: osixia/openldap
  tag: 1.2.0
  pullPolicy: IfNotPresent

service:
  type: LoadBalancer
  ports:
    ldap: 389
    ldaps: 636

ingress:
  enabled: false
  annotations:
    kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - ldap.bakalr.com
  #tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

env:
  logLevel: 256
  organization: "Bakalr Inc."
  domain: "bakalr.com"
  adminPassword: ""
  configPassword: ""
  readonlyUser: "false"
  username: "readonly"
  password: ""
  rfc2307bis: "false"
  backend: mdb
  tls: "false"
  tlsCrt: ldap.crt
  tlsKey: ldap.key
  tlsCaCert: ca.crt
  tlsForce: "false"
  tlsCipher: "SECURE256:+SECURE128:-VERS-TLS-ALL:+VERS-TLS1.2:-RSA:-DHE-DSS:-CAMELLIA-128-CBC:-CAMELLIA-256-CBC"
  tlsVerifyClient: "demand"
  replication: "true"
  replicationHosts: "#PYTHON2BASH:['ldap://ldap-0, ldap://ldap-1, ldap://ldap-2']"
  keepExistingConfig: "true"
  removeConfigAfterSetup: "true"
  sslHelperPrefix: ldap

persistence:
  storageClassName: glusterfs
  accessModes:
    - ReadWriteOnce
  size:
    data: 100Gi
    config: 500Mi
    certs: 50Mi

resources:
  limits:
    cpu: 100m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 256Mi

nodeSelector: {}

tolerations: []

affinity: {}

Here is my statefulset:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ template "ldap.fullname" . }}
  labels:
    app: {{ template "ldap.name" . }}
    chart: {{ template "ldap.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  serviceName: {{ template "ldap.fullname" . }}-master
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "ldap.name" . }}
  template:
    metadata:
      labels:
        app: {{ template "ldap.name" . }}
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ##args: ["--copy-service"]
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: LDAP_REPLICATION_CONFIG_SYNCPROV
              value: "binddn=\"cn=admin,cn=config\" bindmethod=simple credentials=$LDAP_CONFIG_PASSWORD searchbase=\"cn=config\" type=refreshAndPersist retry=\"60 +\" timeout=1 starttls=critical"
            - name: LDAP_REPLICATION_DB_SYNCPROV
              value: "binddn=\"cn=admin,$LDAP_BASE_DN\" bindmethod=simple credentials=$LDAP_ADMIN_PASSWORD searchbase=\"$LDAP_BASE_DN\" type=refreshAndPersist interval=00:00:00:10 retry=\"60 +\" timeout=1 starttls=critical"
          envFrom:
            - configMapRef:
                name: {{ template "ldap.fullname" . }}
            - secretRef:
                name: {{ template "ldap.fullname" . }}
          volumeMounts:
            - name: data
              mountPath: /var/lib/ldap
            - name: config
              mountPath: /etc/ldap/slapd.d
            - name: certs
              mountPath: /container/service/slapd/assets/certs
          securityContext:
            runAsUser: 0
          ports:
            - name: ldap
              containerPort: 389
              protocol: TCP
            - name: ldaps
              containerPort: 636
              protocol: TCP
          readinessProbe:
            tcpSocket:
              port: 389
            initialDelaySeconds: 60
            periodSeconds: 10
          livenessProbe:
            tcpSocket:
              port: 389
            initialDelaySeconds: 60
            periodSeconds: 20
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: {{ .Values.persistence.accessModes }}
        storageClassName:  {{ .Values.persistence.storageClassName }}
        resources:
          requests:
            storage: {{ .Values.persistence.size.data }}
    - metadata:
        name: config
      spec:
        accessModes: {{ .Values.persistence.accessModes }}
        storageClassName:  {{ .Values.persistence.storageClassName }}
        resources:
          requests:
            storage: {{ .Values.persistence.size.config }}
    - metadata:
        name: certs
      spec:
        accessModes: {{ .Values.persistence.accessModes }}
        storageClassName:  {{ .Values.persistence.storageClassName }}
        resources:
          requests:
            storage: {{ .Values.persistence.size.certs }}

I do not intend to use SSL or TLS because communication between nodes happens behind a firewall and should use cluster IPs.

I am able to connect to my three pods with ApacheDS through their respective service IP addresses. However, when I create a resource on any of my multi-master nodes, that newly created resource is not replicated to the remaining two nodes.

All pods work independently, but replication does not. What am I missing?

Dexyon commented 6 years ago

Can you perform some tests to see if the pods can see each other?

for i in 0 1; do kubectl exec POD_NAME-$i -- sh -c 'hostname'; done

And execute a DNS lookup from a test container:

kubectl run -i --tty --image busybox dns-test --restart=Never --rm /bin/sh

So in your example that would be:

nslookup ldap-0

Since you are using a StatefulSet, do you have a headless service in place? This service is required for the pods in a StatefulSet (STS) to reach each other via their DNS names.

E.g., for a headless service:

apiVersion: v1
kind: Service
metadata:
  name: openldap-multi # NOTE this must be the same name as your STS
  labels:
    app: openldap-multi
spec:
  ports:
  - port: 389
    name: ldap
  clusterIP: None
  selector:
    app: openldap-multi
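
With a headless service like this, every pod in the StatefulSet gets a stable DNS record of the form pod-name.service-name.namespace.svc.cluster.local. A minimal lookup from a throwaway busybox pod, assuming the STS and its headless service are both called openldap-multi and run in the default namespace (adjust to your names):

kubectl run -i --image busybox dns-test --restart=Never --rm -- \
  nslookup openldap-multi-0.openldap-multi.default.svc.cluster.local
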
bsakweson commented 6 years ago

Yes, I have a headless service in place. I also performed the tests you asked for above, and here are my results.

/ # nslookup ldap-0
Server:    10.233.0.3
Address 1: 10.233.0.3 kube-dns.kube-system.svc.cluster.local

Name:      ldap-0
Address 1: xxx.xxx.xxx.130
Address 2: xxx.xxx.xxx.44

/ # nslookup ldap-1
Server:    10.233.0.3
Address 1: 10.233.0.3 kube-dns.kube-system.svc.cluster.local

Name:      ldap-1
Address 1: xxx.xxx.xxx.130
Address 2: xxx.xxx.xxx.44

/ # nslookup ldap-2
Server:    10.233.0.3
Address 1: 10.233.0.3 kube-dns.kube-system.svc.cluster.local

Name:      ldap-2
Address 1: xxx.xxx.xxx.130
Address 2: xxx.xxx.xxx.44

Those masked IPs are my public IPs.

Dexyon commented 6 years ago

Are the IP addresses of ldap-0, ldap-1 and ldap-2 different?

bsakweson commented 6 years ago

No, they are exactly the same.


Dexyon commented 6 years ago

I assume that the pods should replicate over their internal IPs, or is there a reason why they are synchronising via the public network?

bsakweson commented 6 years ago

Great question. Because this is running on Kubernetes, each pod has a corresponding service. I tried using those services but could not get it to work. I will try again later today and see how that goes. I will certainly report back with my findings.


bsakweson commented 6 years ago

I still can't get this working. Has anyone been able to get this working on Kubernetes with StatefulSets? If so, can you share?

Lazast commented 6 years ago

I met the same issue. I run multi-master OpenLDAP on two VMs with Docker; these are the commands:

docker run --name ldap1 --restart=always --hostname ldap1.example.com --env LDAP_BASE_DN="cn=admin,dc=abc,dc=com" --env LDAP_REPLICATION_HOSTS="#PYTHON2BASH:['ldap://ldap1.example.com','ldap://ldap2.example.com']" --env LDAP_REPLICATION=true --env LDAP_TLS_VERIFY_CLIENT="never" --env LDAP_DOMAIN="abc.com" --env LDAP_ADMIN_PASSWORD="12345" -p 389:389 --detach osixia/openldap:1.2.1 --loglevel debug --copy-service

docker run --name ldap2 --restart=always --hostname ldap2.example.com --env LDAP_BASE_DN="cn=admin,dc=abc,dc=com" --env LDAP_REPLICATION_HOSTS="#PYTHON2BASH:['ldap://ldap1.example.com','ldap://ldap2.example.com']" --env LDAP_REPLICATION=true --env LDAP_TLS_VERIFY_CLIENT="never" --env LDAP_DOMAIN="abc.com" --env LDAP_ADMIN_PASSWORD="12345" -p 389:389 --detach osixia/openldap:1.2.1 --loglevel debug --copy-service

I added the hosts entries in both containers, and they can query each other via ldapsearch from inside the containers. But when I add an entry on one LDAP server, the other one does not get it; replication does not take effect.

@bsakweson @Dexyon Do you have any progress on it? Thank you very much.

@bsakweson @Dexyon My issue has been resolved: LDAP_BASE_DN was not set correctly. When LDAP_BASE_DN is set to "dc=abc,dc=com", it works. Hope this can be of some help.
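
For reference, the first command above with only that change applied (LDAP_BASE_DN must be the directory suffix that matches LDAP_DOMAIN, not the admin DN):

docker run --name ldap1 --restart=always --hostname ldap1.example.com \
  --env LDAP_BASE_DN="dc=abc,dc=com" \
  --env LDAP_REPLICATION_HOSTS="#PYTHON2BASH:['ldap://ldap1.example.com','ldap://ldap2.example.com']" \
  --env LDAP_REPLICATION=true --env LDAP_TLS_VERIFY_CLIENT="never" \
  --env LDAP_DOMAIN="abc.com" --env LDAP_ADMIN_PASSWORD="12345" \
  -p 389:389 --detach osixia/openldap:1.2.1 --loglevel debug --copy-service
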

obellagamba commented 6 years ago

For me, it got fixed by setting the HOSTNAME env var to include the "service name". I.e.:

env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: HOSTNAME
    value: "$(MY_POD_NAME).{{ .Release.Name }}-ldap.{{ .Release.Namespace }}"

where {{ .Release.Name }}-ldap matches the service name.

And then set LDAP_REPLICATION_HOSTS to use the same domain name format as above.
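
For illustration (placeholder names, not from the original comment), with pods named ldap-0/ldap-1/ldap-2 and the service naming shown above, that value would look something like:

LDAP_REPLICATION_HOSTS: "#PYTHON2BASH:['ldap://ldap-0.{{ .Release.Name }}-ldap.{{ .Release.Namespace }}','ldap://ldap-1.{{ .Release.Name }}-ldap.{{ .Release.Namespace }}','ldap://ldap-2.{{ .Release.Name }}-ldap.{{ .Release.Namespace }}']"
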

Hope this helps!

tuwid commented 6 years ago

With 3 hosts it's not working for me. I'm trying to do something like:

LDAP_CID=$(docker run --hostname ldap.example.org --env LDAP_REPLICATION=true --env LDAP_REPLICATION_HOSTS="#PYTHON2BASH:['ldap://ldap2.example.org','ldap://ldap3.example.org']" --detach osixia/openldap:1.2.1)
LDAP_IP=$(docker inspect -f "{{ .NetworkSettings.IPAddress }}" $LDAP_CID)
LDAP2_CID=$(docker run --hostname ldap2.example.org --env LDAP_REPLICATION=true --env LDAP_REPLICATION_HOSTS="#PYTHON2BASH:['ldap://ldap.example.org','ldap://ldap3.example.org']" --detach osixia/openldap:1.2.1)
LDAP2_IP=$(docker inspect -f "{{ .NetworkSettings.IPAddress }}" $LDAP2_CID)
LDAP3_CID=$(docker run --hostname ldap3.example.org --env LDAP_REPLICATION=true --env LDAP_REPLICATION_HOSTS="#PYTHON2BASH:['ldap://ldap.example.org','ldap://ldap2.example.org']" --detach osixia/openldap:1.2.1)
LDAP3_IP=$(docker inspect -f "{{ .NetworkSettings.IPAddress }}" $LDAP3_CID)

docker exec $LDAP_CID bash -c "echo $LDAP2_IP ldap2.example.org >> /etc/hosts"
docker exec $LDAP_CID bash -c "echo $LDAP3_IP ldap3.example.org >> /etc/hosts"
docker exec $LDAP2_CID bash -c "echo $LDAP_IP ldap.example.org >> /etc/hosts"
docker exec $LDAP2_CID bash -c "echo $LDAP3_IP ldap3.example.org >> /etc/hosts"
docker exec $LDAP3_CID bash -c "echo $LDAP2_IP ldap2.example.org >> /etc/hosts"
docker exec $LDAP3_CID bash -c "echo $LDAP_IP ldap.example.org >> /etc/hosts"

I thought this would be easier than baking my own image, though.
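
One way to avoid the manual /etc/hosts edits (a standard Docker feature, not something from this thread) is to put the containers on a user-defined network and give each one a network alias matching its replication hostname; a sketch for the first host, assuming all three containers run on the same Docker host:

docker network create ldap-net
docker run --name ldap1 --hostname ldap.example.org \
  --network ldap-net --network-alias ldap.example.org \
  --env LDAP_REPLICATION=true \
  --env LDAP_REPLICATION_HOSTS="#PYTHON2BASH:['ldap://ldap.example.org','ldap://ldap2.example.org','ldap://ldap3.example.org']" \
  --detach osixia/openldap:1.2.1
# Start ldap2 and ldap3 the same way, each with its own --hostname and --network-alias;
# containers on the same user-defined network resolve each other by alias automatically.
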

rmaugusto commented 5 years ago

For me, it got fixed by setting the HOSTNAME env var to include the "service name". I.e.:

env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: HOSTNAME
    value: "$(MY_POD_NAME).{{ .Release.Name }}-ldap.{{ .Release.Namespace }}"

Where {{ .Release.Name }}-ldap matches the service name.

And then the LDAP_REPLICATION_HOSTS to use the same domain name format as above.

Hope this helps!

It works for me, thanks! But when a StatefulSet pod is restarted, OpenLDAP does not replicate anymore. How did you fix that?

Dexyon commented 5 years ago

@rmaugusto / @tuwid You should try to avoid IPs in your config, since Kubernetes can't (and should not) guarantee that you will get the same IP after a reboot.

If you use an STS, you should use the DNS names of your services instead of IPs. We use the same approach and haven't had any issues for 6 months (and lots of reboots).

You should have the following config for your STS:

A service to expose the pods to your cluster, so you can access LDAP:

apiVersion: v1
kind: Service
metadata:
  name: user-management-internal
  labels:
    app: openldap-multi
spec:
  ports:
  - port: 389 
    name: unsecure 
  selector:
    app: openldap-multi        

A service to create DNS names for your STS pods (headless service):

apiVersion: v1
kind: Service
metadata:
  name: user-management-sts
  labels:
    app: openldap-multi
spec:
  ports:
  - port: 389
    name: unsecure
  clusterIP: None
  selector:
    app: openldap-multi

And in your STS you should have the following ENV variables:

        - name: LDAP_REPLICATION
          value: "true"
        - name: LDAP_REPLICATION_HOSTS
          value: "#PYTHON2BASH:['ldap://user-management-sts-0.user-management-sts','ldap://user-management-sts-1.user-management-sts','ldap://user-management-sts-2.user-management-sts']"
        - name: LDAP_REPLICATION_CONFIG_SYNCPROV
          value: "binddn=\"cn=admin,cn=config\" bindmethod=simple credentials=$LDAP_CONFIG_PASSWORD searchbase=\"cn=config\" type=refreshAndPersist retry=\"30 +\" timeout=1"
        - name: LDAP_REPLICATION_DB_SYNCPROV
          value: "binddn=\"cn=admin,$LDAP_BASE_DN\" bindmethod=simple credentials=$LDAP_ADMIN_PASSWORD searchbase=\"$LDAP_BASE_DN\" type=refreshAndPersist interval=00:00:00:01 retry=\"60 +\" timeout=1"
        - name: LDAP_REMOVE_CONFIG_AFTER_SETUP
          value: "false"
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: HOSTNAME
          value: $(POD_NAME).user-management-sts          

Each pod's DNS name follows the pattern pod-name.headless-service-name. Don't forget that you have to create the POD_NAME and HOSTNAME env variables as well, so your container has a notion of its own name.
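
To sanity-check replication, a sketch of a query run from one pod against another via the headless-service DNS name (the pod and service names follow the example above; the base DN dc=example,dc=org and the password "admin" are placeholders, substitute your own):

kubectl exec user-management-sts-0 -- ldapsearch -x \
  -H ldap://user-management-sts-1.user-management-sts \
  -D "cn=admin,dc=example,dc=org" -w admin \
  -b "dc=example,dc=org" "(objectClass=*)" dn

An entry added on user-management-sts-0 should show up in this search against user-management-sts-1 after the configured syncrepl interval.
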

Aldiuser commented 3 years ago

Even though this thread is a bit older: I struggled a lot with replication, too. In the end I succeeded when I put the full DNS names in the Helm values file. Here is an example with three pods:

LDAP_REPLICATION_HOSTS: "#PYTHON2BASH:['ldap://ldap-0.ldap.<namespace>.svc.cluster.local','ldap://ldap-1.ldap.<namespace>.svc.cluster.local','ldap://ldap-2.ldap.<namespace>.svc.cluster.local']"

ldap-0 is the pod name and ldap is the name of the headless service.
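
For completeness, a sketch of the headless service these names assume (the service is called ldap; the app: ldap selector is an assumption here, adjust it to your chart's pod labels):

apiVersion: v1
kind: Service
metadata:
  name: ldap
spec:
  clusterIP: None
  ports:
    - port: 389
      name: ldap
  selector:
    app: ldap

The StatefulSet's serviceName must also be set to ldap so the pods are registered as ldap-0.ldap, ldap-1.ldap, and so on.
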