bsakweson opened this issue 6 years ago
Can you perform some tests to see if the pods can see each other?
for i in 0 1; do kubectl exec POD_NAME-$i -- sh -c 'hostname'; done
And execute a DNS lookup in this container:
kubectl run -i --tty --image busybox dns-test --restart=Never --rm /bin/sh
So in your example that would be:
nslookup ldap-0
Since you are using a StatefulSet, do you have a headless service in place? This service is required for the pods in a StatefulSet (STS) to reach each other via their DNS names.
E.g. for a headless service:
apiVersion: v1
kind: Service
metadata:
  name: openldap-multi # NOTE this must be the same name as your STS
  labels:
    app: openldap-multi
spec:
  ports:
    - port: 389
      name: ldap
  clusterIP: None
  selector:
    app: openldap-multi
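For completeness, the matching StatefulSet needs to reference that headless service through spec.serviceName; a minimal sketch (the image and replica count are placeholders) would be:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: openldap-multi
spec:
  serviceName: openldap-multi   # must match the headless service name above
  replicas: 2
  selector:
    matchLabels:
      app: openldap-multi
  template:
    metadata:
      labels:
        app: openldap-multi
    spec:
      containers:
        - name: openldap
          image: osixia/openldap:1.2.1
          ports:
            - containerPort: 389
              name: ldap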
Yes, I have a headless service in place. I also performed the tests you asked for above, and here are my results.
/ # nslookup ldap-0
Server: 10.233.0.3
Address 1: 10.233.0.3 kube-dns.kube-system.svc.cluster.local
Name: ldap-0
Address 1: xxx.xxx.xxx.130
Address 2: xxx.xxx.xxx.44
/ # nslookup ldap-1
Server: 10.233.0.3
Address 1: 10.233.0.3 kube-dns.kube-system.svc.cluster.local
Name: ldap-1
Address 1: xxx.xxx.xxx.130
Address 2: xxx.xxx.xxx.44
/ # nslookup ldap-2
Server: 10.233.0.3
Address 1: 10.233.0.3 kube-dns.kube-system.svc.cluster.local
Name: ldap-2
Address 1: xxx.xxx.xxx.130
Address 2: xxx.xxx.xxx.44
Those masked IPs are my public IPs.
Are the IP addresses of ldap-0, ldap-1 and ldap-2 different?
No, they are exactly the same.
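If the short names resolve to your public IPs, the lookup is most likely falling through to an upstream wildcard DNS via the search path instead of hitting the cluster DNS records. A quick check from the busybox pod (assuming the headless service is named ldap and the pods run in the default namespace; adjust both) would be:
nslookup ldap-0.ldap.default.svc.cluster.local
nslookup ldap-1.ldap.default.svc.cluster.local
Each of these should return a single, distinct pod IP if the headless service is wired up correctly.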
I assume that the pods should replicate on their internal IPs, or is there a reason why they are synchronising via the public network?
Great question. Because this is running on Kubernetes, each pod has a corresponding service. I tried using those services but could not get it to work. I will try again later today and see how that goes. I will certainly report back with my findings.
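For reference, a per-pod Service is usually built by selecting on the statefulset.kubernetes.io/pod-name label that Kubernetes adds to StatefulSet pods; a rough sketch for the first pod (the service name here is made up) looks like:
apiVersion: v1
kind: Service
metadata:
  name: ldap-0-svc            # hypothetical name
spec:
  ports:
    - port: 389
      name: ldap
  selector:
    statefulset.kubernetes.io/pod-name: ldap-0   # label set automatically on STS pods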
I still can't get this working. Has anyone been able to get this working on Kubernetes with StatefulSets? If so, can you share?
I hit the same issue. I run multi-master OpenLDAP on two VMs with Docker; these are the commands:
docker run --name ldap1 --restart=always --hostname ldap1.example.com --env LDAP_BASE_DN="cn=admin,dc=abc,dc=com" --env LDAP_REPLICATION_HOSTS="#PYTHON2BASH:['ldap://ldap1.example.com','ldap://ldap2.example.com']" --env LDAP_REPLICATION=true --env LDAP_TLS_VERIFY_CLIENT="never" --env LDAP_DOMAIN="abc.com" --env LDAP_ADMIN_PASSWORD="12345" -p 389:389 --detach osixia/openldap:1.2.1 --loglevel debug --copy-service
docker run --name ldap2 --restart=always --hostname ldap2.example.com --env LDAP_BASE_DN="cn=admin,dc=abc,dc=com" --env LDAP_REPLICATION_HOSTS="#PYTHON2BASH:['ldap://ldap1.example.com','ldap://ldap2.example.com']" --env LDAP_REPLICATION=true --env LDAP_TLS_VERIFY_CLIENT="never" --env LDAP_DOMAIN="abc.com" --env LDAP_ADMIN_PASSWORD="12345" -p 389:389 --detach osixia/openldap:1.2.1 --loglevel debug --copy-service
Host entries were added in both containers, and they can query each other via ldapsearch from inside the containers. But when I add an entry to one LDAP server, the other doesn't get it; replication doesn't take effect.
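One way to narrow this down is to check whether the syncrepl configuration actually landed in cn=config on both servers; a sketch, assuming the ldapi socket inside the osixia container is usable for SASL EXTERNAL (it normally is):
# An empty result means replication was never configured on that server.
docker exec ldap1 ldapsearch -LLL -Y EXTERNAL -H ldapi:/// -b cn=config "(olcSyncrepl=*)" olcSyncrepl
docker exec ldap2 ldapsearch -LLL -Y EXTERNAL -H ldapi:/// -b cn=config "(olcSyncrepl=*)" olcSyncrepl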
@bsakweson @Dexyon Do you have any progress on it? Thank you very much.
@bsakweson @Dexyon My issue has been resolved: LDAP_BASE_DN was not set correctly. When LDAP_BASE_DN is set to "dc=abc,dc=com", it works. Hope this helps.
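In other words, LDAP_BASE_DN is the directory suffix, not the admin bind DN; if I read the image defaults right, the admin DN is derived from it as cn=admin,<base dn>. Roughly, using the values from the commands above:
# Wrong: sets the suffix itself to an admin-style DN
--env LDAP_BASE_DN="cn=admin,dc=abc,dc=com"
# Right: suffix only; the admin then binds as cn=admin,dc=abc,dc=com
--env LDAP_BASE_DN="dc=abc,dc=com"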
For me, it got fixed by setting the HOSTNAME env var to include the "service name", i.e.:
env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: HOSTNAME
    value: "$(MY_POD_NAME).{{ .Release.Name }}-ldap.{{ .Release.Namespace }}"
Where {{ .Release.Name }}-ldap matches the service name. And then set LDAP_REPLICATION_HOSTS to use the same domain name format as above.
Hope this helps!
With 3 hosts it's not working for me. I'm trying to do something like:
LDAP_CID=$(docker run --hostname ldap.example.org --env LDAP_REPLICATION=true --env LDAP_REPLICATION_HOSTS="#PYTHON2BASH:['ldap://ldap2.example.org','ldap://ldap3.example.org']" --detach osixia/openldap:1.2.1)
LDAP_IP=$(docker inspect -f "{{ .NetworkSettings.IPAddress }}" $LDAP_CID)
LDAP2_CID=$(docker run --hostname ldap2.example.org --env LDAP_REPLICATION=true --env LDAP_REPLICATION_HOSTS="#PYTHON2BASH:['ldap://ldap.example.org','ldap://ldap3.example.org']" --detach osixia/openldap:1.2.1)
LDAP2_IP=$(docker inspect -f "{{ .NetworkSettings.IPAddress }}" $LDAP2_CID)
LDAP3_CID=$(docker run --hostname ldap3.example.org --env LDAP_REPLICATION=true --env LDAP_REPLICATION_HOSTS="#PYTHON2BASH:['ldap://ldap.example.org','ldap://ldap2.example.org']" --detach osixia/openldap:1.2.1)
LDAP3_IP=$(docker inspect -f "{{ .NetworkSettings.IPAddress }}" $LDAP3_CID)
docker exec $LDAP_CID bash -c "echo $LDAP2_IP ldap2.example.org >> /etc/hosts"
docker exec $LDAP_CID bash -c "echo $LDAP3_IP ldap3.example.org >> /etc/hosts"
docker exec $LDAP2_CID bash -c "echo $LDAP_IP ldap.example.org >> /etc/hosts"
docker exec $LDAP2_CID bash -c "echo $LDAP3_IP ldap3.example.org >> /etc/hosts"
docker exec $LDAP3_CID bash -c "echo $LDAP2_IP ldap3.example.org >> /etc/hosts"
docker exec $LDAP3_CID bash -c "echo $LDAP_IP ldap.example.org >> /etc/hosts"
I thought it would be easier than baking my own image, though.
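A lighter alternative to patching /etc/hosts by hand (a sketch, not something I've verified here) is a user-defined Docker network with per-container DNS aliases, since Docker's embedded DNS resolves aliases on such networks:
docker network create ldapnet

docker run --name ldap1 --network ldapnet --network-alias ldap.example.org \
  --hostname ldap.example.org --env LDAP_REPLICATION=true \
  --env LDAP_REPLICATION_HOSTS="#PYTHON2BASH:['ldap://ldap.example.org','ldap://ldap2.example.org','ldap://ldap3.example.org']" \
  --detach osixia/openldap:1.2.1

# Repeat for ldap2.example.org and ldap3.example.org with their own --network-alias values.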
For me, it got fixed by setting the HOSTNAME env var to include the "service name", i.e.:
env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: HOSTNAME
    value: "$(MY_POD_NAME).{{ .Release.Name }}-ldap.{{ .Release.Namespace }}"
Where {{ .Release.Name }}-ldap matches the service name. And then set LDAP_REPLICATION_HOSTS to use the same domain name format as above.
Hope this helps!
It works for me, thanks! But when the StatefulSet pod is restarted, OpenLDAP does not replicate anymore. How did you fix that?
@rmaugusto / @tuwid You should try to avoid IPs in your config, since Kubernetes can't (and should not) guarantee that you will get the same IP after a reboot.
If you use an STS, you should use the DNS names of your services instead of IPs. We are using the same approach and haven't had any issues for 6 months (and lots of reboots).
You should have the following config for your STS:
A service to expose the pods to your cluster, so you can access LDAP:
apiVersion: v1
kind: Service
metadata:
  name: user-management-internal
  labels:
    app: openldap-multi
spec:
  ports:
    - port: 389
      name: unsecure
  selector:
    app: openldap-multi
A service to create DNS names for your STS pods (headless service):
apiVersion: v1
kind: Service
metadata:
  name: user-management-sts
  labels:
    app: openldap-multi
spec:
  ports:
    - port: 389
      name: unsecure
  clusterIP: None
  selector:
    app: openldap-multi
And in your STS you should have the following ENV variables:
- name: LDAP_REPLICATION
  value: "true"
- name: LDAP_REPLICATION_HOSTS
  value: "#PYTHON2BASH:['ldap://user-management-sts-0.user-management-sts','ldap://user-management-sts-1.user-management-sts','ldap://user-management-sts-2.user-management-sts']"
- name: LDAP_REPLICATION_CONFIG_SYNCPROV
  value: "binddn=\"cn=admin,cn=config\" bindmethod=simple credentials=$LDAP_CONFIG_PASSWORD searchbase=\"cn=config\" type=refreshAndPersist retry=\"30 +\" timeout=1"
- name: LDAP_REPLICATION_DB_SYNCPROV
  value: "binddn=\"cn=admin,$LDAP_BASE_DN\" bindmethod=simple credentials=$LDAP_ADMIN_PASSWORD searchbase=\"$LDAP_BASE_DN\" type=refreshAndPersist interval=00:00:00:01 retry=\"60 +\" timeout=1"
- name: LDAP_REMOVE_CONFIG_AFTER_SETUP
  value: "false"
- name: POD_NAME
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.name
- name: HOSTNAME
  value: $(POD_NAME).user-management-sts
The DNS name of each pod follows the pattern pod-name.headless-service-name.
Don't forget you have to create the POD_NAME and HOSTNAME env variables as well, so your container has a notion of its own name.
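To check that those variables actually made it into the container environment (a quick sanity check, using the pod name from the example above):
kubectl exec user-management-sts-0 -- printenv POD_NAME HOSTNAME
# Expected output, roughly:
# user-management-sts-0
# user-management-sts-0.user-management-sts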
Even though this thread is a bit older: I struggled a lot with replication. In the end I succeeded when I wrote the full DNS names in the Helm values file. Here is an example with three pods:
LDAP_REPLICATION_HOSTS: "#PYTHON2BASH:['ldap://ldap-0.ldap.<namespace>.svc.cluster.local','ldap://ldap-1.ldap.<namespace>.svc.cluster.local','ldap://ldap-2.ldap.<namespace>.svc.cluster.local']"
ldap-0 is the pod name and ldap is the name of the headless service.
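If you'd rather not hard-code the list, a small Helm template expression can generate it from the replica count. A sketch, placed in the StatefulSet template rather than the values file (values files aren't templated unless passed through tpl), and assuming a replicaCount value plus the ldap pod/service names above:
- name: LDAP_REPLICATION_HOSTS
  value: "#PYTHON2BASH:[{{- range $i := until (int .Values.replicaCount) }}{{ if $i }},{{ end }}'ldap://ldap-{{ $i }}.ldap.{{ $.Release.Namespace }}.svc.cluster.local'{{- end }}]"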
Team,
I am in the process of implementing a Helm chart to deploy LDAP in Kubernetes. I used a StatefulSet rather than multiple Deployments. Here is some information about my environment:
I have successfully created the Helm chart and everything looks good, except that I am not able to get replication to work.
Below is my values.yaml
Here is my statefulset:
I do not intend to use SSL or TLS because communication between nodes happens behind a firewall and should use cluster IPs.
I am able to connect to my three pods with ApacheDS through their respective service IP addresses. However, when I create a resource on any of my multi-master nodes, that newly created resource is not replicated to the remaining two nodes.
All pods work independently, but replication does not work. What am I missing?
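A quick end-to-end replication check that may help narrow this down (a sketch; pod names, suffix and password are placeholders borrowed from earlier in this thread):
# Add a test entry on the first pod ...
kubectl exec -i ldap-0 -- ldapadd -x -H ldap://localhost \
  -D "cn=admin,dc=abc,dc=com" -w 12345 <<'EOF'
dn: ou=replication-test,dc=abc,dc=com
objectClass: organizationalUnit
ou: replication-test
EOF

# ... then confirm it appears on the second pod within a few seconds.
kubectl exec ldap-1 -- ldapsearch -x -H ldap://localhost \
  -D "cn=admin,dc=abc,dc=com" -w 12345 \
  -b "ou=replication-test,dc=abc,dc=com" -s base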