jp-gouin / helm-openldap

Helm chart for OpenLDAP in high availability with multi-master replication, PhpLdapAdmin, and Ltb-Passwd
Apache License 2.0

OpenLDAP container stops with error "read_config: no serverID / URL match found. Check slapd -h arguments." #79

Closed. crazyelectron-io closed this issue 1 year ago.

crazyelectron-io commented 1 year ago

Describe the bug

After deploying with the provided Helm chart, the two OpenLDAP pods (openldap-0 and openldap-1) fail with the error in the title. The values.yaml file used:

global:
  imageRegistry: ""
  imagePullSecrets: []
  storageClass: "longhorn"
  ldapDomain: "{{ traefik_domain }}"
  adminPassword: Not@SecurePassw0rd
  configPassword: Not@SecurePassw0rd
clusterDomain: "{{ traefik_domain }}"
image:
  repository: osixia/openldap
  tag: 1.5.0
  pullPolicy: Always
  pullSecrets: []
logLevel: debug
customTLS:
  enabled: false
service:
  annotations: {}
  ldapPort: 389
  sslLdapPort: 636
  externalIPs: []
  type: ClusterIP
  sessionAffinity: None
env:
  LDAP_LOG_LEVEL: "256"
  LDAP_ORGANISATION: "Moerman"
  LDAP_READONLY_USER: "false"
  LDAP_READONLY_USER_USERNAME: "readonly"
  LDAP_READONLY_USER_PASSWORD: "readonly"
  LDAP_RFC2307BIS_SCHEMA: "false"
  LDAP_BACKEND: "mdb"
  LDAP_TLS: "true"
  LDAP_TLS_CRT_FILENAME: "tls.crt"
  LDAP_TLS_KEY_FILENAME: "tls.key"
  LDAP_TLS_DH_PARAM_FILENAME: "dhparam.pem"
  LDAP_TLS_CA_CRT_FILENAME: "ca.crt"
  LDAP_TLS_ENFORCE: "false"
  LDAP_TLS_REQCERT: "never"
  KEEP_EXISTING_CONFIG: "false"
  LDAP_REMOVE_CONFIG_AFTER_SETUP: "true"
  LDAP_SSL_HELPER_PREFIX: "ldap"
  LDAP_TLS_VERIFY_CLIENT: "never"
  LDAP_TLS_PROTOCOL_MIN: "3.0"
  LDAP_TLS_CIPHER_SUITE: "NORMAL"
pdb:
  enabled: false
  minAvailable: 1
  maxUnavailable: ""
customFileSets: []
replication:
  enabled: true
  clusterName: "{{ traefik_domain }}"
  retry: 60
  timeout: 1
  interval: 00:00:00:10
  starttls: "critical"
  tls_reqcert: "never"
persistence:
  enabled: true
  storageClass: "longhorn"
  accessModes:
    - ReadWriteOnce
  size: 1Gi
podSecurityContext:
  enabled: true
  fsGroup: 1001
containerSecurityContext:
  enabled: false
  runAsUser: 1001
  runAsNonRoot: true

serviceAccount:
  create: true
  name: ""
volumePermissions:
  enabled: false
  image:
    registry: docker.io
    repository: bitnami/bitnami-shell
    tag: 10-debian-10
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## e.g:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
  command: {}
  resources:
    limits: {}
    requests: {}
  containerSecurityContext:
    runAsUser: 0

To Reproduce

Steps to reproduce the behavior (a command-line sketch follows the list):

  1. Fresh deploy using Helm
  2. Check the logs of the pods
  3. See the error
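
For reference, a minimal sketch of those steps on the command line; the release name (openldap), namespace (ldap), and chart name (openldap-stack-ha) are assumptions for illustration, not taken from the report.

# Add the chart repository (URL as documented in the project README) and deploy
helm repo add helm-openldap https://jp-gouin.github.io/helm-openldap/
helm repo update
helm install openldap helm-openldap/openldap-stack-ha -n ldap --create-namespace -f values.yaml

# Inspect the failing pods
kubectl -n ldap get pods
kubectl -n ldap logs openldap-0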
jp-gouin commented 1 year ago

Hi @crazyelectron-io,

Did you change the DNS configuration of your Kubernetes cluster? E.g. https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/

Your configuration suggests that this is the case, since you override clusterDomain and replication.clusterName. This means that, to resolve the service, the app will use <sts#>.<appName>.<namespace>.{{ traefik_domain }} instead of <sts#>.<appName>.<namespace>.cluster.local.

Is this what you expect?
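
A quick way to check which DNS suffix the cluster actually serves is to resolve one of the StatefulSet pods from a throwaway pod. This is only a sketch: the namespace (ldap), pod name (openldap-0), headless service name (openldap-headless), and the example.internal custom suffix are illustrative assumptions, not values from the report.

# Resolve a pod under the default cluster.local suffix
kubectl -n ldap run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup openldap-0.openldap-headless.ldap.svc.cluster.local

# Repeat with the custom suffix (the value passed as clusterDomain /
# replication.clusterName); if only cluster.local resolves, the overrides
# are what breaks the serverID/URL match in slapd.
kubectl -n ldap run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup openldap-0.openldap-headless.ldap.svc.example.internal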

hv15 commented 1 year ago

@jp-gouin thanks for pointing out replication.clusterName! Setting this to my K8s domain fixed the issue for me.
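
For readers hitting the same error, the fix can be sketched as a Helm override instead of editing values.yaml; the release name, namespace, and chart name below are assumptions, and cluster.local stands in for whatever DNS domain your cluster actually uses.

# Point the replication hostnames back at the real cluster DNS domain;
# "cluster.local" is the Kubernetes default and is only correct if the
# cluster DNS was not customized.
helm upgrade openldap helm-openldap/openldap-stack-ha -n ldap \
  -f values.yaml \
  --set clusterDomain=cluster.local \
  --set replication.clusterName=cluster.local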