Closed: liuziyuan closed this issue 1 year ago.
Hi @liuziyuan, can you post the values you use? Can you also describe the services as well?
My LDAP values.yaml file is as follows:
# Default values for openldap.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# Global Docker image parameters
# Please, note that this will override the image parameters, including dependencies, configured to use the global value
# Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass
global:
  imageRegistry: ""
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  # imagePullSecrets: [""]
  ## ldapDomain, can be explicit (e.g. dc=toto,c=ca) or domain based (e.g. example.com)
  ldapDomain: "example.com"
  # Specifies an existing secret to be used for admin and config user passwords. The expected keys are LDAP_ADMIN_PASSWORD and LDAP_CONFIG_ADMIN_PASSWORD.
  # existingSecret: ""
  ## Default passwords to use, stored as a secret. Not used if existingSecret is set.
  adminPassword: Not@SecurePassw0rd
  configPassword: Not@SecurePassw0rd
  ldapPort: 389
  sslLdapPort: 636
## @section Common parameters
## @param kubeVersion Override Kubernetes version
##
kubeVersion: ""
## @param nameOverride String to partially override common.names.fullname
##
nameOverride: ""
## @param fullnameOverride String to fully override common.names.fullname
##
fullnameOverride: ""
## @param commonLabels Labels to add to all deployed objects
##
commonLabels: {}
## @param commonAnnotations Annotations to add to all deployed objects
##
commonAnnotations: {}
## @param clusterDomain Kubernetes cluster domain name
##
clusterDomain: cluster.local
## @param extraDeploy Array of extra objects to deploy with the release
##
extraDeploy: []
replicaCount: 1
image:
  # From repository https://hub.docker.com/r/bitnami/openldap/
  repository: bitnami/openldap
  tag: 2.6.3
  pullPolicy: Always
  pullSecrets: []
# Set the container log level
# Valid log levels: none, error, warning, info (default), debug, trace
logLevel: info
# Settings for enabling TLS with custom certificate
# need a secret with tls.crt, tls.key and ca.crt keys with associated files
# Ref: https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/#create-a-secret
customTLS:
  enabled: false
  image:
    repository: alpine/openssl
    tag: latest
  secret: ""  # The name of a kubernetes.io/tls type secret to use for TLS
## Add additional labels to all resources
extraLabels: {}
service:
  annotations: {}
  ## If service type NodePort, define the value here
  # ldapPortNodePort:
  # sslLdapPortNodePort:
  ## List of IP addresses at which the service is available
  ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
  ##
  externalIPs: []
  # loadBalancerIP:
  # loadBalancerSourceRanges: []
  type: ClusterIP
  sessionAffinity: None
# Default configuration for openldap as environment variables. These get injected directly in the container.
# Use the env variables from https://hub.docker.com/r/bitnami/openldap/
# Be careful, do not modify the following values unless you know exactly what you are doing
env:
  BITNAMI_DEBUG: "true"
  LDAP_LOGLEVEL: "256"
  LDAP_TLS_ENFORCE: "false"
  LDAPTLS_REQCERT: "never"
  LDAP_ENABLE_TLS: "yes"
  LDAP_CONFIG_ADMIN_ENABLED: "yes"
  LDAP_CONFIG_ADMIN_USERNAME: "admin"
  LDAP_SKIP_DEFAULT_TREE: "no"
  LDAP_EXTRA_SCHEMAS: "cosine,inetorgperson,nis"  # very important setting
# Pod Disruption Budget for Stateful Set
# Disabled by default, to ensure backwards compatibility
pdb:
  enabled: false
  minAvailable: 1
  maxUnavailable: ""
## User list to create (comma separated list), can't be used with customLdifFiles
## Default set by bitnami image
# users: user01,user02
## User passwords to create (comma separated list, one for each user)
## Default set by bitnami image
# userPasswords: bitnami1, bitnami2
## Group to create and add the users above to
## Default set by bitnami image
# group: readers
## Existing configmap with custom ldif
# Can't be used with customLdifFiles
# Same format as customLdifFiles
# customLdifCm: my-custom-cm
# Custom openldap configuration files used to override default settings
# DO NOT FORGET to put the Root Organisation object as it won't be created while using customLdifFiles
# customLdifFiles:
#   00-root.ldif: |-
#     # Root creation
#     dn: dc=example,dc=org
#     objectClass: dcObject
#     objectClass: organization
#     o: Example, Inc
#   01-default-group.ldif: |-
#     dn: cn=myGroup,dc=example,dc=org
#     cn: myGroup
#     gidnumber: 500
#     objectclass: posixGroup
#     objectclass: top
#   02-default-user.ldif: |-
#     dn: cn=Jean Dupond,dc=example,dc=org
#     cn: Jean Dupond
#     gidnumber: 500
#     givenname: Jean
#     homedirectory: /home/users/jdupond
#     objectclass: inetOrgPerson
#     objectclass: posixAccount
#     objectclass: top
#     sn: Dupond
#     uid: jdupond
#     uidnumber: 1000
#     userpassword: {MD5}KOULhzfBhPTq9k7a9XfCGw==
# Custom openldap ACLs
# If not defined, the following default ACLs are applied:
# customAcls: |-
#   dn: olcDatabase={2}mdb,cn=config
#   changetype: modify
#   replace: olcAccess
#   olcAccess: {0}to *
#     by dn.exact=gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth manage
#     by * break
#   olcAccess: {1}to attrs=userPassword,shadowLastChange
#     by self write
#     by dn="{{ include "global.bindDN" . }}" write
#     by anonymous auth
#     by * none
#   olcAccess: {2}to *
#     by dn="{{ include "global.bindDN" . }}" write
#     by self read
#     by * none
replication:
  enabled: true
  # Enter the name of your cluster, defaults to "cluster.local"
  clusterName: "cluster.local"
  retry: 60
  timeout: 1
  interval: 00:00:00:10
  starttls: "critical"
  tls_reqcert: "never"
## Persist data to a persistent volume
persistence:
  enabled: true
  ## database data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  # storageClass: "standard-singlewriter"
  # existingClaim: openldap-pvc
  accessModes:
    - ReadWriteOnce
  size: 1Gi
  storageClass: "openebs-hostpath"
## @param customLivenessProbe Custom livenessProbe that overrides the default one
##
customLivenessProbe: {}
## @param customReadinessProbe Custom readinessProbe that overrides the default one
##
customReadinessProbe: {}
## @param customStartupProbe Custom startupProbe that overrides the default one
##
customStartupProbe: {}
## OPENLDAP resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
## @param resources.limits The resources limits for the OPENLDAP containers
## @param resources.requests The requested resources for the OPENLDAP containers
##
resources:
  limits: {}
  requests: {}
## Configure Pods Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param podSecurityContext.enabled Enabled OPENLDAP pods' Security Context
## @param podSecurityContext.fsGroup Set OPENLDAP pod's Security Context fsGroup
##
podSecurityContext:
  enabled: true
  fsGroup: 1001
## Configure Container Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
## @param containerSecurityContext.enabled Enabled OPENLDAP containers' Security Context
## @param containerSecurityContext.runAsUser Set OPENLDAP containers' Security Context runAsUser
## @param containerSecurityContext.runAsNonRoot Set OPENLDAP containers' Security Context runAsNonRoot
##
containerSecurityContext:
  enabled: false
  runAsUser: 1001
  runAsNonRoot: true
## @param existingConfigmap The name of an existing ConfigMap with your custom configuration for OPENLDAP
##
existingConfigmap:
## @param command Override default container command (useful when using custom images)
##
command: []
## @param args Override default container args (useful when using custom images)
##
args: []
## @param hostAliases OPENLDAP pods host aliases
## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
##
hostAliases: []
## @param podLabels Extra labels for OPENLDAP pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## @param podAnnotations Annotations for OPENLDAP pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## @param podAffinityPreset Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAffinityPreset: ""
## @param podAntiAffinityPreset Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
##
podAntiAffinityPreset: soft
## Node affinity preset
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
##
nodeAffinityPreset:
  ## @param nodeAffinityPreset.type Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard`
  ##
  type: ""
  ## @param nodeAffinityPreset.key Node label key to match. Ignored if `affinity` is set
  ##
  key: ""
  ## @param nodeAffinityPreset.values Node label values to match. Ignored if `affinity` is set
  ## E.g.
  ## values:
  ##   - e2e-az1
  ##   - e2e-az2
  ##
  values: []
## @param affinity Affinity for OPENLDAP pods assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## NOTE: `podAffinityPreset`, `podAntiAffinityPreset`, and `nodeAffinityPreset` will be ignored when it's set
##
affinity: {}
## @param nodeSelector Node labels for OPENLDAP pods assignment
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## @param tolerations Tolerations for OPENLDAP pods assignment
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
## @param updateStrategy.type OPENLDAP statefulset strategy type
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
##
updateStrategy:
  ## StrategyType
  ## Can be set to RollingUpdate or OnDelete
  ##
  type: RollingUpdate
## @param priorityClassName OPENLDAP pods' priorityClassName
##
priorityClassName: ""
## @param schedulerName Name of the k8s scheduler (other than default) for OPENLDAP pods
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""
## @param lifecycleHooks for the OPENLDAP container(s) to automate configuration before or after startup
##
lifecycleHooks: {}
## @param extraEnvVars Array with extra environment variables to add to OPENLDAP nodes
## e.g:
## extraEnvVars:
## - name: FOO
## value: "bar"
##
extraEnvVars: []
## @param extraEnvVarsCM Name of existing ConfigMap containing extra env vars for OPENLDAP nodes
##
extraEnvVarsCM:
## @param extraEnvVarsSecret Name of existing Secret containing extra env vars for OPENLDAP nodes
##
extraEnvVarsSecret:
## @param extraVolumes Optionally specify extra list of additional volumes for the OPENLDAP pod(s)
##
extraVolumes: []
## @param extraVolumeMounts Optionally specify extra list of additional volumeMounts for the OPENLDAP container(s)
##
extraVolumeMounts: []
## @param sidecars Add additional sidecar containers to the OPENLDAP pod(s)
## e.g:
## sidecars:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## ports:
## - name: portname
## containerPort: 1234
##
sidecars: {}
## @param initContainers Add additional init containers to the OPENLDAP pod(s)
## ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
## e.g:
## initContainers:
## - name: your-image-name
## image: your-image
## imagePullPolicy: Always
## command: ['sh', '-c', 'echo "hello world"']
##
initContainers: {}
## ServiceAccount configuration
##
serviceAccount:
  ## @param serviceAccount.create Specifies whether a ServiceAccount should be created
  ##
  create: true
  ## @param serviceAccount.name The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the common.names.fullname template
  ##
  name: ""
## @section Init Container Parameters
## 'volumePermissions' init container parameters
## Changes the owner and group of the persistent volume mount point to runAsUser:fsGroup values
## based on the *podSecurityContext/*containerSecurityContext parameters
##
volumePermissions:
  ## @param volumePermissions.enabled Enable init container that changes the owner/group of the PV mount point to `runAsUser:fsGroup`
  ##
  enabled: false
  ## Bitnami Shell image
  ## ref: https://hub.docker.com/r/bitnami/bitnami-shell/tags/
  ## @param volumePermissions.image.registry Bitnami Shell image registry
  ## @param volumePermissions.image.repository Bitnami Shell image repository
  ## @param volumePermissions.image.tag Bitnami Shell image tag (immutable tags are recommended)
  ## @param volumePermissions.image.pullPolicy Bitnami Shell image pull policy
  ## @param volumePermissions.image.pullSecrets Bitnami Shell image pull secrets
  ##
  image:
    registry: docker.io
    repository: bitnami/bitnami-shell
    tag: 10-debian-10
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## e.g:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
  ## Command to execute during the volumePermission startup
  ## command: ['sh', '-c', 'echo "hello world"']
  command: {}
  ## Init container's resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ## @param volumePermissions.resources.limits The resources limits for the init container
  ## @param volumePermissions.resources.requests The requested resources for the init container
  ##
  resources:
    limits: {}
    requests: {}
  ## Init container Container Security Context
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
  ## @param volumePermissions.containerSecurityContext.runAsUser Set init container's Security Context runAsUser
  ## NOTE: when runAsUser is set to special value "auto", init container will try to chown the
  ## data folder to auto-determined user&group, using commands: `id -u`:`id -G | cut -d" " -f2`
  ## "auto" is especially useful for OpenShift which has scc with dynamic user ids (and 0 is not allowed)
  ##
  containerSecurityContext:
    runAsUser: 0
## Configure extra options for liveness, readiness, and startup probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
livenessProbe:
  enabled: true
  initialDelaySeconds: 20
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 10
readinessProbe:
  enabled: true
  initialDelaySeconds: 20
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 10
startupProbe:
  enabled: true
  initialDelaySeconds: 0
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 30
## test container details
test:
  enabled: false
  image:
    repository: dduportal/bats
    tag: 0.4.0
## ltb-passwd
ltb-passwd:
  enabled: true
  image:
    tag: 5.2.3
  ingress:
    enabled: true
    annotations: {}
    path: /
    pathType: Prefix
    ## Ingress Host
    hosts:
      - "ssl-ldap2.example"
  # ldap:
  #   # if you want to restrict the search base tree for users instead of the complete domain
  #   searchBase: "ou=....,dc=mydomain,dc=com"
  #   # if you want to use a dedicated bindDN for the search with fewer permissions instead of the cn=admin one
  #   bindDN: "cn=....,dc=mydomain,dc=com"
  #   # if you want to use a specific key of the credentials secret instead of the default one (LDAP_ADMIN_PASSWORD)
  #   passKey: LDAP_MY_KEY
## phpldapadmin
phpldapadmin:
  enabled: true
  ingressClassName: nginx-ldap
  ingress:
    enabled: true
    annotations:
      # kubernetes.io/ingress.class: nginx-ldap  # set via ingress.ingressClassName: nginx-ldap instead
      cert-manager.io/issuer: "letsencrypt-staging-issuer-ldap"
    tls:
      - hosts:
          - phpldapadmin.example.com
        secretName: openldap-cert
    path: /
    pathType: Prefix
    ## Ingress Host
    hosts:
      - phpldapadmin.example.com
And the pods in the ldap namespace:
root@master1:/home/liuzy# kg pod -n ldap
NAME READY STATUS RESTARTS AGE
ingress-nginx-ldap-controller-5fb76d8885-hr7v8 1/1 Running 1 (7m59s ago) 22h
ldap-0 1/1 Running 1 (7m52s ago) 21h
ldap-ltb-passwd-97bcd657d-qxqvw 1/1 Running 1 (7m52s ago) 21h
ldap-phpldapadmin-68dc74bc4b-5zdv8 1/1 Running 1 (7m52s ago) 21h
And the services in the ldap namespace:
root@master1:/home/liuzy# kg svc -n ldap
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-ldap-controller LoadBalancer 10.96.3.221 192.168.9.100 80:32186/TCP,443:31097/TCP 9d
ingress-nginx-ldap-controller-admission ClusterIP 10.96.2.229 <none> 443/TCP 9d
ldap ClusterIP 10.96.3.154 <none> 389/TCP,636/TCP 21h
ldap-headless ClusterIP None <none> 389/TCP 21h
ldap-ltb-passwd ClusterIP 10.96.0.183 <none> 80/TCP 21h
ldap-phpldapadmin ClusterIP 10.96.1.2 <none> 80/TCP 21h
And the description of the ldap-phpldapadmin service:
root@master1:/home/liuzy# kd svc ldap-phpldapadmin -n ldap
Name: ldap-phpldapadmin
Namespace: ldap
Labels: app=phpldapadmin
app.kubernetes.io/managed-by=Helm
chart=phpldapadmin-0.1.2
heritage=Helm
release=ldap
Annotations: meta.helm.sh/release-name: ldap
meta.helm.sh/release-namespace: ldap
Selector: app=phpldapadmin,release=ldap
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.1.2
IPs: 10.96.1.2
Port: http 80/TCP
TargetPort: http/TCP
Endpoints: 100.66.209.247:80
Session Affinity: None
Events: <none>
And the descriptions of the ingress-nginx-ldap services:
Name: ingress-nginx-ldap-controller
Namespace: ldap
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx-ldap
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.6.4
helm.sh/chart=ingress-nginx-4.5.2
Annotations: meta.helm.sh/release-name: ingress-nginx-ldap
meta.helm.sh/release-namespace: ldap
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-ldap,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.3.221
IPs: 10.96.3.221
LoadBalancer Ingress: 192.168.9.100
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 32186/TCP
Endpoints: 100.108.11.237:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31097/TCP
Endpoints: 100.108.11.237:443
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal nodeAssigned 19m (x3 over 20m) metallb-speaker announcing from node "master1" with protocol "bgp"
Normal nodeAssigned 19m (x3 over 20m) metallb-speaker announcing from node "master1" with protocol "layer2"
Normal nodeAssigned 19m (x3 over 20m) metallb-speaker announcing from node "node1" with protocol "bgp"
Normal nodeAssigned 19m metallb-speaker announcing from node "node2" with protocol "bgp"
Name: ingress-nginx-ldap-controller-admission
Namespace: ldap
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx-ldap
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.6.4
helm.sh/chart=ingress-nginx-4.5.2
Annotations: meta.helm.sh/release-name: ingress-nginx-ldap
meta.helm.sh/release-namespace: ldap
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-ldap,app.kubernetes.io/name=ingress-nginx
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.2.229
IPs: 10.96.2.229
Port: https-webhook 443/TCP
TargetPort: webhook/TCP
Endpoints: 100.108.11.237:8443
Session Affinity: None
Events: <none>
Hi @jp-gouin, I re-installed k8s so it is clean, then installed openldap, and the same issue appeared. Is it related to the latest chart version?
Hi @liuziyuan, I double-checked and it's working on my side.
You can check the CI in .github/ci.yaml.
In your example, shouldn't you use https://phpldapadmin.example.com:31097? Or is 100.108.11.237 advertised on your network?
Also, can you check who is replying with the 404? Is it your nginx?
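One way to act on that suggestion (a sketch only; the LoadBalancer IP, hostname, and deployment name below are taken from the outputs earlier in this thread and may differ in other setups):

```shell
# Query the MetalLB LoadBalancer IP of the controller directly, forcing the
# Host header the Ingress rule matches on. A 404 whose Server header says
# "nginx" means the controller itself has no matching rule for that host/path.
curl -vk -H "Host: phpldapadmin.example.com" https://192.168.9.100/

# Then compare with what the controller actually knows about:
kubectl describe ingress -n ldap
kubectl logs -n ldap deploy/ingress-nginx-ldap-controller --tail=50
```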
@jp-gouin I had the same issue when using ingress-nginx, but it works fine with traefik. To make ingress-nginx work, I configured my values.yaml with:
...
phpldapadmin:
  enabled: true
  ingressClassName: nginx
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx  # for ingress-nginx I also need this annotation, otherwise the ingress can't find the openldap service... But traefik can do it automatically.
      cert-manager.io/cluster-issuer: letsencrypt
    tls:
      - hosts:
          - phpadmin.test.io
        secretName: phpadmin-tls
    path: /
    pathType: Prefix
    hosts:
      - phpadmin.test.io
...
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@jp-gouin @xiabai84 Hi all, sorry for not responding to this issue for a long time. I now know what caused it: the exception was caused by the ingress-nginx config.
## phpldapadmin
phpldapadmin:
  enabled: true
  ingressClassName: nginx-ldap
  ingress:
    enabled: true
    annotations:
      # kubernetes.io/ingress.class: nginx-ldap  # set via ingress.ingressClassName: nginx-ldap instead
      cert-manager.io/issuer: "letsencrypt-staging-issuer-ldap"
    tls:
      - hosts:
          - phpldapadmin.example.com
        secretName: openldap-cert
    path: /
    pathType: Prefix
    ## Ingress Host
    hosts:
      - phpldapadmin.example.com
You can see the "ingressClassName: nginx-ldap" line; that is the wrong config. The right config is to uncomment the "kubernetes.io/ingress.class: nginx-ldap" annotation and comment out the "ingressClassName: nginx-ldap" line.
Thank you, everyone.
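Concretely, with that fix applied, the phpldapadmin section looks like this (the same values as above, only with the annotation enabled and the ingressClassName field commented out):

```yaml
phpldapadmin:
  enabled: true
  # ingressClassName: nginx-ldap   # wrong config in this setup; use the annotation below instead
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx-ldap
      cert-manager.io/issuer: "letsencrypt-staging-issuer-ldap"
    tls:
      - hosts:
          - phpldapadmin.example.com
        secretName: openldap-cert
    path: /
    pathType: Prefix
    hosts:
      - phpldapadmin.example.com
```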
I just deleted ldap, removed the PVC, and re-installed with Helm, but accessing the https://phpldapadmin.example.com/ page shows 404 Not Found. I didn't change any properties in the values.yaml file.
I used kubectl describe ingress -n ldap, and the ingress output looks right. So I don't know what is wrong with ldap. Please help me figure out how to debug this issue, thank you.