Open davidfrickert opened 2 weeks ago
First of all, thank you for your great contribution!
Can you add a test in .github/workflows/ci-others.yaml to verify that in read-only mode you can read but not write? Let me know if you need help.
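Such a step could assert both directions. A minimal sketch of what it might run (the endpoint and credentials are assumptions borrowed from the ldapsearch example later in this thread, to be adapted to the chart's CI setup):

```shell
# hypothetical CI step: reads must succeed, writes must be rejected (err=53)
HOST=ldaps://localhost:8636        # assumed read-only service endpoint
BIND='cn=admin,dc=example,dc=org'
PASS='Not@SecurePassw0rd'

# a read against the read-only replica should succeed
LDAPTLS_REQCERT=never ldapsearch -o nettimeout=20 -x -D "$BIND" -w "$PASS" \
  -H "$HOST" -b 'dc=example,dc=org' > /dev/null

# a write should be rejected by the read-only replica
printf '%s\n' \
  'dn: ou=ci-write-test,dc=example,dc=org' \
  'objectClass: organizationalUnit' \
  'ou: ci-write-test' \
| LDAPTLS_REQCERT=never ldapadd -x -D "$BIND" -w "$PASS" -H "$HOST" \
  && { echo "ERROR: write succeeded on read-only replica" >&2; exit 1; } \
  || echo "write rejected as expected"
```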
No worries. Great suggestion, will do.
I tried it and found out that if you are using olcReadOnly: TRUE you are locking the base and replication can't work. Did you have the same issue?
If I want to still have replication working, I think it's best to use a strict access control list such as:
readonly.ldif: |
dn: olcDatabase={2}mdb,cn=config
changetype: modify
add: olcAccess
olcAccess: to *
by dn.exact="{{ include "global.bindDN" . }}" write
by * read
Using this I ensure that only the admin user is allowed to write; others can only read.
Replication is working fine in my case, but I will check if I forgot to commit something. From the docs that I read, olcReadOnly should not break replication.
If I leave the default
readonly.ldif: |
dn: olcDatabase={2}mdb,cn=config
olcReadonly: TRUE
I can see the following logs:
openldap-readonly-0 openldap-stack-ha 6679713c.015537dd 0x7fe06d9896c0 conn=1011 op=1 ADD dn="olcDatabase={2}mdb,cn=config"
openldap-readonly-0 openldap-stack-ha 6679713c.0156fc39 0x7fe06d9896c0 is_entry_objectclass("olcDatabase={2}mdb,cn=config", "2.5.17.0") no objectClass attribute
openldap-readonly-0 openldap-stack-ha 6679713c.0157cab7 0x7fe06d9896c0 No objectClass for entry (olcDatabase={2}mdb,cn=config)
openldap-readonly-0 openldap-stack-ha 6679713c.0158bbfe 0x7fe06d9896c0 conn=1011 op=1 RESULT tag=105 err=65 qtime=0.000015 etime=0.000271 text=no objectClass attribute
openldap-readonly-0 openldap-stack-ha ldap_add: Object class violation (65)
openldap-readonly-0 openldap-stack-ha additional info: no objectClass attribute
OpenLDAP complains about the LDIF not containing the objectClass attribute. If I change the LDIF to
readonly.ldif: |
dn: olcDatabase={2}mdb,cn=config
changeType: modify
replace: olcReadOnly
olcReadOnly: TRUE
It's correctly added, but the replication seems broken:
openldap-readonly-0 openldap-stack-ha 66797204.02542172 0x7f07c23d66c0 conn=1013 op=1 ADD dn="dc=example,dc=org"
openldap-readonly-0 openldap-stack-ha 66797204.0254e49d 0x7f07c23d66c0 conn=1013 op=1 RESULT tag=105 err=53 qtime=0.000009 etime=0.000075 text=operation restricted
openldap-readonly-0 openldap-stack-ha ldap_add: Server is unwilling to perform (53)
openldap-readonly-0 openldap-stack-ha additional info: operation restricted
openldap-readonly-0 openldap-stack-ha 66797204.0256a1b9 0x7f07c2bd76c0 conn=1013 op=2 UNBIND
openldap-readonly-0 openldap-stack-ha adding new entry "dc=example,dc=org"
What is your output when you run
LDAPTLS_REQCERT=never ldapsearch -o nettimeout=20 -x -D 'cn=admin,dc=example,dc=org' -w Not@SecurePassw0rd -H ldaps://localhost:8636 -b 'dc=example,dc=org'
(given ldaps://localhost:8636 is your read-only server)?
Sure, here:
dfrickert@VD011936:~$ sudo LDAPTLS_REQCERT=hard LDAPTLS_CACERT=/etc/ssl/certs/TestLDAP_CA.pem ldapsearch -x -H ldaps://127.0.0.1.sslip.io:636 -D 'cn=admin,dc=example,dc=com' -W -b "dc=example,dc=com"
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base <dc=example,dc=com> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# example.com
dn: dc=example,dc=com
objectClass: dcObject
objectClass: organization
dc: example
o: example
# users, example.com
dn: ou=users,dc=example,dc=com
objectClass: organizationalUnit
ou: users
# user01, users, example.com
dn: cn=user01,ou=users,dc=example,dc=com
cn: User1
cn: user01
sn: Bar1
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
userPassword:: Yml0bmFtaTE=
uid: user01
uidNumber: 1000
gidNumber: 1000
homeDirectory: /home/user01
# user02, users, example.com
dn: cn=user02,ou=users,dc=example,dc=com
cn: User2
cn: user02
sn: Bar2
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
userPassword:: Yml0bmFtaTI=
uid: user02
uidNumber: 1001
gidNumber: 1001
homeDirectory: /home/user02
# readers, users, example.com
dn: cn=readers,ou=users,dc=example,dc=com
cn: readers
objectClass: groupOfNames
member: cn=user01,ou=users,dc=example,dc=com
member: cn=user02,ou=users,dc=example,dc=com
# testuser, users, example.com
dn: uid=testuser,ou=users,dc=example,dc=com
uid: testuser
objectClass: inetOrgPerson
objectClass: organizationalPerson
sn:: IA==
cn:: IA==
# search result
search: 2
result: 0 Success
# numResponses: 7
# numEntries: 6
This query is going to the openldap-readonly service, and the logs of the readonly pod confirm that:
66797c87.0c7bad70 0x7fa0bedeb6c0 conn=61360 op=1 SRCH base="dc=example,dc=com" scope=2 deref=0 filter="(objectClass=*)"
66797c87.0c916931 0x7fa0bedeb6c0 conn=61360 op=1 SEARCH RESULT tag=101 err=0 qtime=0.000084 etime=0.001529 nentries=6 text=
I also have no replication errors and have created multiple users (such as testuser in the query above).
kubectl get svc -n keycloak-iam
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
postgresql-ha-postgresql-headless ClusterIP None <none> 5432/TCP 4d4h
postgresql-ha-pgpool ClusterIP 10.43.159.24 <none> 5432/TCP 4d4h
postgresql-ha-postgresql ClusterIP 10.43.107.84 <none> 5432/TCP 4d4h
keycloak-operator ClusterIP 10.43.73.14 <none> 80/TCP 4d3h
keycloak-idp-discovery ClusterIP None <none> 7800/TCP 4d3h
keycloak-idp-service ClusterIP 10.43.74.135 <none> 8080/TCP,8443/TCP 4d3h
openldap-headless-readonly ClusterIP None <none> 389/TCP,636/TCP 3d1h
openldap-headless ClusterIP None <none> 389/TCP,636/TCP 3d1h
openldap ClusterIP 10.43.169.243 <none> 389/TCP,636/TCP 3d1h
openldap-phpldapadmin ClusterIP 10.43.176.98 <none> 80/TCP 3d1h
openldap-readonly LoadBalancer 10.43.242.91 127.0.0.1 636:30144/TCP 3d1h
openldap-readonly-0 openldap-stack-ha additional info: no objectClass attribute
I had this issue as well; in LDAP_EXTRA_SCHEMAS, readonly needs to be the last one in this env var, which is what the chart should currently do.
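For what it's worth, that ordering constraint can be checked mechanically. A small sketch (the value shown is the one the readonly statefulset sets in this PR):

```shell
# the readonly statefulset sets this env var; "readonly" must come last
LDAP_EXTRA_SCHEMAS="cosine,inetorgperson,nis,brep,readonly"
last=${LDAP_EXTRA_SCHEMAS##*,}   # strip everything up to the final comma
if [ "$last" = "readonly" ]; then
  echo "ok: readonly is the last schema"
else
  echo "error: readonly must be last, got '$last'" >&2
  exit 1
fi
```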
I will try to write a GitHub test to see if it leads to your issue as well, but my values are as follows:
customAcls: |-
dn: olcDatabase={2}mdb,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to *
by dn.exact=gidNumber=0+uidNumber=1001,cn=peercred,cn=external,cn=auth manage
by * break
olcAccess: {1}to attrs=userPassword,shadowLastChange
by self write
by anonymous auth by * none
olcAccess: {2}to *
by self read
by * search
env:
BITNAMI_DEBUG: "true"
LDAP_ALLOW_ANON_BINDING: "no"
LDAP_ENABLE_TLS: "yes"
LDAP_LOGLEVEL: "256"
LDAP_SKIP_DEFAULT_TREE: "no"
LDAP_TLS_ENFORCE: "false"
LDAPTLS_REQCERT: never
global:
existingSecret: openldap-secrets
imageRegistry: VD011936.example.com:5000
ldapDomain: example.com
image:
repository: <private-repo>/jpgouin/openldap
tag: 2.6.7-fix
initSchema:
image:
repository: <private-repo>/debian
tag: latest
initTLSSecret:
image:
repository: <private-repo>/alpine/openssl
tag: latest
secret: openldap-tls-secret
tls_enabled: true
ltb-passwd:
enabled: false
persistence:
accessModes:
- ReadWriteOnce
enabled: true
size: 1Gi
phpldapadmin:
enabled: true
image:
repository: <private-repo>/osixia/phpldapadmin
tag: 0.9.0
ingress:
enabled: true
hosts:
- phpldapadmin.127.0.0.1.sslip.io
path: /
pathType: Prefix
tls:
- hosts:
- phpldapadmin.127.0.0.1.sslip.io
secretName: phpldapadmin-tls-secret
replicaCount: 3
readOnlyReplicaCount: 1
replication:
clusterName: cluster.local
enabled: true
interval: "00:00:00:10"
retry: 60
starttls: critical
timeout: 1
tls_reqcert: never
Also, might or might not be relevant: I noticed that there is an issue with ACLs in cluster mode, so I run:
kubectl exec -n {{ iam_namespace }} openldap-0 -- bash -c "/opt/bitnami/openldap/bin/ldapmodify -Y EXTERNAL -H ldapi:/// -f /opt/bitnami/openldap/etc/schema/acls.ldif"
manually, once the main statefulset cluster is healthy.
If you can give me permissions to run workflows ad-hoc that would be nice! @jp-gouin (not fully sure how that works)
@davidfrickert manual approval is required for first-time contributors
Anyway, I recommend you use act to test your workflow locally before committing. This will save you quite some time.
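For reference, an act invocation for this workflow might look like the following (assuming act and Docker are installed locally; the event name is an assumption):

```shell
# run the workflow locally with act (https://github.com/nektos/act)
act pull_request -W .github/workflows/ci-others.yaml
```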
Just checked this and you're right: the pod stays healthy after crashing once, but the read-only setting is not applied. I'm checking alternatives and will try your suggestion.
EDIT: I don't think the olcAccess way works, as this gets replicated onto the rest of the cluster.
Okay, the RO replica with the ACL actually seems to work. But it seems it can't be fully read-only, as the admin account can still write to it. Also, if any ACLs are applied to the main cluster, they are synced to the replica and the read-only capabilities are lost, which is unfortunate.
Another update: I finally found out how to make it truly read-only. The read-only replica cannot have olcMirrorMode set to TRUE, and it needs to have olcUpdateref set. Then it will reject all write requests. Will modify the PR.
Any ideas on how to stop olcMirrorMode/olcMultiProvider from being replicated from the master replicas to the read-only replica @jp-gouin? What I have right now locally to make it work is to run manually (olcMultiProvider is the new name of olcMirrorMode):
dn: olcDatabase={2}mdb,cn=config
changetype: modify
delete: olcMultiProvider

dn: olcDatabase={0}config,cn=config
changetype: modify
delete: olcMultiProvider
But I'd prefer if this attribute were not replicated so I don't have to run manual steps. I experimented with adding exattrs=olcMirrorMode,olcMultiProvider to olcSyncrepl, but no luck.
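Until a cleaner option exists, the manual step could be scripted along these lines (the pod name, namespace, and ldapmodify path are taken from this environment and are assumptions elsewhere):

```shell
# apply the olcMultiProvider delete directly on the read-only pod via ldapi://
kubectl exec -n keycloak-iam openldap-readonly-0 -- bash -c '
printf "%s\n" \
  "dn: olcDatabase={2}mdb,cn=config" \
  "changetype: modify" \
  "delete: olcMultiProvider" \
  "" \
  "dn: olcDatabase={0}config,cn=config" \
  "changetype: modify" \
  "delete: olcMultiProvider" \
| /opt/bitnami/openldap/bin/ldapmodify -Y EXTERNAL -H ldapi:///'
```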
How about changing the type=refreshAndPersist to type=refreshOnly for the read-only replica? Can't try it now, you can give it a shot.
will try, thanks
I think this might help a bit with another issue I was facing: deleting olcMirrorMode from the "read only replica" would also delete it from the main cluster. But unfortunately it does not stop this attribute from being synced from the main cluster to the RO replica, thus still requiring the manual delete of the attribute from the replica's databases.
The latest commits should make the tests work for readonly (tested locally), although I have now changed the default replication to refreshOnly, so I should perhaps work on a new commit to split that up such that only the readonly replica uses refreshOnly mode. I still don't like that a manual exec is needed to remove mirror mode, but at the moment I can't see a cleaner solution.
edit: well, putting normal nodes in refreshAndPersist and the read-only one in refreshOnly doesn't work, as that is also synced, so eventually the replica also goes into refreshAndPersist mode.
So I got pretty decent results with the following changes:
The master cluster is not aware of the readonly nodes; the master nodes keep doing the replication as expected.
The readonly nodes are aware of the master cluster and do the base replication. This is done by applying only the brep.ldif file to all readonly nodes, with no change to the _helpers.tpl from the main branch.
The cn=config is not replicated from master to readonly, so additional schemas should be applied on the first run, or applied manually to the other readonly nodes.
Finally, readonly nodes only allow writes from the admin and deny all others using the acl of configmap-readonly.yaml.
configmap-readonly.yaml:
{{- if (gt (.Values.readOnlyReplicaCount | int) 0) }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "openldap.fullname" . }}-readonly
labels:
app: {{ template "openldap.name" . }}
chart: {{ template "openldap.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
data:
readonly.ldif: |
dn: olcDatabase={2}mdb,cn=config
changetype: modify
add: olcAccess
olcAccess: to *
by dn.exact="{{ include "global.bindDN" . }}" write
by * read
{{- end }}
_helpers.tpl from the main branch:
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "openldap.name" -}}
{{- default .Release.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "openldap.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Release.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "openldap.chart" -}}
{{- printf "%s-%s" .Release.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create the name of the service account to use
*/}}
{{- define "openldap.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
{{ default (printf "%s-foo" (include "common.names.fullname" .)) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
{{/*
Generate chart secret name
*/}}
{{- define "openldap.secretName" -}}
{{ default (include "openldap.fullname" .) .Values.global.existingSecret }}
{{- end -}}
{{/*
Generate olcServerID list
*/}}
{{- define "olcServerIDs" }}
{{- $name := (include "openldap.fullname" .) }}
{{- $namespace := .Release.Namespace }}
{{- $cluster := .Values.replication.clusterName }}
{{- $nodeCount := .Values.replicaCount | int }}
{{- range $index0 := until $nodeCount }}
{{- $index1 := $index0 | add1 }}
olcServerID: {{ $index1 }} ldap://{{ $name }}-{{ $index0 }}.{{ $name }}-headless.{{ $namespace }}.svc.{{ $cluster }}:1389
{{- end -}}
{{- end -}}
{{/*
Generate olcSyncRepl list
*/}}
{{- define "olcSyncRepls" -}}
{{- $name := (include "openldap.fullname" .) }}
{{- $namespace := .Release.Namespace }}
{{- $bindDNUser := .Values.global.adminUser }}
{{- $cluster := .Values.replication.clusterName }}
{{- $configPassword := ternary .Values.global.configPassword "%%CONFIG_PASSWORD%%" (empty .Values.global.existingSecret) }}
{{- $retry := .Values.replication.retry }}
{{- $timeout := .Values.replication.timeout }}
{{- $starttls := .Values.replication.starttls }}
{{- $tls_reqcert := .Values.replication.tls_reqcert }}
{{- $nodeCount := .Values.replicaCount | int }}
{{- range $index0 := until $nodeCount }}
{{- $index1 := $index0 | add1 }}
olcSyncRepl: rid=00{{ $index1 }} provider=ldap://{{ $name }}-{{ $index0 }}.{{ $name }}-headless.{{ $namespace }}.svc.{{ $cluster }}:1389 binddn="cn={{ $bindDNUser }},cn=config" bindmethod=simple credentials={{ $configPassword }} searchbase="cn=config" type=refreshAndPersist retry="{{ $retry }} +" timeout={{ $timeout }} starttls={{ $starttls }} tls_reqcert={{ $tls_reqcert }}
{{- end -}}
{{- end -}}
{{/*
Generate olcSyncRepl list
*/}}
{{- define "olcSyncRepls2" -}}
{{- $name := (include "openldap.fullname" .) }}
{{- $domain := (include "global.baseDomain" .) }}
{{- $bindDNUser := .Values.global.adminUser }}
{{- $namespace := .Release.Namespace }}
{{- $cluster := .Values.replication.clusterName }}
{{- $adminPassword := ternary .Values.global.adminPassword "%%ADMIN_PASSWORD%%" (empty .Values.global.existingSecret) }}
{{- $retry := .Values.replication.retry }}
{{- $timeout := .Values.replication.timeout }}
{{- $starttls := .Values.replication.starttls }}
{{- $tls_reqcert := .Values.replication.tls_reqcert }}
{{- $interval := .Values.replication.interval }}
{{- $nodeCount := .Values.replicaCount | int }}
{{- range $index0 := until $nodeCount }}
{{- $index1 := $index0 | add1 }}
olcSyncrepl:
rid=10{{ $index1 }}
provider=ldap://{{ $name }}-{{ $index0 }}.{{ $name }}-headless.{{ $namespace }}.svc.{{ $cluster }}:1389
binddn={{ printf "cn=%s,%s" $bindDNUser $domain }}
bindmethod=simple
credentials={{ $adminPassword }}
searchbase={{ $domain }}
type=refreshAndPersist
interval={{ $interval }}
network-timeout=0
retry="{{ $retry }} +"
timeout={{ $timeout }}
starttls={{ $starttls }}
tls_reqcert={{ $tls_reqcert }}
{{- end -}}
{{- end -}}
{{/*
Renders a value that contains template.
Usage:
{{ include "openldap.tplValue" ( dict "value" .Values.path.to.the.Value "context" $) }}
*/}}
{{- define "openldap.tplValue" -}}
{{- if typeIs "string" .value }}
{{- tpl .value .context }}
{{- else }}
{{- tpl (.value | toYaml) .context }}
{{- end }}
{{- end -}}
{{/*
Return the proper Openldap image name
*/}}
{{- define "openldap.image" -}}
{{- include "common.images.image" (dict "imageRoot" .Values.image "global" .Values.global) -}}
{{- end -}}
{{/*
Return the proper Docker Image Registry Secret Names
*/}}
{{- define "openldap.imagePullSecrets" -}}
{{ include "common.images.pullSecrets" (dict "images" (list .Values.image ) "global" .Values.global) }}
{{- end -}}
{{/*
Return the proper Openldap init container image name
*/}}
{{- define "openldap.initTLSSecretImage" -}}
{{- include "common.images.image" (dict "imageRoot" .Values.initTLSSecret.image "global" .Values.global) -}}
{{- end -}}
{{/*
Return the proper Openldap init container image name
*/}}
{{- define "openldap.initSchemaImage" -}}
{{- include "common.images.image" (dict "imageRoot" .Values.initSchema.image "global" .Values.global) -}}
{{- end -}}
{{/*
Return the proper Openldap volume permissions init container image name
*/}}
{{- define "openldap.volumePermissionsImage" -}}
{{- include "common.images.image" (dict "imageRoot" .Values.volumePermissions.image "global" .Values.global) -}}
{{- end -}}
{{/*
Return the list of builtin schema files to mount
Cannot return list => return string comma separated
*/}}
{{- define "openldap.builtinSchemaFiles" -}}
{{- $schemas := "" -}}
{{- if .Values.replication.enabled -}}
{{- $schemas = "syncprov,serverid,csyncprov,rep,bsyncprov,brep,acls" -}}
{{- else -}}
{{- $schemas = "acls" -}}
{{- end -}}
{{- print $schemas -}}
{{- end -}}
{{/*
Return the list of custom schema files to use
Cannot return list => return string comma separated
*/}}
{{- define "openldap.customSchemaFiles" -}}
{{- $schemas := "" -}}
{{- $schemas := ((join "," (.Values.customSchemaFiles | keys | sortAlpha)) | replace ".ldif" "") -}}
{{- print $schemas -}}
{{- end -}}
{{/*
Return the list of all schema files to use
Cannot return list => return string comma separated
*/}}
{{- define "openldap.schemaFiles" -}}
{{- $schemas := (include "openldap.builtinSchemaFiles" .) -}}
{{- $custom_schemas := (include "openldap.customSchemaFiles" .) -}}
{{- if gt (len $custom_schemas) 0 -}}
{{- $schemas = print $schemas "," $custom_schemas -}}
{{- end -}}
{{- print $schemas -}}
{{- end -}}
{{/*
Return the proper base domain
*/}}
{{- define "global.baseDomain" -}}
{{- $bd := include "tmp.baseDomain" .}}
{{- printf "%s" $bd | trimSuffix "," -}}
{{- end }}
{{/*
tmp method to iterate through the ldapDomain
*/}}
{{- define "tmp.baseDomain" -}}
{{- if regexMatch ".*=.*" .Values.global.ldapDomain }}
{{- printf "%s" .Values.global.ldapDomain }}
{{- else }}
{{- $parts := split "." .Values.global.ldapDomain }}
{{- range $index, $part := $parts }}
{{- $index1 := $index | add 1 -}}
dc={{ $part }},
{{- end}}
{{- end -}}
{{- end -}}
{{/*
Return the server name
*/}}
{{- define "global.server" -}}
{{- printf "%s.%s" .Release.Name .Release.Namespace -}}
{{- end -}}
{{/*
Return the admin bindDN
*/}}
{{- define "global.bindDN" -}}
{{- printf "cn=%s,%s" .Values.global.adminUser (include "global.baseDomain" .) -}}
{{- end -}}
{{/*
Return the ldaps port
*/}}
{{- define "global.ldapsPort" -}}
{{- printf "%d" .Values.global.sslLdapPort -}}
{{- end -}}
{{/*
Return the ldap port
*/}}
{{- define "global.ldapPort" -}}
{{- printf "%d" .Values.global.ldapPort -}}
{{- end -}}
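As an aside, the domain handling in tmp.baseDomain above is equivalent to this shell sketch (illustrative only, not part of the chart):

```shell
# map a dotted domain to a base DN, passing through values that are already DNs
domain="example.com"
case "$domain" in
  *=*) base_dn="$domain" ;;                              # already a DN, e.g. dc=foo,dc=bar
  *)   base_dn="dc=$(printf '%s' "$domain" | sed 's/\./,dc=/g')" ;;
esac
echo "$base_dn"   # -> dc=example,dc=com
```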
statefulset-readonly.yaml:
{{- if (gt (.Values.readOnlyReplicaCount | int) 0) }}
apiVersion: {{ include "common.capabilities.statefulset.apiVersion" . }}
kind: StatefulSet
metadata:
name: {{ template "openldap.fullname" . }}-readonly
labels: {{- include "common.labels.standard" . | nindent 4 }}
app.kubernetes.io/component: {{ template "openldap.fullname" . }}-readonly
chart: {{ template "openldap.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- if .Values.commonLabels }}
{{- include "common.tplvalues.render" ( dict "value" .Values.commonLabels "context" $ ) | nindent 4 }}
{{- end }}
{{- if .Values.commonAnnotations }}
annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }}
{{- end }}
spec:
replicas: {{ .Values.readOnlyReplicaCount }}
selector:
matchLabels: {{ include "common.labels.matchLabels" . | nindent 6 }}
app.kubernetes.io/component: {{ template "openldap.fullname" . }}-readonly
serviceName: {{ template "openldap.fullname" . }}-headless-readonly
{{- if .Values.updateStrategy }}
updateStrategy:
{{ toYaml .Values.updateStrategy | nindent 4 }}
{{- end }}
template:
metadata:
annotations:
{{- if .Values.podAnnotations }}
{{- include "common.tplvalues.render" (dict "value" .Values.podAnnotations "context" $) | nindent 8 }}
{{- end }}
checksum/configmap-env: {{ include (print $.Template.BasePath "/configmap-env.yaml") . | sha256sum }}
{{- if .Values.customLdifFiles}}
checksum/configmap-customldif: {{ include (print $.Template.BasePath "/configmap-customldif.yaml") . | sha256sum }}
{{- end }}
labels: {{- include "common.labels.standard" . | nindent 8 }}
app.kubernetes.io/component: {{ template "openldap.fullname" . }}-readonly
release: {{ .Release.Name }}
{{- if .Values.podLabels }}
{{- include "common.tplvalues.render" (dict "value" .Values.podLabels "context" $) | nindent 8 }}
{{- end }}
spec:
initContainers:
- name: init-schema
image: {{ include "openldap.initSchemaImage" . }}
imagePullPolicy: {{ .Values.initSchema.image.pullPolicy | quote }}
command:
- sh
- -c
- |
cp -p -f /cm-schemas-acls/brep.ldif /custom_config/
echo "let the replication takes care of everything :)"
{{- if .Values.global.existingSecret }}
sed -i -e "s/%%CONFIG_PASSWORD%%/${LDAP_CONFIG_ADMIN_PASSWORD}/g" /custom_config/*
sed -i -e "s/%%ADMIN_PASSWORD%%/${LDAP_ADMIN_PASSWORD}/g" /custom_config/*
{{- end }}
{{- if .Values.containerSecurityContext.enabled }}
securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 12 }}
{{- end }}
{{- if .Values.initTLSSecret.resources }}
resources: {{- toYaml .Values.initTLSSecret.resources | nindent 12 }}
{{- end }}
volumeMounts:
{{- if .Values.customSchemaFiles }}
{{- range $file := (include "openldap.customSchemaFiles" . | split ",") }}
- name: cm-custom-schema-files
mountPath: /cm-schemas/{{ $file }}.ldif
subPath: {{ $file }}.ldif
{{- end }}
- name: custom-schema-files
mountPath: /custom-schemas/
{{- end }}
{{- if or (.Values.customLdifFiles) (.Values.customLdifCm) }}
- name: cm-custom-ldif-files
mountPath: /cm-ldifs/
- name: custom-ldif-files
mountPath: /custom-ldifs/
{{- end }}
- name: cm-replication-acls
mountPath: /cm-schemas-acls
- name: replication-acls
mountPath: /custom_config
{{- if .Values.global.existingSecret }}
envFrom:
- secretRef:
name: {{ template "openldap.secretName" . }}
{{- end }}
{{- if .Values.initContainers }}
{{- include "common.tplvalues.render" (dict "value" .Values.initContainers "context" $) | nindent 8 }}
{{- end }}
- name: init-tls-secret
image: {{ include "openldap.initTLSSecretImage" . }}
imagePullPolicy: {{ .Values.initTLSSecret.image.pullPolicy | quote }}
command:
- sh
- -c
- |
{{- if and .Values.initTLSSecret.tls_enabled .Values.initTLSSecret.secret }}
{{- else }}
openssl req -x509 -newkey rsa:4096 -nodes -subj '/CN={{ .Values.global.ldapDomain }}' -keyout /tmp-certs/tls.key -out /tmp-certs/tls.crt -days 365
chmod 777 /tmp-certs/*
{{- end }}
cp -Lr /tmp-certs/* /certs
[ -e /certs/ca.crt ] || cp -a /certs/tls.crt /certs/ca.crt
{{- if .Values.containerSecurityContext.enabled }}
securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 12 }}
{{- end }}
{{- if .Values.initTLSSecret.resources }}
resources: {{- toYaml .Values.initTLSSecret.resources | nindent 12 }}
{{- end }}
volumeMounts:
- name: certs
mountPath: "/certs"
- name: secret-certs
mountPath: "/tmp-certs"
{{- if .Values.volumePermissions.enabled }}
- name: volume-permissions
image: {{ include "openldap.volumePermissionsImage" . }}
imagePullPolicy: {{ .Values.volumePermissions.image.pullPolicy | quote }}
command: {{- include "common.tplvalues.render" (dict "value" .Values.volumePermissions.image.command "context" $) | nindent 12 }}
{{- if .Values.containerSecurityContext.enabled }}
securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 12 }}
{{- end }}
{{- if .Values.volumePermissions.resources }}
resources: {{- toYaml .Values.volumePermissions.resources | nindent 12 }}
{{- end }}
volumeMounts:
- mountPath: /bitnami
name: data
{{- end }}
serviceAccountName: {{ template "openldap.serviceAccountName" . }}
{{- include "openldap.imagePullSecrets" . | nindent 6 }}
{{- if .Values.hostAliases }}
hostAliases: {{- include "common.tplvalues.render" (dict "value" .Values.hostAliases "context" $) | nindent 8 }}
{{- end }}
{{- if .Values.affinity }}
affinity: {{- include "common.tplvalues.render" ( dict "value" .Values.affinity "context" $) | nindent 8 }}
{{- else }}
affinity:
podAffinity: {{- include "common.affinities.pods" (dict "type" .Values.podAffinityPreset "component" "openldap-readonly" "context" $) | nindent 10 }}
podAntiAffinity: {{- include "common.affinities.pods" (dict "type" .Values.podAntiAffinityPreset "component" "openldap-readonly" "context" $) | nindent 10 }}
nodeAffinity: {{- include "common.affinities.nodes" (dict "type" .Values.nodeAffinityPreset.type "key" .Values.nodeAffinityPreset.key "values" .Values.nodeAffinityPreset.values) | nindent 10 }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector: {{- include "common.tplvalues.render" ( dict "value" .Values.nodeSelector "context" $) | nindent 8 }}
{{- end }}
{{- if .Values.schedulerName }}
schedulerName: {{- .Values.schedulerName | quote }}
{{- end }}
{{- if .Values.podSecurityContext.enabled }}
securityContext: {{- omit .Values.podSecurityContext "enabled" | toYaml | nindent 8 }}
{{- end }}
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName | quote }}
{{- end }}
{{- if .Values.tolerations }}
tolerations: {{- include "common.tplvalues.render" (dict "value" .Values.tolerations "context" $) | nindent 8 }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
image: {{ include "openldap.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- if .Values.containerSecurityContext.enabled }}
securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 12 }}
{{- end }}
env:
- name: LDAP_EXTRA_SCHEMAS
value: {{ print "cosine,inetorgperson,nis,brep,readonly" }}
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
{{- if .Values.extraEnvVars }}
{{- include "common.tplvalues.render" (dict "value" .Values.extraEnvVars "context" $) | nindent 12 }}
{{- end }}
envFrom:
{{- if .Values.extraEnvVarsCM }}
- configMapRef:
name: {{ include "common.tplvalues.render" (dict "value" .Values.extraEnvVarsCM "context" $) }}
{{- end }}
- configMapRef:
name: {{ template "openldap.fullname" . }}-env
{{- if .Values.extraEnvVarsSecret }}
- secretRef:
name: {{ include "common.tplvalues.render" (dict "value" .Values.extraEnvVarsSecret "context" $) }}
{{- end }}
- secretRef:
name: {{ template "openldap.secretName" . }}
{{- if .Values.resources }}
resources: {{- toYaml .Values.resources | nindent 12 }}
{{- end }}
ports:
- name: ldap-port
containerPort: 1389
- name: ssl-ldap-port
containerPort: 1636
{{- if .Values.livenessProbe.enabled }}
livenessProbe:
tcpSocket:
port: ldap-port
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
successThreshold: {{ .Values.livenessProbe.successThreshold }}
failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
{{- end }}
{{- if .Values.readinessProbe.enabled }}
readinessProbe:
tcpSocket:
port: ldap-port
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
successThreshold: {{ .Values.readinessProbe.successThreshold }}
failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
{{- end }}
{{- if .Values.startupProbe.enabled }}
startupProbe:
tcpSocket:
port: ldap-port
initialDelaySeconds: {{ .Values.startupProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.startupProbe.periodSeconds }}
timeoutSeconds: {{ .Values.startupProbe.timeoutSeconds }}
successThreshold: {{ .Values.startupProbe.successThreshold }}
failureThreshold: {{ .Values.startupProbe.failureThreshold }}
{{- else if .Values.customStartupProbe }}
startupProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customStartupProbe "context" $) | nindent 12 }}
{{- end }}
{{- if .Values.lifecycleHooks }}
lifecycle: {{- include "common.tplvalues.render" (dict "value" .Values.lifecycleHooks "context" $) | nindent 12 }}
{{- end }}
volumeMounts:
- name: data
mountPath: /bitnami/openldap/
- name: certs
mountPath: /opt/bitnami/openldap/certs
- name: replication-acls
mountPath: /opt/bitnami/openldap/etc/schema/brep.ldif
subPath: brep.ldif
- name: readonly-ldif
mountPath: /opt/bitnami/openldap/etc/schema/readonly.ldif
subPath: readonly.ldif
{{- if .Values.customSchemaFiles}}
{{- range $file := (include "openldap.customSchemaFiles" . | split ",") }}
- name: custom-schema-files
mountPath: /opt/bitnami/openldap/etc/schema/{{ $file }}.ldif
subPath: {{ $file }}.ldif
{{- end }}
{{- end }}
{{- if or (.Values.customLdifFiles) (.Values.customLdifCm) }}
- name: custom-ldif-files
mountPath: /ldifs/
{{- end }}
{{- range .Values.customFileSets }}
{{- $fs := . }}
{{- range .files }}
- name: {{ $fs.name }}
mountPath: {{ $fs.targetPath }}/{{ .filename }}
subPath: {{ .filename }}
{{- end }}
{{- end }}
{{- if .Values.extraVolumeMounts }}
{{- include "common.tplvalues.render" (dict "value" .Values.extraVolumeMounts "context" $) | nindent 12 }}
{{- end }}
{{- if .Values.sidecars }}
{{- include "common.tplvalues.render" ( dict "value" .Values.sidecars "context" $) | nindent 8 }}
{{- end }}
volumes:
{{- if .Values.persistence.enabled }}
{{- if .Values.persistence.existingClaim }}
- name: data
persistentVolumeClaim:
claimName: {{ .Values.persistence.existingClaim }}
{{- end }}
{{- end }}
- name: cm-replication-acls
configMap:
name: {{ template "openldap.fullname" . }}-replication-acls
- name: replication-acls
emptyDir:
medium: Memory
{{- if .Values.customLdifFiles }}
- name: cm-custom-ldif-files
configMap:
name: {{ template "openldap.fullname" . }}-customldif
- name: custom-ldif-files
emptyDir:
medium: Memory
{{- else if .Values.customLdifCm }}
- name: cm-custom-ldif-files
configMap:
name: {{ .Values.customLdifCm }}
- name: custom-ldif-files
emptyDir:
medium: Memory
{{- end }}
{{- if .Values.customSchemaFiles }}
- name: cm-custom-schema-files
configMap:
name: {{ template "openldap.fullname" . }}-customschema
- name: custom-schema-files
emptyDir:
medium: Memory
{{- end }}
- name: readonly-ldif
configMap:
name: {{ template "openldap.fullname" . }}-readonly
- name: certs
emptyDir:
medium: Memory
{{- if .Values.initTLSSecret.tls_enabled }}
- name: secret-certs
secret:
secretName: {{ .Values.initTLSSecret.secret }}
{{- else }}
- name: secret-certs
emptyDir:
medium: Memory
{{- end }}
{{- range .Values.customFileSets }}
- name: {{ .name }}
configMap:
name: {{ template "openldap.fullname" $ }}-fs-{{ .name }}
{{- end }}
{{- if .Values.extraVolumes }}
{{- include "common.tplvalues.render" (dict "value" .Values.extraVolumes "context" $) | nindent 8 }}
{{- end }}
{{- if and (not .Values.persistence.existingClaim) .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: data
annotations:
{{- range $key, $value := .Values.persistence.annotations }}
{{ $key }}: {{ $value }}
{{- end }}
spec:
accessModes:
{{- range .Values.persistence.accessModes }}
- {{ . | quote }}
{{- end }}
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
{{- if .Values.persistence.storageClass }}
{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- else if (not .Values.persistence.enabled) }}
- name: data
emptyDir: {}
{{- end }}
{{- end }}
What this PR does / why we need it:
Necessary changes to support creating a statefulset comprising read-only replicas.
Pre-submission checklist:
[x] Did you explain what problem does this PR solve? Or what new features have been added?
[ ] Have you updated the readme?
[x] Is this PR backward compatible? If it is not backward compatible, please open a ticket to discuss first
Still need to reduce duplication; as of right now there is huge duplication in the statefulset, services, etc.