osixia / docker-openldap

OpenLDAP container image 🐳🌴
MIT License

Permission denied at /container/run in Kubernetes with Persistent Volumes #184

Open istvanszoke opened 6 years ago

istvanszoke commented 6 years ago

Hello,

I am trying to deploy to Kubernetes 1.8 with persistent volumes, using the following YAML:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ldap-persistent
  labels:
    app: ldap-persistent
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ldap-persistent
    spec:
      containers:
        - name: ldap
          image: osixia/openldap:1.1.11
          args: ["--loglevel", "debug"]
          volumeMounts:
            - name: ldap-data
              mountPath: /var/lib/ldap
            - name: ldap-config
              mountPath: /etc/ldap/slapd.d
            - name: ldap-certs
              mountPath: /container/service/slapd/assets/certs
          securityContext:
            runAsUser: 65534
          ports:
            - containerPort: 389
              name: openldap
          env:
            - name: LDAP_LOG_LEVEL
              value: "256"
            - name: LDAP_ORGANISATION
              value: "Random Org"
            - name: LDAP_DOMAIN
              value: "randomorg.com"
            - name: LDAP_ADMIN_PASSWORD
              value: "admin"
            - name: LDAP_CONFIG_PASSWORD
              value: "config"
            - name: LDAP_READONLY_USER
              value: "false"
            - name: LDAP_READONLY_USER_USERNAME
              value: "readonly"
            - name: LDAP_READONLY_USER_PASSWORD
              value: "readonly"
            - name: LDAP_RFC2307BIS_SCHEMA
              value: "false"
            - name: LDAP_BACKEND
              value: "hdb"
            - name: LDAP_TLS
              value: "false"
            - name: LDAP_TLS_CRT_FILENAME
              value: "ldap.crt"
            - name: LDAP_TLS_KEY_FILENAME
              value: "ldap.key"
            - name: LDAP_TLS_CA_CRT_FILENAME
              value: "ca.crt"
            - name: LDAP_TLS_ENFORCE
              value: "false"
            - name: LDAP_TLS_CIPHER_SUITE
              value: "SECURE256:+SECURE128:-VERS-TLS-ALL:+VERS-TLS1.2:-RSA:-DHE-DSS:-CAMELLIA-128-CBC:-CAMELLIA-256-CBC"
            - name: LDAP_TLS_VERIFY_CLIENT
              value: "demand"
            - name: LDAP_REPLICATION
              value: "false"
            - name: LDAP_REPLICATION_CONFIG_SYNCPROV
              value: "binddn=\"cn=admin,cn=config\" bindmethod=simple credentials=$LDAP_CONFIG_PASSWORD searchbase=\"cn=config\" type=refreshAndPersist retry=\"60 +\" timeout=1 starttls=critical"
            - name: LDAP_REPLICATION_DB_SYNCPROV
              value: "binddn=\"cn=admin,$LDAP_BASE_DN\" bindmethod=simple credentials=$LDAP_ADMIN_PASSWORD searchbase=\"$LDAP_BASE_DN\" type=refreshAndPersist interval=00:00:00:10 retry=\"60 +\" timeout=1 starttls=critical"
            - name: LDAP_REPLICATION_HOSTS
              value: "#PYTHON2BASH:['ldap://ldap-one-service', 'ldap://ldap-two-service']"
            - name: KEEP_EXISTING_CONFIG
              value: "false"
            - name: LDAP_REMOVE_CONFIG_AFTER_SETUP
              value: "true"
            - name: LDAP_SSL_HELPER_PREFIX
              value: "ldap"
      volumes:
        - name: ldap-data
          persistentVolumeClaim:
            claimName: ldap-data-pv-claim
        - name: ldap-config
          persistentVolumeClaim:
            claimName: ldap-config-pv-claim
        - name: ldap-certs
          persistentVolumeClaim:
            claimName: ldap-certs-pv-claim

I am getting these errors:

*** CONTAINER_LOG_LEVEL = 3 (info)
*** Killing all processes...
Traceback (most recent call last):
  File "/container/tool/run", line 890, in <module>
    main(args)
  File "/container/tool/run", line 775, in main
    setup_run_directories(args)
  File "/container/tool/run", line 361, in setup_run_directories
    os.makedirs(directory)
  File "/usr/lib/python2.7/os.py", line 150, in makedirs
    makedirs(head, mode)
  File "/usr/lib/python2.7/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/container/run'

If I don't specify the security context, I get different errors:

*** CONTAINER_LOG_LEVEL = 3 (info)
*** Search service in CONTAINER_SERVICE_DIR = /container/service :
*** link /container/service/:ssl-tools/startup.sh to /container/run/startup/:ssl-tools
*** link /container/service/slapd/startup.sh to /container/run/startup/slapd
*** link /container/service/slapd/process.sh to /container/run/process/slapd/run
*** Set environment for startup files
*** Environment files will be proccessed in this order : 
Caution: previously defined variables will not be overriden.
/container/environment/99-default/default.startup.yaml
/container/environment/99-default/default.yaml
 To see how this files are processed and environment variables values,
run this container with '--loglevel debug'
*** Running /container/run/startup/:ssl-tools...
*** Running /container/run/startup/slapd...
chown: changing ownership of '/var/lib/ldap': Operation not permitted
*** /container/run/startup/slapd failed with status 1
 *** Killing all processes...

This YAML worked with Kubernetes 1.7 but it no longer does with 1.8. I don't understand how I get permission denied on /container/run, since it is not on NFS.

Any comment on the issue would really help!

Thanks!

Dexyon commented 6 years ago

I remember I had the same issue and started using:

args: ["--copy-service"]
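For context, a sketch (untested) of where this flag would go in the OP's Deployment; --copy-service tells the image to copy /container/service into /container/run instead of symlinking it, which can help when those paths are volume-backed:

```yaml
# Sketch: adding --copy-service to the container args from the OP's Deployment.
containers:
  - name: ldap
    image: osixia/openldap:1.1.11
    args: ["--copy-service", "--loglevel", "debug"]
```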

istvanszoke commented 6 years ago

Thanks for the reply! Unfortunately it didn't solve my issue :(

SoberChina commented 6 years ago

Setting privileged: true in the container's securityContext is very important.
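For reference, a minimal sketch (untested) of what that suggestion would look like in the OP's Deployment; note that privileged mode grants broad host access, so prefer a narrower fix where possible:

```yaml
# Sketch: running the container privileged, per the suggestion above.
containers:
  - name: ldap
    image: osixia/openldap:1.1.11
    securityContext:
      privileged: true
```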

jeefuji commented 6 years ago

Hello everyone, I have exactly the same problem on OpenShift Origin (with Kubernetes under the hood). I suspect it's related to security measures restricting "root" execution in containers.

I'll look into it shortly :)

jeefuji commented 6 years ago

I did a little testing, and yes, in my case it was clearly related to the restricted use of root in OpenShift (and the same may be true for the OP on Kubernetes).

FYI, here is the command (for OpenShift) to run to solve this problem, by allowing root execution in containers for the specified project: oc adm policy add-scc-to-user anyuid -z default -n <yourprojectname>

I still need to find out how to do it for Kubernetes, but it's clearly related to the security-context concept...

But keep in mind that sanitizing the container itself so it avoids running as root is the better fix. Cheers
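For the Kubernetes side, one commonly suggested alternative to allowing full root (a hypothetical sketch, untested here, with the group id being an assumption to adjust for your image) is to let the entrypoint start as the image's default user and have the kubelet fix volume ownership via a pod-level fsGroup:

```yaml
# Hypothetical sketch: instead of an OpenShift SCC, use fsGroup so the
# kubelet chowns the mounted volumes to a group slapd can write as.
spec:
  securityContext:
    fsGroup: 911        # assumption: your image's openldap gid; verify and adjust
  containers:
    - name: ldap
      image: osixia/openldap:1.1.11
      # no runAsUser here: the entrypoint needs root to set up /container/run
```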

jxadro commented 6 years ago

I have the same issue on a pure k8s cluster, not only OpenShift. I guess it is related to a change in k8s behavior due to a security issue: https://github.com/coreos/bugs/issues/2384

I think this is an issue that must be solved in the ldap container.

steven166 commented 5 years ago

I'm running into the same issues as @istvanszoke, with the official Helm chart on Kubernetes 1.9.7-gke.3 with security policies enabled.

Is there any workaround to get this up and running?

aelkz commented 5 years ago

Getting the same error here. Has anyone solved this?

sri-prasanna commented 5 years ago

I am running into the same error, trying to run in OCP. Does anyone have a fix for this? I have tried the --copy-service option and it did not work.

jxadro commented 4 years ago

Hi, OKD 3.11. I could work around it with:

oc adm policy add-scc-to-user anyuid -z default -n (as said by jeefuji)

But I also had to edit the openldap deployment and use emptyDir for the volume "ldap-certs" instead of hostPath.
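A sketch of that second part of the workaround, assuming the volume name from the OP's Deployment; with emptyDir the host-path ownership checks no longer apply, at the cost of the certs being regenerated on each pod start:

```yaml
# Sketch: back the certs mount with emptyDir instead of hostPath.
volumes:
  - name: ldap-certs
    emptyDir: {}
```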

Oznup commented 4 years ago

Hello, I have exactly the same issue on a k8s cluster. Did anyone manage to make it work?

secret104278 commented 4 years ago

I encountered a similar issue today. However, my k8s cluster is created by kind, and I found the root cause is related to AppArmor; it can be solved by disabling the AppArmor profile on the host machine. Details: https://github.com/kubernetes-sigs/kind/issues/1224

patlachance commented 4 years ago

> I encounter similar issue today. However, my k8s cluster is created by kind, and I found the root cause is related to apparmor, and can be solved by disable apparmor profile on host machine. Details: kubernetes-sigs/kind#1224

Hello, disabling security features such as AppArmor or SELinux is really bad on shared Kubernetes clusters, as it breaks container isolation.

This could be solved if someone created a PR for #398.

jxadro commented 4 years ago

Apart from this: https://github.com/osixia/docker-openldap/issues/184#issuecomment-542429421

I have also faced this issue when using NFS for the persistent volumes, depending on the /etc/exports of the NFS server. I could make it work by exposing the NFS file system as:

/mnt/mcm-openldap 10.10.0.0/24(rw,sync,no_root_squash)
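The key option here is likely no_root_squash: with the default root_squash, root inside the container is mapped to an unprivileged user on the NFS server, so the startup chown of /var/lib/ldap fails. A sketch of the exports entry with that rationale spelled out (path and subnet are the ones from the comment above; adjust for your environment):

```
# /etc/exports sketch: no_root_squash stops the NFS server from mapping
# the client's root user to "nobody", so chown on the exported
# directories can succeed. Reload exports afterwards with: exportfs -ra
/mnt/mcm-openldap 10.10.0.0/24(rw,sync,no_root_squash)
```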