docker / for-mac

Bug reports for Docker Desktop for Mac
https://www.docker.com/products/docker#/mac

Persistent Volume mounted with wrong permissions #2494

markxnelson opened this issue 6 years ago (Open)

markxnelson commented 6 years ago

Expected behavior

I have a persistent volume defined that points to a directory on my host (macOS 10.13.2 running Docker Version 18.01.0-ce-mac48 (22004), ee2282129d, Kubernetes: v1.8.2), a persistent volume claim, and a pod with a container whose volume mount points to that PVC. I expect the mount to show up in my container as a directory /shared owned by root with 777 permissions, which is what happens when using Kubernetes outside of Docker.

Actual behavior

The directory appears as owned by root but with 755 permissions, meaning a non-root user in the container cannot write to it.

Information

This appears to be because Docker with embedded Kubernetes is using a bind mount. I also tried adding the SYS_ADMIN cap to the container, which I understand is needed to allow non-root users to access bind mounts, but it made no difference.
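For what it's worth, one way to confirm that the capability actually landed inside the container is to read the effective capability mask (capsh comes from the libcap tools and is not in the stock debian image, so the decode step is optional):

    # run inside the container
    $ grep CapEff /proc/self/status
    $ capsh --decode=<CapEff value>   # cap_sys_admin should be listed if the cap was added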

Steps to reproduce the behavior

  1. install docker and enable kubernetes
  2. create a directory on the mac to be shared as the persistent volume storage, e.g. sudo mkdir -m 777 -p /scratch/k8s_dir/persistentVolume001
  3. define a persistent volume using this location:
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv001
    spec:
      storageClassName: domain1
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      hostPath:
        path: "/scratch/k8s_dir/persistentVolume001"
  4. define a persistent volume claim:
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pv001-claim
    spec:
      storageClassName: domain1
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
  5. start a pod with a container that mounts this PVC:
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: my-pod
    spec:
      template:
        metadata:
          labels:
            app: my-pod
        spec:
          restartPolicy: Always
          containers:
            - name: my-pod
              image: debian
              # you can also add a securityContext with the SYS_ADMIN cap - I get the same behavior with or without this...
              securityContext:
                capabilities:
                  add: ["SYS_ADMIN"]
              imagePullPolicy: IfNotPresent
              ports:
                - containerPort: 7001
              volumeMounts:
                - mountPath: /shared
                  name: pv-storage
          volumes:
            - name: pv-storage
              persistentVolumeClaim:
                claimName: pv001-claim
  6. jump into the container and check the permissions on /shared (see the command sketch after this list). On Docker for Mac I see this:

      $ ls -al | grep shared
      drwxr-xr-x   2 root   root     40 Jan 23 18:55 shared

      On Kubernetes (outside Docker) I see this:

      $ ls -al | grep shared
      drwxrwxrwx   2 root   root     40 Jan 23 18:55 shared
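A condensed way to run steps 3-6, assuming the three manifests above are saved as pv.yaml, pvc.yaml and deployment.yaml (the file names are just placeholders):

    # apply the PV, PVC and Deployment, then check /shared from inside the pod
    $ kubectl apply -f pv.yaml -f pvc.yaml -f deployment.yaml
    $ kubectl get pods -l app=my-pod
    $ kubectl exec -it <pod-name> -- ls -ld /shared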

FYI, docker inspect shows:

 "HostConfig": {
            "Binds": [
                "/scratch/k8s_dir/persistentVolume001:/shared"
...
 "CapAdd": [
                "SYS_ADMIN"
            ],
...
  "Mounts": [
            {
                "Type": "bind",
                "Source": "/scratch/k8s_dir/persistentVolume001",
                "Destination": "/shared",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
...
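For reference, the excerpt above can be reproduced from the macOS side with something like this (the name filter is an assumption based on the pod name used above):

    # find the container backing the pod, then dump its mounts
    $ docker ps --filter "name=my-pod" --format "{{.ID}}  {{.Names}}"
    $ docker inspect --format '{{json .Mounts}}' <container-id>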
markxnelson commented 6 years ago

OK, I did some more research and found that /scratch is reserved by Docker, so you are not allowed to share a directory in that path. When I moved my PV under my home directory, to /Users/marnelso/scratch/k8s_dir/persistentVolume001, it worked as expected. I am leaving this issue open since it would be nice if this restriction were added to the documentation. Please feel free to close it if you prefer.
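For anyone else hitting this, the only change needed in the PV from the original report is the hostPath. A minimal sketch using the path mentioned above (any location in Docker Desktop's file sharing list, which includes /Users by default, should behave the same):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv001
    spec:
      storageClassName: domain1
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      hostPath:
        # moved from /scratch/... to a directory that is shared with the Docker Desktop VM
        path: "/Users/marnelso/scratch/k8s_dir/persistentVolume001"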

docker-robott commented 6 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with a /remove-lifecycle stale comment. Stale issues will be closed after an additional 30d of inactivity.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so.

Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows. /lifecycle stale

djs55 commented 6 years ago

Thanks for your report! I think documentation is a good idea.

/remove-lifecycle stale /lifecycle frozen

sniffer72 commented 6 years ago

The solution I found for this was simply to add the following under the containers section of your deployment YAML; putting it higher than the container level did not solve the write permissions for me.

    securityContext:
      runAsUser: 0
      fsGroup: 0

I did test it with other, non-root users, and putting it at the container level did work. Note that runAsUser and fsGroup take numeric IDs rather than user names; 1000 below is just a stand-in for the jenkins user's UID/GID:

    securityContext:
      runAsUser: 1000   # numeric UID (1000 assumed for the jenkins user)
      fsGroup: 1000     # numeric GID (same assumption)
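For reference, a sketch of where these settings land in the Deployment from this issue; note that fsGroup is only accepted in the pod-level securityContext, runAsUser can be set at either the pod or the container level, and both fields take numeric IDs:

    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: my-pod
    spec:
      template:
        metadata:
          labels:
            app: my-pod
        spec:
          securityContext:
            fsGroup: 0            # pod-level field
          containers:
            - name: my-pod
              image: debian
              securityContext:
                runAsUser: 0      # container-level field
              volumeMounts:
                - mountPath: /shared
                  name: pv-storage
          volumes:
            - name: pv-storage
              persistentVolumeClaim:
                claimName: pv001-claim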

Justin

yxxhero commented 5 years ago

@sniffer72 thanks for your answer!!!

karandaid commented 2 years ago

that works for me!! thanks @sniffer72