The minikube mount command sets the permissions to your user uid and gid by default. You should be able to use the --uid and --gid flags to minikube mount to set them to what you want them to be.
@r2d4 The --uid and --gid flags worked only in conjunction with --9p-version=9p2000.L; with the default "9p2000.u" they don't work properly. I know this because I was trying to make it work just yesterday.
It might be worth refactoring the flag behavior so users don't run into something like that.
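In other words, the workaround reported here is to pass the 9p protocol version explicitly alongside the ownership flags. A hedged sketch, again with placeholder paths and ids:

host$ minikube mount /path/on/host:/data --uid 1001 --gid 1001 --9p-version=9p2000.L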
Cue the first user running into something like that. I couldn't figure out what was going on, but I was able to mount from inside the box with the L option myself. Then I found this. Thank you for the workaround!
I'm seeing the same problem with Minikube v0.25.2 on Debian Stretch with the KVM2 driver, using the default Docker container engine. minikube mount mounts the directory as owned by 1001:1001, regardless of what is specified in the --uid and --gid options, or if they are omitted. Since this corresponds to rkt:rkt rather than docker:docker, containers are unable to write to the volume. Adding the --9p-version=9p2000.L option seems to work around the issue.
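One way to confirm which guest account those numeric ids map to is to look them up inside the VM. A quick check, assuming the stock minikube ISO (where the SSH login user is docker):

host$ minikube ssh -- id docker
host$ minikube ssh -- grep ':1001:' /etc/passwd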
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
I'm experiencing the same issue. Even if I specify --uid and --gid in the mount command, the mounted directory inside the Kubernetes VM ends up with the UID and GID from my workstation rather than the 1001 that is required by minikube. --9p-version=9p2000.L also works around the problem for me.
host$ uname -a
Darwin Atlantis.local 17.7.0 Darwin Kernel Version 17.7.0: Thu Jun 21 22:53:14 PDT 2018; root:xnu-4570.71.2~1/RELEASE_X86_64 x86_64
host$ minikube version
minikube version: v0.28.1
host$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
160.27 MB / 160.27 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
host$ eval $(minikube docker-env)
host$ docker version
Client:
 Version:       18.05.0-ce
 API version:   1.35
 Go version:    go1.9.5
 Git commit:    f150324
 Built:         Wed May 9 22:12:05 2018
 OS/Arch:       darwin/amd64
 Experimental:  true
 Orchestrator:  swarm

Server:
 Engine:
  Version:      17.12.1-ce
  API version:  1.35 (minimum version 1.12)
  Go version:   go1.9.4
  Git commit:   7390fc6
  Built:        Tue Feb 27 22:20:43 2018
  OS/Arch:      linux/amd64
  Experimental: false
host$ minikube mount ~/test:/test --uid 1001 --gid 1001 &
Mounting /Users/alex/test into /test on the minikube VM
This daemon process needs to stay alive for the mount to still be accessible...
ufs starting
host$ mkdir ~/test
host$ touch ~/test/foo
host$ minikube ssh -- touch /test/bar
touch: cannot touch '/test/bar': Permission denied
host$ minikube ssh -- ls -haltr /test
total 0
-rw-r--r-- 1 501 20 0 Jul 18 18:15 foo
host$ id -u
501
host$ id -g
20
@logicalmethods yep, nothing has been done in this regard yet. Use minikube start --9p-version=9p2000.L as a workaround.
I think you mean minikube mount /foo:/bar --9p-version=9p2000.L
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
I'm seeing the exact same issue as well: minikube v0.34.1, k8s 1.13.3, Ubuntu 16.04 LTS, kvm2 driver (built per the instructions here: https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm2-driver).
It's also maddening that I can't pass the --uid, --gid and --9p-version=9p2000.L flags via minikube start... this requires a separate minikube mount command that I have to background and then manually kill by PID, which is highly suboptimal when running short-lived minikube instances launched via Jenkins jobs. :(
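For what it's worth, here is a rough sketch of that background-and-kill dance as it might appear in a CI job script. The flag values, mount paths, and the $WORKSPACE variable are illustrative assumptions, not behavior minikube provides for you:

# sketch of a CI wrapper around the manual workaround (illustrative only)
minikube start
# background the mount and remember its PID so it can be cleaned up later
minikube mount "$WORKSPACE":/workspace --uid 1001 --gid 1001 --9p-version=9p2000.L &
MOUNT_PID=$!
# ... run the short-lived workload against the cluster here ...
# tear down: stop the mount daemon, then the cluster itself
kill "$MOUNT_PID"
minikube delete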
Is this a BUG REPORT or FEATURE REQUEST?: BUG REPORT
Environment:
Minikube version: v0.24.1
What happened: minikube mount sets ownership of mounted files to uid 501, gid 20
What you expected to happen: minikube mount sets ownership of mounted files to uid 1001, gid 1001
How to reproduce it (as minimally and precisely as possible):
minikube mount /Users/username:/var/mnt/username
minikube ssh 'ls -ld /var/mnt/username'
drwxrwxr-x 1 501 20 1836 Dec 8 07:35 /var/mnt/username
Anything else we need to know: The automounted /Users path in minikube is correctly owned by docker:docker, and under that path I have not experienced any issues. Unfortunately, using the default /Users path forces PodSpecs with baked-in usernames for volume mounts. Using minikube mount lets us avoid coupling deployments to specific user directories, but the permissions appear to be causing IO errors: when setting up use cases like live code reloading, some of our Node projects fail their npm installs with EIO errors. When using the /Users path instead of a minikube mount path, those npm installs complete without issue.