mingfang opened this issue 11 years ago
If you chown the volume (on the host side) before bind-mounting it, it will work. In that case, you could do:
mkdir /tmp/www
chown 101:101 /tmp/www
docker run -v /tmp/www:/var/www ubuntu stat -c "%U %G" /var/www
(Assuming that 101:101 is the UID:GID of the www-data user in your container.)
Another possibility is to do the bind-mount, then chown inside the container.
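A minimal sketch of that in-container chown approach, assuming the ubuntu image's www-data user is the one that needs write access:
docker run -v /tmp/www:/var/www ubuntu \
  sh -c 'chown -R www-data:www-data /var/www && stat -c "%U %G" /var/www'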
@mingfang Will chown not work for you?
It would be useful to have a shortcut for this. I often find myself writing run scripts that just set the permissions on a volume:
https://github.com/orchardup/docker-redis/blob/07b65befbd69d9118e6c089e8616d48fe76232fd/run
What if you don't have the rights to chown it?
Would a helper script that chowns the volume solve this problem? This script can be the ENTRYPOINT of your Dockerfile.
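A sketch of how such a helper might be wired up (the fix-perms.sh name and the paths are hypothetical; the script would chown the volume and then exec "$@"):
COPY fix-perms.sh /usr/local/bin/fix-perms.sh
ENTRYPOINT ["/usr/local/bin/fix-perms.sh"]
CMD ["redis-server"]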
Can I say no? Forcing users to add a helper script that does
#!/bin/sh
chown -R redis:redis /var/lib/redis
exec sudo -u redis /usr/bin/redis-server
(thanks @bfirsh for your example)
is pretty terrible.
It means that the container has to be started as root, rather than running as the intended redis user (as @aldanor alluded to). And it means a user can't do something like:
docker run -v /home/user/.app_cfg/ -u user application_container application
:(
There is one way to make it work, but you need to prepare ahead of time inside your Dockerfile.
RUN mkdir -p /var/lib/redis ; chown -R redis:redis /var/lib/redis
VOLUME ["/var/lib/redis"]
ENTRYPOINT ["/usr/bin/redis-server"]
USER redis
(I didn't test this example, I'm working on a chromium container that then displays on a separate X11 container that .... )
And of course that method only works for direct new volumes, not bind mounted or volumes-from volumes. ;)
Additionally, multiple containers using volumes-from will have different uid/gid for the same user, which complicates stuff as well.
@SvenDowideit @tianon that method doesn't work either. Full example:
FROM ubuntu
RUN groupadd -r redis -g 433 && \
useradd -u 431 -r -g redis -d /app -s /sbin/nologin -c "Docker image user" redis
RUN mkdir -p /var/lib/redis
RUN echo "thing" > /var/lib/redis/thing.txt
RUN chown -R redis:redis /var/lib/redis
VOLUME ["/var/lib/redis"]
USER redis
CMD /bin/ls -lah /var/lib/redis
Two runs, with and without a -v volume:
bash-3.2$ docker run -v `pwd`:/var/lib/redis voltest
total 8.0K
drwxr-xr-x 1 root root 102 Aug 7 21:30 .
drwxr-xr-x 28 root root 4.0K Aug 7 21:26 ..
-rw-r--r-- 1 root root 312 Aug 7 21:30 Dockerfile
bash-3.2$ docker run voltest
total 12K
drwxr-xr-x 2 redis redis 4.0K Aug 7 21:30 .
drwxr-xr-x 28 root root 4.0K Aug 7 21:26 ..
-rw-r--r-- 1 redis redis 6 Aug 7 21:26 thing.txt
bash-3.2$
We're hitting an issue that would be solved by this (I think). We have an NFS share for our developers' home directories. Developers want to mount /home/dev/git/project into Docker but cannot because we have root squash enabled. This forbids root from accessing /home/dev/git/project, so when I try to run docker mounting /home/dev/git/project I get an lstat permission denied error.
@frankamp This is because Docker's current preference is to not modify host things which are not within Docker's own control. Your VOLUME definition is being overwritten by your -v `pwd`:/var/lib/redis.
But in your 2nd run, it is using a docker controlled volume, which is created in /var/lib/docker. When the container starts, docker is copying the data from the image into the volume, then chowning the volume with the uid:gid of the dir the volume was specified for.
I'm not sure there is much that can be done here, and unfortunately bind mounts do not support (as far as I can tell) mounting as a different uid/gid.
My solution to this was to do what SvenDowideit did above (create a new user and chown up front in the Dockerfile), but then instead of mounting the host volume, use a data-only container, and copy the host volume I wanted to mount into the container with tar cf - . | docker run -i --volumes-from app_data app tar xvf - -C /data. This will become a tad easier once https://github.com/docker/docker/pull/13171 is merged (and docker cp works both ways), but perhaps it could become an alternative to -v host_dir:container_dir, i.e. maybe -vc host_dir:container_dir (vc for volume-copy), wherein the host_dir's contents would get copied into the data container. Though I can't say I understand why/how the copied files inherit the container user's permissions, from what I can tell they do, and this is the only reasonable solution I've managed to come up with that doesn't destroy portability.
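A rough sketch of that data-only-container approach (the app image, the app_data container name, and /data follow the example above; creating the data container first is the implied extra step):
docker run --name app_data -v /data app true
tar cf - . | docker run -i --rm --volumes-from app_data app tar xf - -C /data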
What about ACLs?
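For the bind-mount case, host-side POSIX ACLs might look something like this (a sketch; 101 stands in for the container user's uid as in the earlier example, and the host filesystem must support ACLs):
setfacl -m u:101:rwX /tmp/www       # grant uid 101 access to the directory
setfacl -d -m u:101:rwX /tmp/www    # default entry so newly created files inherit the grant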
Is there any fix or workaround? I ran into the same issue with OpenShift: the mounted folder is owned by root:root and pre-created images won't work.
I'm looking for a workaround too. If all mounted volumes are owned by root, it makes it impossible to run your Docker containers with any user other than root.
Well, you can try s6-overlay. It includes features which are specifically targeted at helping to work around these kinds of problems.
@dreamcat4: Thanks for the pointer. The ownership & permissions fixing seems like an interesting workaround, but wouldn't I have to run my Docker container as root for that to work?
@brikis98 Yes that is true. However s6-overlay also has yet another feature, which allows you to drop the permissions back again when launching your servers / daemons.
@dreamcat4 Ah, gotcha, thanks.
I have the same uid/gid inside and outside of a container and this is what I get:
nonroot$ ls -l .dotfiles/
ls: cannot access .dotfiles/byobu: Permission denied
ls: cannot access .dotfiles/config: Permission denied
ls: cannot access .dotfiles/docker: Permission denied
ls: cannot access .dotfiles/vim: Permission denied
ls: cannot access .dotfiles/bashrc: Permission denied
ls: cannot access .dotfiles/muse.yml: Permission denied
ls: cannot access .dotfiles/my.cnf: Permission denied
ls: cannot access .dotfiles/profile: Permission denied
total 0
-????????? ? ? ? ? ? bashrc
d????????? ? ? ? ? ? byobu
d????????? ? ? ? ? ? config
d????????? ? ? ? ? ? docker
-????????? ? ? ? ? ? muse.yml
-????????? ? ? ? ? ? my.cnf
-????????? ? ? ? ? ? profile
d????????? ? ? ? ? ? vim
nonroot$ ls -l .ssh
ls: cannot access .ssh/authorized_keys: Permission denied
total 0
-????????? ? ? ? ? ? authorized_keys
nonroot$
@darkermatter could you please open a separate issue?
not a problem, but is this not relevant here?
@darkermatter this is a feature request, not a bug report; mixing your case with other cases makes it difficult to follow the discussion, and your issue may not be directly related
@thaJeztah well, as @frankamp and others have done, I was simply demonstrating what happens after running chmod, etc. inside the Dockerfile. I will file it as a bug report, but it is relevant to this discussion.
Similar to what @ebuchman proposed, without copying a host volume, you could create a data-only container first that runs chown 1000:1000 /volume-mount as root when it starts.
E.g. in docker-compose v2 syntax:
version: '2'
services:
  my-beautiful-service:
    ...
    depends_on:
      - data-container
    volumes_from:
      - data-container
  data-container:
    image: same_base_OS_as_my-beautiful-service
    volumes:
      - /volume-mount
    command: "chown 1000:1000 /volume-mount"
This way your container can run as a non-root user. The data-only container only runs once. This assumes you know beforehand the uid and gid that my-beautiful-service uses; it is usually 1000:1000.
Being that you can (in 1.11) specify mount options for a volume to use in your docker volume create, I'd say this seems pretty close to being ready to close. You can't just specify uid/gid directly because this is not supported with bind mounts, but many filesystems that you can use with the new mount opts can work with uid/gid opts.
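For example, a tmpfs-backed volume created through the local driver can be given uid/gid mount options (a sketch; the volume name and the ids are arbitrary):
docker volume create --name appdata --opt type=tmpfs --opt device=tmpfs --opt o=uid=1000,gid=1000
docker run -v appdata:/data busybox ls -ldn /data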
I think the issue still stands in cases where you want to mount a CIFS drive inside your container, however maybe that should be another ticket?
@michaeljs1990 You can do this, just not per-container (unless you create separate volumes for each uid/gid combo you want).
@cpuguy83, could you please clarify how one must use docker volume create to avoid this issue?
I just ran into this issue today with docker 1.11 and had to do some painful rejiggering to convince the docker image to let me write to files on a mounted drive. It would be really nice if I never need to do that again, let alone try to explain it to someone else.
Not sure if this is what you are asking but...
FROM busybox
RUN mkdir /hello && echo hello > /hello/world && chown -R 1000:1000 /hello
Build the above image, naming it "test":
$ docker volume create --name hello
$ docker run -v hello:/hello test ls -lh /hello
Both /hello and /hello/world in the above example would be owned by 1000:1000.
I see. So, I did something similar but a little different, which may make it worth sharing. Basically, I added a user to the Dockerfile that shares the UID, GID, username, and group of the user outside the container. All <...> are things replaced by relevant values.
FROM <some_image>
RUN groupadd -g <my_gid> <my_group> && \
useradd -u <my_uid> -g <my_gid> <my_user>
After this, one can either switch using USER or use su at some later point (e.g. in an entrypoint script or when using a shell). This lets me write to the mounted volume as I am the same user that created it. One could additionally use chown inside the container to make sure one has permissions on the relevant things. Also, installing sudo is generally a smart move when doing this too.
While it solves the problem, I don't know that I love it, as this would need to be done for any user. Also, I hard-coded stuff (yuck!), but maybe templates could be used to make this a bit smoother. I wonder if this shim could be absorbed into docker run somehow. If there is a better way to do this already, I'd be very interested to know what it is.
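One way to avoid the hard-coding might be build arguments (a sketch; the UID/GID/UNAME names are arbitrary, and <some_image> is the placeholder from above):
FROM <some_image>
ARG UID=1000
ARG GID=1000
ARG UNAME=devuser
RUN groupadd -g $GID $UNAME && useradd -u $UID -g $GID $UNAME
built with something like docker build --build-arg UID=$(id -u) --build-arg GID=$(id -g) --build-arg UNAME=$(id -un) ., then switching with USER or su as described above.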
There is an option to map host uids/gids to container uids/gids with --userns-remap. Personally I haven't tried it. See a good discussion of this topic at http://stackoverflow.com/questions/35291520/docker-and-userns-remap-how-to-manage-volume-permissions-to-share-data-betwee.
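For reference, userns-remap is a daemon-wide setting rather than per-container, with the subordinate id ranges taken from /etc/subuid and /etc/subgid. A sketch of enabling it (exact invocation depends on the Docker version):
dockerd --userns-remap=default
# or, on older versions: docker daemon --userns-remap=default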
@cpuguy83:
You can't just specify uid/gid directly because this is not supported with bind mounts, but many filesystems that you can use with the new mount opts can work with uid/gid opts.
What filesystems are you thinking of that can accept uid/gid arguments? I know FAT can, but that feels just as hacky as anything else being proposed in this thread.
IMO, Docker has two options:
the USER directive (and associated runtime flags).
Being able to run as a non-root user while only being able to mount volumes owned by root is a misfeature. The sharing of uid/gid between host and container is another misfeature.
@mehaase volumes take the ownership of whatever is already at the path in the container. If the location in the container is owned by root, then the volume will get root. If the location in the container is owned by something else, the volume will get that.
Some sort of workaround for this would be great. Unless the container specifically expects it, it makes it very hard to add volumes to standard containers like elasticsearch, redis, couchDB, and many others without writing a custom Dockerfile that sets the permissions. This mostly makes the docker run -v command or the volumes: directive in docker-compose useless.
@chrisfosterelli why useless? I do not think it is out of the ordinary to set ownerships of files/dirs you expect to use.
@cpuguy83 Because it does not appear to be possible to set the ownership without using a custom Dockerfile that sets permissions and volumes, which is why think they are not useful for defining volumes. I'm not binding containers to my host filesystem, if that's relevant.
@chrisfosterelli But all these standard Dockerfiles should have the permissions already set.
I think what @chrisfosterelli is trying to say, @cpuguy83 (and please correct me if I am wrong @chrisfosterelli), is that it has become clear that these variables (UID, GID, etc.) are dynamic and need to be set at run-time (particularly w.r.t. files owned internally and from mounted volumes), but we lack a way to do that currently. The response thus far seems to be that they shouldn't be run-time determined, but that ignores the fundamental usability problem presented by such a suggestion. Again, if I am misunderstanding any of this, please feel free to correct me.
@jakirkham I must not be understanding what the usability problem is. The files are in the image, they should have the ownership and permissions required for the application to run. It has nothing to do with the volume itself. The volume just takes on what was set in the image.
@cpuguy83 I did a bit more digging and isolated it to this: say I have an elasticsearch container that will create a directory /data when starting up (if no data is present), and I then use docker run -v /data elasticsearch. The directory /data becomes owned by root:root, and the daemon that runs as elasticsearch inside the container will now fail to start because it cannot write to /data.
It'd be ideal if I could set this volume to be owned by elasticsearch without needing a custom Dockerfile... although I guess you could argue this sort of issue should be resolved in the upstream image.
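A sketch of that upstream-style fix, assuming the image defines an elasticsearch user and that /data is the path in question:
FROM elasticsearch
RUN mkdir -p /data && chown -R elasticsearch:elasticsearch /data
VOLUME ["/data"]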
@chrisfosterelli There is some talk on the kernel mailing lists of having an overlay-like driver that can change ownership, but there is not much we can do without something like that. I am curious: can you just make all the files in your volume world-readable and writable, and set umasks appropriately so new files are too? (I haven't tried yet.)
@justincormack I believe so, but I think that doesn't work when I'm expecting the container to create the data in the volume (rather than the host). I understand this is kind of a weird issue, so I am currently addressing it by fixing it in the upstream Dockerfile itself to mkdir -p && chmod the directory.
@chrisfosterelli That's why I said set the umask: if your umask is 000 (in the container), all new files will be created with 666 or 777 permissions, and if the mount point is 777 to start with, that should be ok? If the permissions are always world read and write, uid and gid should not matter?
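A sketch of that umask idea in an entrypoint script, paired with a chmod 777 on the mount point in the Dockerfile (paths and names are illustrative):
#!/bin/sh
# entrypoint.sh: permissive umask so files created at runtime are world-readable/writable
umask 000
exec "$@"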
@justincormack Yes that sounds correct... how can I do that while creating a docker container with a non-host-mounted volume?
@chrisfosterelli Hmm, that's a good question. It looks to me like the permissions on a new volume are what the default umask would give, so you could try running the docker daemon with a 000 umask and see if the volume is then world-writeable. Maybe we should have some permissions options on docker volume create.
(You could fix it up with a root container that did chmod and exited too, but that's ugly.)
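For completeness, that one-off fix-up might look like this (a sketch; the volume name is arbitrary):
docker volume create --name appdata
docker run --rm -v appdata:/data busybox chmod 777 /data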
On create is no good. The issue is if the container doesn't have the path, the path gets created with root. This could arguably be done as whatever the passed in user is.
@cpuguy83 I think it would make more sense to create it as the user passed in with -u, since that would probably be the user trying to write the volume from inside the container anyway, right?
I was able to mount as the user of my choice using the below steps:
Use case: mount a volume from the host to the container for use by apache as the www user. The problem is that currently all mounts are mounted as root inside the container. For example, the command docker run -v /tmp:/var/www ubuntu stat -c "%U %G" /var/www will print "root root".
I need to mount it as user www inside the container.