ghost opened this issue 8 years ago
@efocht could you decouple the IP addressing from the container networking in your environment? In our environment we run keepalived on the hosts to provide floating IP addresses and then tcp-proxy back into the container networks. This won’t work if you need to dynamically assign IP addresses but you can proxy to different containers based on port instead of address.
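For reference, the floating-IP half of that setup looks roughly like this (a minimal keepalived sketch; the interface name, router id, and VIP are placeholders for illustration):

```
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the standby hosts
    interface eth0          # host NIC that carries the VIP
    virtual_router_id 51
    priority 100            # lower on the standby hosts
    advert_int 1
    virtual_ipaddress {
        192.168.1.100/24    # the floating IP clients connect to
    }
}
```

The tcp-proxy on whichever host currently holds the VIP then forwards each VIP port to the corresponding container network address.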
@matthanley thank you for the hint, we will consider this solution. I'm a bit reluctant to add another high availability layer, though. Swarm's capabilities would actually be sufficient.
I don't want to hijack the thread to solve my issue, just wanted to report that we also have a use case for the addition of --cap-add option to services and hope that the July 20 proposal from @thaJeztah will be implemented and pushed soon.
I don't think we'll be adding `--privileged` or `--cap-add`. See #32801 for the current way we want to solve this both for swarm and k8s. There was a talk on this at DockerCon EU... struggling to find the video at the moment.
Actually I guess the talk was at Moby Summit. These videos have not been posted yet.
I need the option `--privileged` for running jenkins in a swarm. Jenkins in my configuration needs to run docker within docker, because all builds are started in a fresh docker container.
See https://github.com/mwaeckerlin/jenkins
So for me, running docker in docker is an absolute must in order to migrate jenkins from the current single-host local docker to my new docker swarm environment.
AFAIK, running docker in docker requires `--privileged`, so for me `--privileged` is also an absolute must for docker swarm, unless you can show me another solution for running docker containers in docker services.
@mwaeckerlin I don't have the details of what you're trying to do, but I'm thinking that you may not need docker in docker; you could instead start sibling containers from docker by mounting the docker socket as a volume.
Have a read at this: https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
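The sibling approach boils down to something like this (a sketch using the image from this thread; the inner `docker` CLI ends up talking to the host's daemon):

```sh
# Jenkins gets the host's docker socket; every `docker run` issued from a
# build job creates a sibling container on the host, not a nested one.
docker run -d --name jenkins \
  -v /var/run/docker.sock:/var/run/docker.sock \
  mwaeckerlin/jenkins
```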
@jonasdddev, well, I don't really mind the «bad», «ugly» and «worse», because I have a working setup. But I'll give the solution mentioned there a try and just bind-mount `/var/run/docker.sock` into the jenkins image.
@mwaeckerlin Ditto, I don't care about it either, but I was in your situation even before Swarm, and that solution solved all my perceived problems. Just let me know whether it works for you.
Thanks.
I have the same use case as @man4j. I want to use gluster within swarm and I need `privileged=true`; is there any workaround to make it happen? I get a "setting extended attributes" error.
Hi all. I have a workaround. I create a swarm service from my image with the following entrypoint:
```sh
exec docker run --init --rm --privileged=true --cap-add=ALL ...etc
```
That's all. When service tasks are created, they run my privileged container, and when tasks are destroyed, SIGTERM kills the child privileged container thanks to the `--init` flag.
And of course we need to map `docker.sock` and install docker inside the image for the parent service.
Dockerfile:
```Dockerfile
FROM docker
RUN apk --no-cache add bash
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```
entrypoint.sh:
```sh
exec docker run --init --rm --privileged=true --cap-add=ALL ...etc
```
service-run.sh:
```sh
docker service create --name myservice --detach=false \
  --mount "type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock" myimage
```
Waiting for your thoughts!
@man4j this is the same workaround @port22 mentioned in https://github.com/moby/moby/issues/24862#issuecomment-326124626
Just mounting `/var/run/docker.sock` as a volume does not work, @jonasddev, because the group id of group `docker` is different inside the container than outside. So if the user inside the container is not root, but only a member of group `docker`, as it should be, then it has no access to the socket!
Inside the container:
```
root@docker[69686cddbd76]:/var/lib/jenkins# grep docker /etc/group
docker:x:111:jenkins
root@docker[69686cddbd76]:/var/lib/jenkins# ls -l /var/run/docker.sock
srw-rw---- 1 root 120 0 Jan 6 01:08 /var/run/docker.sock
```
Outside of the container:
```
marc@raum:~$ grep docker /etc/group
docker:x:120:marc
marc@raum:~$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 Jan 6 02:08 /var/run/docker.sock
```
Any good solution for this?
I'm facing this issue as well when trying to deploy GitLab in Docker Swarm. The package needs to set values but it can't, and the service just keeps restarting:
```
/opt/gitlab/embedded/bin/runsvdir-start: line 24: ulimit: pending signals: cannot modify limit: Operation not permitted
/opt/gitlab/embedded/bin/runsvdir-start: line 37: /proc/sys/fs/file-max: Read-only file system
```
@jonasddev, I updated my jenkins image to fix the permissions within the container. I added this to my entrypoint script:
```sh
# add user to a group that has access to /var/run/docker.sock
addgroup --gid $(stat -c '%g' /var/run/docker.sock) extdock || true
usermod -a -G $(stat -c '%g' /var/run/docker.sock) jenkins || true
```
Now the next problem with `/var/run/docker.sock`. This is the problem in a `mwaeckerlin/jenkins` container:
```
ubuntu$ docker run -d --restart unless-stopped -v /var/run/docker.sock:/var/run/docker.sock --name jenkins -p 8080:8080/tcp -p 50000:50000/tcp --volumes-from jenkins-volumes mwaeckerlin/jenkins
ubuntu$ docker exec -it -u jenkins jenkins bash
jenkins$ docker create -v /var/lib/jenkins/workspace/mrw-c++.rpm/distro/fedora-27:/workdir -v /var/lib/jenkins/.gnupg:/var/lib/jenkins/.gnupg -e LANG=en_US.UTF-8 -e HOME=/var/lib/jenkins -e TERM=xterm -e DEBIAN_FRONTEND=noninteractive -e DEBCONF_NONINTERACTIVE_SEEN=true -e BUILD_NUMBER=64 -w /workdir fedora:27 sleep infinity
cd93d11c2634b4e4094cc2541996e30a4c08c60923b8896e0bd7e439c7d9c673
jenkins$ docker start cd93d11c2634b4e4094cc2541996e30a4c08c60923b8896e0bd7e439c7d9c673
cd93d11c2634b4e4094cc2541996e30a4c08c60923b8896e0bd7e439c7d9c673
jenkins$ ls /var/lib/jenkins/workspace/mrw-c++.rpm/distro/fedora-27
AUTHORS                      COPYING                     mrw-c++.spec.in
autogen.sh                   debian                      NEWS
ax_check_qt.m4               demangle.h                  README
ax_cxx_compile_stdcxx_11.m4  dependency-graph.sh         resolve-debbuilddeps.sh
ax_init_standard_project.m4  doc                         resolve-rpmbuilddeps.sh
bootstrap.sh                 examples                    rpmsign.exp
build-in-docker.conf         INSTALL                     sql-to-dot.sed
build-in-docker.sh           mac-create-app-bundle.sh    src
build-resource-file.sh       makefile.am                 suppressions.valgrind
ChangeLog                    makefile_test.inc.am        template.sh
checkinstall.sh              mrw-c++.desktop.in          test
configure.ac                 mrw-c++-minimal.spec.in     valcheck.sh
jenkins$ docker exec -u 107 -it 033088f9008601d5f9f9034744b579910ee1f88bdf3a61d2f0354b8454ba94ed bash
docker$ ls /workdir
docker$ mount | grep /var/lib/jenkins/workspace/mrw-c++.rpm/distro/fedora-27
/dev/mapper/big-root_crypt on /workdir type btrfs (rw,relatime,space_cache,subvolid=257,subvol=/@/var/lib/jenkins/workspace/mrw-c++.rpm/distro/fedora-27)
docker$
```
So in a jenkins container (here named `jenkins`), a container (here named `docker`) is started. A directory from the jenkins container is mounted into the container-in-the-container, but there it is empty, even though the mount is visible in the `mount` command.
Any idea?
BTW: #21109
The problem is described here in a comment: http://container-solutions.com/running-docker-in-jenkins-in-docker/

> When using docker run inside the jenkins container with volumes, you are actually sharing a folder of the host, not a folder within the jenkins container. To make that folder “visible” to jenkins (otherwise it is out of your control), that location should have a parent location that matches the volume that was used to run the jenkins image itself.
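A common way to live with that constraint (a sketch, assuming the jenkins home is bind-mounted from the host under the identical path) is to make the host path and the in-container path match, so that `-v` flags passed to the inner `docker run`, which the daemon resolves on the host, point at the same files jenkins sees:

```sh
# Host path == container path, so inner -v mounts line up with the
# workspace files jenkins actually wrote.
docker run -d --name jenkins \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/jenkins:/var/lib/jenkins \
  mwaeckerlin/jenkins
```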
Of course, I want to mount from the docker container into the container-in-the-container, and not from the host.
It is even a huge security hole: the docker container must not have access to the filesystem of the host where it is running!
Any simple solution for this?
If you use `-v /var/run/docker.sock:/var/run/docker.sock` for docker in docker, you get full access to the host and you are no longer jailed within the container!
Here is a demonstration of the security issue when mounting the socket:
```
marc@jupiter:~/docker/dockindock$ echo 'this is the host' > /tmp/test
marc@jupiter:~/docker/dockindock$ docker run -d --name dockindock -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker mwaeckerlin/dockindock sleep infinity
d44fbd58e44a180e388621d39aff64652bc4118973c2cbc86a1738ffb481ebaf
marc@jupiter:~/docker/dockindock$ docker exec -it dockindock bash
root@docker[d44fbd58e44a]:/# cat /tmp/test
cat: /tmp/test: No such file or directory
root@docker[d44fbd58e44a]:/# echo 'this is the outer container' > /tmp/test
root@docker[d44fbd58e44a]:/# docker ps
CONTAINER ID        IMAGE                     COMMAND             CREATED             STATUS              PORTS               NAMES
d44fbd58e44a        mwaeckerlin/dockindock    "sleep infinity"    16 minutes ago      Up 16 minutes                           dockindock
root@docker[d44fbd58e44a]:/# docker run -it --rm -v /tmp:/tmp mwaeckerlin/ubuntu-base bash
root@docker[f197131c54c7]:/# cat /tmp/test
this is the host
```
So, the solution of using `-v /var/run/docker.sock:/var/run/docker.sock` is a security issue and an absolute no-go! @jonasddev
So either we get `--privileged` for swarm, or we need some other, better solution that solves this issue!
Here is a demonstration of how it should work; note the restricted access:
```
marc@jupiter:~/docker/dockindock$ echo 'this is the host' > /tmp/test
marc@jupiter:~/docker/dockindock$ docker run -d --name dockindock --privileged mwaeckerlin/dockindock
655c67b2a6d9f06da8bf630889710ee596f006331a036dc009c86a9e04ea0201
marc@jupiter:~/docker/dockindock$ docker exec -it dockindock bash
root@docker[655c67b2a6d9]:/# cat /tmp/test
cat: /tmp/test: No such file or directory
root@docker[655c67b2a6d9]:/# echo 'this is the outer container' > /tmp/test
root@docker[655c67b2a6d9]:/# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
root@docker[655c67b2a6d9]:/# docker run -it --rm -v /tmp:/tmp mwaeckerlin/ubuntu-base bash
Unable to find image 'mwaeckerlin/ubuntu-base:latest' locally
latest: Pulling from mwaeckerlin/ubuntu-base
1be7f2b886e8: Pull complete
6fbc4a21b806: Pull complete
c71a6f8e1378: Pull complete
4be3072e5a37: Pull complete
06c6d2f59700: Pull complete
04fca7013ee9: Pull complete
7a66494bf7fe: Pull complete
be1530d02718: Pull complete
57cb4fb92cd1: Pull complete
4170a785b84a: Pull complete
36570a7926c8: Pull complete
34218f1ce9d6: Pull complete
Digest: sha256:e9207a59d15739dec5d1b55412f15c0661383fad23f8d2914b7b688d193c0871
Status: Downloaded newer image for mwaeckerlin/ubuntu-base:latest
root@docker[bcfc1e9bc756]:/# cat /tmp/test
this is the outer container
```
As you see, there is no access to the images from outside of the outer container: `docker ps` does not show the host's containers, and the directory cannot be mounted from outside of the container; the view is limited to the outer container. This is real encapsulation; that's how it must be.
There is no solution for docker in docker in a docker swarm unless the `--privileged` option is supported in docker swarm!
The `--privileged` option is a must for having docker containers in docker containers, encapsulated without access to the whole swarm!
Please add it; it's needed for fuse!
> So, the solution of using `-v /var/run/docker.sock:/var/run/docker.sock` is a security issue and an absolute no-go!
Correct; if you're bind-mounting the socket, you're not running docker-in-docker; you're controlling the host's daemon from inside the container. In many cases this may actually be preferable, see http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
> The option `--privileged` is a must to have docker containers in docker containers, encapsulated without access to the whole swarm!
First of all, only manager nodes have access to the whole swarm; worker nodes can only control the worker node itself.
But be aware that `--privileged` is equivalent to having full root access on the host; there is no protection whatsoever. Processes inside the container can escape the container and have full access to the host (for example, have a look at `/dev` inside a privileged container, and you'll see it has access to all devices of the host).
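This is easy to verify yourself (a quick sketch with a stock image):

```sh
# An unprivileged container gets a small, virtualized /dev;
# a privileged one sees the host's full device list.
docker run --rm ubuntu ls /dev
docker run --rm --privileged ubuntu ls /dev
```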
@thaJeztah writes:
> But be aware that `--privileged` is equivalent to having full root access on the host; there is no protection whatsoever, processes inside the container can escape the container, and have full access to the host (for example, have a look at /dev inside a privileged container, and you see it has access to all devices from the host)
Then let's adapt the requirement. What would a good solution provide?
For me it is not the `--privileged` flag that is important, but real and secure docker in docker.
Possible use cases:
Currently I am running a jenkins server in a local docker container using `--privileged`, and I would like to migrate this to a swarm. Jenkins instantiates a dedicated docker container for every build, i.e. for cross-builds to windows and to any linux distribution; e.g. to build deb packages for ubuntu xenial, it runs the build in a container from `mwaeckerlin/ubuntu:xenial-amd64`.
@mwaeckerlin
We need to get nested runc to work without root privileges first. You may want to follow this issue: https://github.com/opencontainers/runc/issues/1658
I'd like to add that Kubernetes does support adding capabilities. For an example, check out https://caveofcode.com/2017/06/how-to-setup-a-vpn-connection-from-inside-a-pod-in-kubernetes/ (this is the exact same use case I have, and that I'd like to run on docker swarm).
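For comparison, the Kubernetes side of that use case looks roughly like this (a minimal sketch; the pod and image names are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vpn-client
spec:
  containers:
  - name: vpn
    image: example/openvpn-client   # placeholder image
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]          # grants just this capability, not full privilege
```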
Any news?
the news is all k8s!
+1
Gave up on this and switched to Kubernetes.
Just first off, thank you @thaJeztah and co-devs for being attentive on this. I'm sure this is a point of contention among yourselves.
Really just came here to say that I've been successful bind-mounting the docker socket and using the docker binaries installed in the container (not bind-mounting the binaries) with Jenkins in a swarm. In that specific case, I've constrained Jenkins to only run on manager nodes. Anyone doing this should make sure the permissions are set correctly, especially when working with GlusterFS; all of my Jenkins nodes run on top of GlusterFS volumes with no issues so far (it's been about 6 months).
The `--privileged` flag was something I used when prototyping, but when it wasn't available I found that just setting the correct permissions did the trick for my purposes; that's something everyone should be aware of and practicing when bind-mounting anyway. My case is different from many others, so your mileage may vary.
Damn, forgot to come back and congratulate this issue on its second birthday.
zabbix-agent needs privileged mode to monitor host resources.
Privileged mode is also needed to run systemd-enabled containers, like dogtag-ca for instance.
Trying to deploy the dell openmanage exporter and it doesn't work. I understand that `--privileged` / `--cap-add` can be a security issue, but if I want to shoot myself in the foot, I should be able to do so.
Hey guys, I've been maintaining my own poorly-patched fork of docker for a while with support for `--cap-add` and `--privileged` in `docker stack deploy`; maybe we should create a proper fork?
@manvalls I think `--cap-add` (or `--capabilities`, i.e., not merging defaults, but requiring the full set to be specified) is something that would be accepted; have you considered contributing, and opening a pull request to discuss that option?
Hi @thaJeztah, this has already been done: https://github.com/moby/moby/pull/26849 https://github.com/docker/swarmkit/pull/1565
Like many people here I've been closely monitoring these issues for a long time, until it became clear that there were only two options: switching to kubernetes or forking docker.
I'm pretty sure I'm not the only one who opted to gather all those PRs together and maintain their own docker fork, and I'm really glad I did, because I, like many others, love what you guys did with docker swarm.
If adding these features to upstream docker in a convenient way is against its principles, which really seems to be the case, it feels only natural for all those forks, which lots of people are likely already using, to unite. Maybe you guys could even help maintain it, or even own it?
@manvalls the PR you linked to was closed because that implementation did `--cap-add` / `--cap-drop`. The proposal was to have a `--capabilities` flag (which would override the defaults).
https://github.com/moby/moby/pull/26849#issuecomment-249176129:

> I think it is better long term if we switch to a `--capabilities` interface which says which you actually need, even if you might get more by default for compatibility.
If someone wants to work on that, that's a change that will likely get accepted.
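To make the difference concrete, the proposed interface would look roughly like this (hypothetical syntax; no such flag existed at the time of writing):

```sh
# Hypothetical: the service gets exactly these capabilities,
# with no implicit defaults merged in.
docker service create --name myservice \
  --capabilities CAP_CHOWN,CAP_NET_BIND_SERVICE \
  myimage
```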
We also need device, privileged, and cap_add / cap_drop. No progress here?
@pwFoo A workaround is to bind-mount `/var/run/docker.sock` and call `docker run` within service containers.
Maintainers: although I agree we should support "entitlements" in the long-term plan, since we already support bind-mounting arbitrary paths, including `docker.sock`, I don't see any security degradation in implementing `docker service create --privileged`.
@pwFoo there is that #26849 PR, which you can take as a template; just implement the requested changes to get it merged (see the comments above). That's the beauty of open source: you can do it yourself 😉
It is still possible to get that feature into 19.03 if someone just takes the step and starts implementing it.
EDIT: Looks like #26849 has been reopened today :)
I'm new to Go, so I don't think I should do it... But maybe someone with more experience could.
What's the state of this issue? (Docker version 17.12.1-ce)
needs somebody to take over https://github.com/moby/moby/pull/26849
FYI, I started implementing the capabilities feature. You can see the plan and status at https://github.com/moby/moby/issues/25885#issuecomment-447657852
Pleeeeaaaaase add this feature in :(
Pleeeeaaaaase don't send useless messages here. They just slow down the process: a lot of people get notified, and that time is taken away from the actual implementation.
My plan is to get #38380 released as part of 19.03 and then the actual Swarm-side changes in the version that comes after that (19.06, I guess). Anyone who wants to help, please test it and leave comments on the PR, so that hopefully we can stay on that schedule.
This issue has been open since 2016; is there a chance that this gets added to Docker in the next years...? This is a must-have feature for running Docker in Docker on a swarm. Please, Docker team.
Sorry, but this is very frustrating for me. I can't switch to an HA setup because of this :(
Please look at my question about the CLI-side implementation in https://github.com/moby/moby/issues/25885#issuecomment-501017588, and here you can also comment with use cases which would actually need the `--privileged` switch, or can all of those be handled by defining all needed capabilities? (nothing prevents a user from listing all of them)
Hi, I too am looking for the possibility to attach a device such as `/dev/ttyACM0` to a stack-deployed service. I created a script that runs via crontab every 5 minutes (on the host of every node). If the script sees my special device attached, it raises a node flag, which is then used as a constraint for which nodes the service is allowed on.
If, let's say, node1 fails, I can move the usb-dongle to node2 and the service is started there.
If you are interested in the cron script, look here: https://github.com/SySfRaMe/Docker_ZwaveHW_Flag
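The core of that approach is small enough to sketch here (the label name and device path are examples, and `docker node update` has to run against a manager):

```sh
#!/bin/sh
# Cron job, every 5 minutes: label this node according to dongle presence.
NODE=$(hostname)   # assumes node names match hostnames
if [ -e /dev/ttyACM0 ]; then
    docker node update --label-add usbdongle=true "$NODE"
else
    docker node update --label-rm usbdongle "$NODE"
fi
```

The service is then created with `--constraint 'node.labels.usbdongle == true'` so it follows the dongle.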
I am trying to run `ip netns exec <namespace> iptables -L` from within a swarm container, and it seems that without `--privileged` I'm unable to do so. I'm guessing it's because `/proc`, `/sys`, `cgroup` etc. are mounted read-only. Does anyone know of a workaround for this? If not, it seems my container must have `--privileged` in swarm.
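Outside of swarm, a narrower setup than `--privileged` sometimes suffices; this is a guess at the capabilities involved (`ip netns exec` remounts `/sys`, hence `SYS_ADMIN`; `iptables` wants `NET_ADMIN`/`NET_RAW`), and the image and namespace names are placeholders:

```sh
docker run --rm \
  --cap-add NET_ADMIN --cap-add NET_RAW --cap-add SYS_ADMIN \
  myimage ip netns exec mynamespace iptables -L
```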
Output of `docker version`:
Output of `docker info`:
Additional environment details (AWS, VirtualBox, physical, etc.): Ubuntu 14.04 VM under KVM running Docker engine 1.12 RC4
Steps to reproduce the issue:
Describe the results you received: I can run `docker run --privileged` to allow an NFS mount from within my container; however, there is no way to pass this `--privileged` flag to `docker service`, and if I do not pass the `--privileged` flag, the container errors internally on the mount like:
Describe the results you expected: I should be able to have my container mount an NFS server from within it. I do not want to do this externally or via a docker volume; for example, I am trying to drive a huge number of parallel containers running NFS mounts and I/O individually.
Additional information you deem important (e.g. issue happens only occasionally):
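For reference, the non-swarm invocation described above works along these lines (a sketch: `SYS_ADMIN` is the capability `mount(2)` needs, the default seccomp/AppArmor profiles may also have to be relaxed, and the server and image names are placeholders):

```sh
# Works with plain `docker run`, but cannot be expressed with
# `docker service create`, which is the point of this issue.
docker run --rm --cap-add SYS_ADMIN \
  --security-opt apparmor:unconfined \
  myimage mount -t nfs4 nfs-server:/export /mnt
```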