patrickceg closed this issue 6 years ago
Cool! Using Docker with/for CI is definitely an interesting topic.
I'd recommend reading https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ and perhaps refining the approach. I think the journey from docker-in-docker-based-CI to docker-next-to-docker-based-CI would be a very interesting one to hear about!
Ok, I'll fumble around putting said fumbling on Github and report back when I have something that isn't so hacky!
Binding the Docker socket into the "CI" container is definitely the de facto way to go.
For example, the Drone CI tool uses a docker run command like:
sudo docker run \
--volume /var/lib/drone:/var/lib/drone \
--volume /var/run/docker.sock:/var/run/docker.sock \
--env-file /etc/drone/dronerc \
--restart=always \
--publish=80:8000 \
--detach=true \
--name=drone \
drone/drone:0.4
Then your CI container only needs a docker client to create other containers, albeit sibling containers.
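To make the sibling-container point concrete, here is a minimal sketch of the same technique outside of Drone. The image name my-ci-image is a placeholder of mine, not something from this thread:

```shell
# Sketch: a CI container that only ships the docker CLI and reuses the
# host daemon through the mounted socket (my-ci-image is a placeholder).
docker run --rm -it \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  my-ci-image sh

# Any "docker run" issued inside that container talks to the *host*
# daemon, so the new container is a sibling of the CI container,
# not a child nested inside it:
docker run --rm alpine echo "hello from a sibling container"
```

Note that because the siblings share the host daemon, bind-mount paths in those inner docker run commands are resolved on the host, not inside the CI container.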
I was cleaning up my Github page and noticed this issue still kicking around... I have switched over to exposing the Docker socket as mentioned, so I'll close this.
I can talk about / provide some code for this seemingly odd construct I recently conjured:
Linux server with Docker Engine installed that has Jenkins build slaves running inside containers. The Jenkins build slave containers are running Docker engine as well to allow builds that create and run containers. ...so it's Docker inside a Docker container that Jenkins can use to build Docker images :)
I had some issues getting this contraption to work (get Jenkins to talk to the container, launching dockerd inside a container, getting Jenkins to be able to use "sudo docker run" inside the container), so it may be interesting.
(see https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds if you are unfamiliar with what a Jenkins slave is)
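The "Docker engine inside the slave container" part hinges on the slave running privileged so dockerd can start inside it. A rough sketch, assuming nothing about the actual setup beyond that (image and container names are placeholders of mine):

```shell
# Sketch only: jenkins-slave-dind and jenkins-slave-1 are placeholders.
# The slave container must be privileged for dockerd to start inside it:
docker run -d --privileged \
  --name jenkins-slave-1 \
  jenkins-slave-dind

# Inside the slave container, the inner daemon has to be launched before
# any build tries to use it, e.g. from the container's entrypoint script:
dockerd --host=unix:///var/run/docker.sock &
```

This is the true docker-in-docker variant (an inner daemon with its own image cache), as opposed to the socket-mounting approach above.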
Story: I was really unhappy with two hypervisors I had for test, and they died at the same time. Instead of reinstalling the hypervisor I didn't like, I decided to see what would happen if I just installed a Linux OS and Docker Engine, and replaced every single one of the VMs (each of which was just an OS running a single service) with Docker containers.

Now: a few of our test builds spin up a Docker container to do some work and then exit, sending the output to a volume mounted by the container. Since I needed the new containers to be able to reproduce this behaviour as well, that means I have to run a container inside a container!
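That test-build pattern (a short-lived container that drops its output into a mounted volume and exits) can be sketched roughly like this; the host path and image are assumptions of mine, not from the actual builds:

```shell
# Hypothetical sketch of the test-build pattern: a throwaway container
# does its work, writes results into a host-mounted volume, and exits.
mkdir -p /srv/build-output            # host-side output path (assumed)
docker run --rm \
  --volume /srv/build-output:/output \
  alpine sh -c 'echo "test results" > /output/results.txt'

# The results survive on the host after the container is gone:
cat /srv/build-output/results.txt
```

When this runs inside a slave that shares the host's Docker socket, the --volume path must exist on the host; with an inner dockerd, it only needs to exist inside the slave container.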