pierrediancourt opened 7 years ago
I don't think it's CoreOS, but the Docker version. As far as I know, the Docker API changed a lot from 1.11 to 1.12. You can fork the repo and install Docker 1.11.2 on the Jenkins image here: https://github.com/stefanprodan/jenkins/blob/master/Dockerfile#L29
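A sketch of that change, pinning the engine package in the Dockerfile (treat the exact version string as an assumption; the right one depends on what the apt repo offers for the image's base):

```dockerfile
# Pin docker-engine to the 1.11.x line instead of installing the latest;
# the ~jessie suffix assumes the image's Debian Jessie base.
RUN apt-get update \
 && apt-get install -y docker-engine=1.11.2-0~jessie \
 && rm -rf /var/lib/apt/lists/*
```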
Thanks a lot, I'll explore that path and keep you informed here.
Running `docker -v` on my freshly built image returns this:
`Docker version 1.11.2, build b9f10c9`
To build the image, I cloned the URL you linked me and edited the docker-engine package apt-get is installing (`docker-engine=1.11.2-0~jessie`, which is the only 1.11.2 version I could download; see https://apt.dockerproject.org/repo/dists/debian-jessie/main/filelist)
As a reminder, my host (CoreOS) Docker version is `Docker version 1.12.3, build 6b644e`,
which is almost the same, differing only in the build number...
Sadly I can't get closer, but I think it's close enough to guess that the problem is somewhere else.
What about the docker.sock permissions? Might be a good lead, no?
I had to `chmod 777 /run/docker.sock` in Fedora 23, then it worked. I found this out by logging into the container with `docker exec -it jenkins-ci /bin/bash` and running `docker info`, which couldn't connect to the Docker daemon.
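That fix can be reproduced in miniature on any Linux box. Here is a sketch using a throwaway file as a stand-in for `/run/docker.sock` (the real target is the socket on the host, and the chmod there needs root):

```shell
#!/bin/sh
# Stand-in demo for the workaround above: on a real host the target
# would be /run/docker.sock and the chmod would be run with sudo.
sock=$(mktemp)          # stand-in for /run/docker.sock
chmod 660 "$sock"       # typical default: root and the docker group only
stat -c '%a' "$sock"    # prints 660
chmod 777 "$sock"       # the workaround: world-accessible (convenient but insecure)
stat -c '%a' "$sock"    # prints 777
rm -f "$sock"
```

Note that `chmod 666` would already be enough for socket I/O; 777 additionally sets the execute bit, which a socket doesn't need.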
In the down.sh, why would you run `rm -rf /home/$(whoami)/jenkins_home`?
Thanks for your contribution, sofuca.
So you executed something like `docker exec -u root ${NAME} /bin/chmod -v a+s $(which docker)` (untested) after running the Jenkins container? Because there's no sudo in the container, and running as the jenkins user you can't use chmod on the .sock file: no rights for that.
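For reference, here is what `chmod a+s` does to a file's mode bits, demonstrated on a scratch file (in the command above the target was `$(which docker)` inside the container, which is why it has to run as root via `-u root`):

```shell
#!/bin/sh
# Demo of the a+s mode change from the command above, on a scratch file.
f=$(mktemp)
chmod 755 "$f"          # ordinary executable permissions
chmod a+s "$f"          # adds the setuid and setgid bits (o+s is a no-op)
stat -c '%a' "$f"       # prints 6755
rm -f "$f"
```

With the setuid bit on a root-owned docker client binary, the client would run as root regardless of who invokes it, which is why this sidesteps the socket permissions, and also why it carries essentially the same risks as opening the socket itself.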
I really advise you both to read the following post and, moreover, to watch the video in it to fully understand the risks of the container we're discussing. I'm currently wondering if I shouldn't look for another way to fulfill my CI objectives (not passing a docker.sock to a Docker container). https://www.lvh.io/posts/dont-expose-the-docker-socket-not-even-to-a-container.html
In my opinion, the part of the down.sh script you're talking about is just there to let stefanprodan easily reset his environment while testing his work on the container.
Hey, sorry for the confusion, you need to change the permissions on the docker.sock on the host, not inside the Jenkins container.
Yeah, I had some hesitation about it. I'm not sure I'll do that, but thanks for your explanation.
Yes, the down.sh resets everything on the host; don't use it if you want to keep your data. Regarding the permissions on the Docker socket, it shouldn't be a security issue if your CI server is behind a firewall. If your Jenkins server has a public IP, then the Docker socket is the least of your problems.
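For anyone else reading, a minimal sketch of what such a down.sh amounts to (the container name `jenkins` is an assumption; the rm path is the one quoted earlier in the thread):

```shell
#!/bin/sh
# Teardown sketch: remove the Jenkins container, then wipe the
# bind-mounted data directory so the next run starts from a clean slate.
docker rm -f jenkins 2>/dev/null || true   # ignore errors if nothing is running
rm -rf "/home/$(whoami)/jenkins_home"      # DESTROYS all Jenkins data
```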
Hi,
I ran your docker container this way
As my Docker version is `Docker version 1.11.2, build bac3bae`, running on CoreOS stable, I checked https://docs.docker.com/engine/reference/api/docker_remote_api/ and guessed I should specify '1.23' for the environment variable DOCKER_API_VERSION.
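The engine-to-API mapping from that page, for the two engine lines in this thread, can be sketched as a small helper (the function name is made up for illustration; the 1.11 → 1.23 and 1.12 → 1.24 pairs are from the Docker remote API docs):

```shell
#!/bin/sh
# Map a Docker engine version to the remote API version it speaks
# (subset of the table at docs.docker.com; only this thread's versions).
api_for_engine() {
  case "$1" in
    1.11.*) echo "1.23" ;;
    1.12.*) echo "1.24" ;;
    *)      echo "unknown" ;;
  esac
}

api_for_engine 1.11.2   # prints 1.23
api_for_engine 1.12.3   # prints 1.24
```

Pinning `DOCKER_API_VERSION=1.23` therefore matches an older 1.11.x client to a newer host daemon, since daemons stay backward compatible with earlier API versions.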
When running the build job from the Pipeline sample provided by your Groovy script, I can see this in the Console Output
I'm getting nothing relevant using `docker logs jenkins`. I can assure you that `/var/run/docker.sock` is a valid path, as the `ls -lah /var/run/d*` command returns me this. So I guess CoreOS is not different enough from other systems to cause the issue, don't you think? Moreover, there's this `ls -lah /var/run/d*` command result, executed in the previously run container (accessed thanks to this command: `docker exec -it jenkins /bin/bash`). `docker -v` displays `Docker version 1.12.3, build 6b644ec` from within the container.