Closed vincent-ferotin closed 7 years ago
Do you have the full stdout/stderr details? Have you tried running manually to see what happens?
Here's the full output of vagrant up:
% vagrant up
Bringing machine 'default' up with 'docker' provider...
==> default: Building the container from a Dockerfile...
default: Sending build context to Docker daemon 7.68kB
default: Step 1/7 : FROM centos:6.9
default: ---> 573de66f263e
default: Step 2/7 : RUN set -x && yum update -y && yum install -y sudo python openssh-server vim && useradd vagrant --create-home --user-group && service sshd start
default: ---> Running in 2d5733359d76
default: The command '/bin/sh -c set -x && yum update -y && yum install -y sudo python openssh-server vim && useradd vagrant --create-home --user-group && service sshd start' returned a non-zero code: 139
A Docker command executed by Vagrant didn't complete successfully!
The command run along with the output from the command is shown
below.
Command: ["docker", "build", "/home/work/work/mowst/work/vagrant/docker", {:notify=>[:stdout, :stderr]}]
Stderr: The command '/bin/sh -c set -x && yum update -y && yum install -y sudo python openssh-server vim && useradd vagrant --create-home --user-group && service sshd start' returned a non-zero code: 139
Stdout: Sending build context to Docker daemon 7.68kB
Step 1/7 : FROM centos:6.9
---> 573de66f263e
Step 2/7 : RUN set -x && yum update -y && yum install -y sudo python openssh-server vim && useradd vagrant --create-home --user-group && service sshd start
---> Running in 2d5733359d76
Asking Docker only to build the image, from the dedicated directory containing the Dockerfile, fails with the following output:
% cd vagrant/docker
% docker build .
Sending build context to Docker daemon 7.68kB
Step 1/7 : FROM centos:6.9
---> 573de66f263e
Step 2/7 : RUN set -x && yum update -y && yum install -y sudo python openssh-server vim && useradd vagrant --create-home --user-group && service sshd start
---> Running in 5ec19f7893f5
The command '/bin/sh -c set -x && yum update -y && yum install -y sudo python openssh-server vim && useradd vagrant --create-home --user-group && service sshd start' returned a non-zero code: 139
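A quick way to decode that status code: shell exit statuses above 128 encode 128 plus a signal number, so 139 corresponds to signal 11 (SIGSEGV). A minimal sketch:

```shell
# Exit statuses above 128 mean the process was killed by a signal:
# status = 128 + signal number, so 139 -> signal 11 (SIGSEGV).
status=139
if [ "$status" -gt 128 ]; then
  sig=$((status - 128))
  echo "killed by signal $sig"
fi
```

In other words, the yum process inside the container is segfaulting rather than reporting a normal package-manager error.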
Here's the full Dockerfile:
FROM centos:6.9
RUN set -x \
&& yum update -y \
&& yum install -y sudo python openssh-server vim \
#&& groupadd sudo \
#&& useradd vagrant --create-home --user-group --groups sudo,wheel
&& useradd vagrant --create-home --user-group \
&& service sshd start
#&& sed -i.bkp -e \
# 's/%sudo\s\+ALL=(ALL\(:ALL\)\?)\s\+ALL/%sudo ALL=NOPASSWD:ALL/g' \
# /etc/sudoers
RUN echo 'root:root' | chpasswd
RUN echo 'vagrant:vagrant' | chpasswd
RUN mkdir /home/vagrant/.ssh \
&& chmod 700 /home/vagrant/.ssh
ADD vagrant_keys/vagrant.pub /home/vagrant/.ssh/authorized_keys
RUN chown -R vagrant:vagrant /home/vagrant/.ssh
Does this image have yum-plugin-ovl installed?
Since the image is built directly from https://hub.docker.com/_/centos/ and given that I couldn't find any yum-ovl binary in its /usr/bin directory, I think this package is not installed. But how can I be sure?
Try to yum install yum-plugin-ovl.
Was your older setup using AUFS instead of overlay2?
Mmh, I probably don't understand how to install yum-plugin-ovl, since I'm very new to Docker. I tried inserting RUN yum install -y yum-plugin-ovl as the second line of the Dockerfile, but, of course, it fails with the same error:
% docker build .
Sending build context to Docker daemon 20.48kB
Step 1/8 : FROM centos:6.9
---> 573de66f263e
Step 2/8 : RUN yum install -y yum-plugin-ovl
---> Running in 7f21e5af6aa1
The command '/bin/sh -c yum install -y yum-plugin-ovl' returned a non-zero code: 139
Regarding your question about AUFS/overlay2, I can't answer: these are far too advanced options for the newbie I am, sorry! I think these options are not accessible through the Vagrantfile...
Hmm, it may be unrelated. I see yum-plugin is installed by default in the centos:6.9 image.
ping @justincormack re: 4.11 failures.
I've tested all versions of CentOS images, modifying the first Dockerfile line (FROM centos:XX), and the result is: all 6.x fail, all 7.x succeed:
FROM centos:6.9      # FAILS
FROM centos:6.8      # FAILS
FROM centos:6.7      # FAILS
FROM centos:6.6      # FAILS
FROM centos:7.3.1611 # WORKS
FROM centos:7.2.1511 # WORKS
FROM centos:7.1.1503 # WORKS
FROM centos:7.0.1406 # WORKS
yum plugins can be disabled by default; check if they're enabled: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/sec-Yum_Plugins.html
the ovl plugin is needed if you want to use yum on a system using overlay file system. Possibly it's enabled by default on the 7.x versions, and disabled by default on 6.x
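For reference, yum only loads plugins at all when plugins=1 is set in the [main] section of /etc/yum.conf, and each plugin (such as ovl) has its own enabled= switch under /etc/yum/pluginconf.d/ (paths per the RHEL 6 deployment guide linked above). A self-contained sketch that parses a sample config rather than the real host files:

```shell
# yum loads plugins only when "plugins=1" appears in the [main] section
# of /etc/yum.conf; each plugin (e.g. ovl) also has its own "enabled="
# switch in /etc/yum/pluginconf.d/. A sample config is written here so
# the snippet runs anywhere:
conf=$(mktemp)
cat > "$conf" <<'EOF'
[main]
cachedir=/var/cache/yum
plugins=1
EOF
if grep -q '^plugins=1' "$conf"; then
  echo "yum plugins enabled"
else
  echo "yum plugins disabled"
fi
rm -f "$conf"
```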
The problem is that running docker run -i -t centos:6.9 /bin/bash (to examine the system) shuts the container down immediately, giving no shell -- whereas running docker run -i -t centos:7 /bin/bash works as expected... The behavior is the same if I instead use a minimal Dockerfile as follows:
FROM centos:6.9
CMD ["/bin/bash"]
Running on Linux 4.9.30+deb9u2, docker run -i -t centos:6.9 /bin/bash lets me connect through a tty, and then:
# yum info yum
Loaded plugins: fastestmirror, ovl
base | 3.7 kB 00:00
base/primary_db | 4.7 MB 00:00
extras | 3.4 kB 00:00
extras/primary_db | 29 kB 00:00
updates | 3.4 kB 00:00
updates/primary_db | 1.9 MB 00:00
Installed Packages
Name : yum
Arch : noarch
Version : 3.2.29
Release : 81.el6.centos
Size : 4.6 M
Repo : installed
From repo : CentOS
Summary : RPM package installer/updater/manager
URL : http://yum.baseurl.org/
License : GPLv2+
Description : Yum is a utility that can check for and automatically download and
: install updated RPM packages. Dependencies are obtained and downloaded
: automatically, prompting the user for permission as necessary.
(Updated issue description with some material from above comments)
So, yes, on a default (4.9) Debian Stretch kernel, it all seems to work.
Client:
Version: 17.06.0-ce
API version: 1.30
Go version: go1.8.3
Git commit: 02c1d87
Built: Fri Jun 23 21:17:22 2017
OS/Arch: linux/amd64
Server:
Version: 17.06.0-ce
API version: 1.30 (minimum version 1.12)
Go version: go1.8.3
Git commit: 02c1d87
Built: Fri Jun 23 21:16:12 2017
OS/Arch: linux/amd64
Experimental: false
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 3
Server Version: 17.06.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfb82a876ecc11b5ca0977d1733adbe58599088a
runc version: 2d41c047c83e09a6d61d464906feb2a2f3c52aa4
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.0-3-amd64
Operating System: Debian GNU/Linux 9 (stretch)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.957GiB
Name: debian-2gb-sfo2-01
ID: DNN3:J4VI:DJUB:JF7Q:AG74:NZBO:NKFM:TDEQ:KC4H:QHTS:QSAO:YG3H
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
uname -a
4.9.0-3-amd64 #1 SMP Debian 4.9.30-2 (2017-06-12) x86_64 GNU/Linux
running:
docker build -t repro-58 -<<EOF
FROM centos:6.9
RUN yum update -y
CMD ["/bin/bash"]
EOF
Works correctly. Is there a specific reason you're switching to a different kernel?
Well, my distro is Debian testing, and the new 4.11 kernel (https://packages.debian.org/buster/linux-image-4.11.0-1-amd64) was introduced recently. Do you think I should also file a bug report on the Debian bugtracker?
Ah! Sorry, missed that you were on testing. Would be good to have more clarity as to what the root cause is; could be due to changes in overlayfs in the kernel.
ping @rn @justincormack anything that came up in testing for LinuxKit?
This is probably related to the changes in vsyscall linking in the 4.11 kernel. Try booting the kernel with vsyscall=emulate and see if it helps. This does run OK under the linuxkit 4.11 kernel config without issues, so it is to do with the kernel config.
cc @ijc
Hi, specifying this option in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="vsyscall=emulate"
lets docker build . and docker run -t -i centos:6.9 /bin/bash run successfully! Thanks a lot :-)
Perhaps we should add a check to the check-config.sh script https://github.com/moby/moby/blob/master/contrib/check-config.sh
@justincormack wdyt?
@vincent-ferotin perhaps you're interested in contributing, and opening a pull request for that?
There already is one...
Oh, ha!
I think it's safe to close this issue then; nothing else to be done 👍
Just to be sure I fully understand: what behavior does the additional check in check-config.sh introduce? In the end, will Docker run out-of-the-box on Linux kernel 4.11 with CentOS 6.x images, or will the kernel configuration pointed out by justincormack be needed -- forever?
Debian has decided to enable this option in their 4.11 kernel, although they did this once before and reverted it. So you will need this boot option forever, yes. I doubt that Red Hat will enable it on their kernels; not sure about other vendors.
The reason is that it is a security risk enabling it, and an environment running only modern code does not need it. It is fairly unusual for Linux distros to break old applications, but the kernel boot option does give you an override.
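One way to see whether a given kernel still provides the legacy vsyscall page (a sketch assuming an x86-64 Linux host; not a command from this thread) is to look for the [vsyscall] mapping in /proc/self/maps:

```shell
# On x86-64, the fixed legacy vsyscall page appears as a "[vsyscall]"
# mapping in every process. Kernels built without vsyscall support (and
# booted without vsyscall=emulate) expose no such mapping, and the old
# glibc in CentOS 6 images segfaults when it tries to call into it.
if grep -q '\[vsyscall\]' /proc/self/maps 2>/dev/null; then
  echo "vsyscall page present"
else
  echo "vsyscall page absent"
fi
```

Which branch you see depends on the kernel build and boot parameters of the host you run it on.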
Thanks for the clarification!
Hi @vincent-ferotin, I noticed in the comment here that you were able to fix this issue by updating the /etc/default/grub file. How can this be done in a Dockerfile? Do you have an example/gist of this that I could see?
@jacobmetrick you can't; you have to change it on the host system.
To debug error 139, break up your RUN line into multiple RUN lines so you know which command fails. In my case I had to COPY a file into the image before trying to RUN it:
COPY blueprint/src/main/python/setup.py setup.py
RUN python setup.py
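The effect of splitting is easy to demonstrate with plain shell: an && chain reports only one combined exit status, while separate steps pinpoint the failure (run_step is a hypothetical helper used purely for illustration):

```shell
# A chain like "a && b && c" surfaces only a single exit status, so a
# Dockerfile RUN that combines many commands hides which one failed.
# Running the steps one by one makes the failing command obvious:
run_step() {
  sh -c "$1"
  echo "step '$1' exited with $?"
}
run_step "true"    # stands in for e.g. "yum update -y"
run_step "false"   # stands in for the command that actually fails
run_step "true"    # stands in for e.g. "useradd vagrant ..."
```

The same principle applies in a Dockerfile: one command per RUN while debugging, then recombine once the culprit is found.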
Hi All,
We have a Dockerfile for ClamAV (anti-virus scanning of uploaded files). It is built on top of a Spring Boot API. The Dockerfile has the following commands:
FROM mkodockx/docker-clamav:latest
MAINTAINER lokori <antti.virtanen@iki.fi>
#RUN echo "deb http://ftp.de.debian.org/debian jessie-backports main" >> /etc/apt/sources.list
RUN apt-get update && apt install -t jessie-backports -y openjdk-8-jdk ca-certificates-java
# Set environment variables.
ENV HOME /root
# Get the JAR file
CMD mkdir /var/clamav-rest
COPY target/clamav-rest-1.0.2.jar /var/clamav-rest/
# Define working directory.
WORKDIR /var/clamav-rest/
# Open up the server
EXPOSE 8082
ADD bootstrap.sh /bootstrap2.sh
ENTRYPOINT ["/bootstrap2.sh"]
From the above lines, mkdir is not creating the directory, and it is not showing any error messages. Please share your experiences if you have faced similar issues. I am using the following Linux distribution: SMP Debian 3.16.56-1+deb8u1 (2018-05-08) x86_64 GNU/Linux
Thanks in advance
In case someone stumbles on this closed issue, here's a quick howto:
Description: centos:6 docker image fails to start, no output given.
Workaround: append vsyscall=emulate to the GRUB_CMDLINE_LINUX_DEFAULT line in your /etc/default/grub, e.g.
GRUB_CMDLINE_LINUX_DEFAULT="consoleblank=0 systemd.show_status=true elevator=noop console=tty1 console=ttyS0 vsyscall=emulate"
then update grub:
update-grub
and reboot the host machine:
reboot
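After the reboot you can confirm the option actually reached the running kernel by inspecting /proc/cmdline. The sketch below checks a sample string so it is self-contained; on a real host, read the file instead:

```shell
# /proc/cmdline holds the parameters the running kernel was booted with.
# A sample value is used here so the snippet runs anywhere; on a real
# host substitute: cmdline="$(cat /proc/cmdline)"
cmdline="root=/dev/sda1 console=tty1 vsyscall=emulate"
case " $cmdline " in
  *" vsyscall=emulate "*) echo "vsyscall=emulate is active" ;;
  *) echo "vsyscall=emulate is NOT set" ;;
esac
```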
Expected behavior
docker CE 17.06 should work the same for CentOS 6.x images with either the 4.9 or the 4.11 Linux kernel, at vagrant up, docker build ., or docker run -t -i <image> /bin/bash. This failure is encountered only with the 6.x branch of CentOS together with the 4.11 branch of Linux (it works fine with the kernel at 4.9, or with the kernel at 4.11 and the 7.x branch of CentOS).
Actual behavior
Using Docker through Vagrant, after updating the distribution (Debian testing, today), for a CentOS 6.9 image, the latter now returns the following error: default: The command '/bin/sh -c set -x && yum update -y [...] ' returned a non-zero code: 139. This error is also reported when using the Dockerfile alone, with docker build .. Also, using images directly from https://hub.docker.com/_/centos/ (instead of building our own images from them) with docker run -t -i <image> /bin/bash fails to give the user a shell.
Steps to reproduce the behavior
Here's a minimal Dockerfile for which the error occurred: vagrant up and docker build . fail with the error, and docker run -t -i <image> /bin/bash does not give the user a shell inside the container.
Output of docker version:
Output of docker info:
Additional environment details (AWS, VirtualBox, physical, etc.)
Linux hostname 4.11.0-1-amd64 #1 SMP Debian 4.11.6-1 (2017-06-19) x86_64 GNU/Linux
Linux hostname 4.9.0-3-amd64 #1 SMP Debian 4.9.30-2+deb9u2 (2017-06-26) x86_64 GNU/Linux
yum-plugin-ovl is installed on the centos:6.x image.