xgenvn opened this issue 6 years ago
Hi @xgenvn, have you fixed that issue? I'm at the same point, but in my case it occurs when Docker is run by a pipeline using GitLab CI/CD... I'm not sure whether it is a Docker or a GitLab-related issue.
@AdrianAntunez I installed a specific version: 17.09.1~ce-0~ubuntu
and it's working fine now.
@xgenvn I will give it a try ASAP and let you know whether it works for me. Thx!!
@AdrianAntunez I ran into the same issue; this configuration is working for me:
image: docker:17.09
services:
  - docker:17.09-dind
Same issue here, I'm running:
Version: 17.12.1-ce
API version: 1.35
Go version: go1.9.4
Git commit: 7390fc6
Built: Tue Feb 27 22:17:40 2018
OS/Arch: linux/amd64
@philippbussche hey Philipp, so Docker itself was the only problem, is that right? I'm also facing the same problem.
My version
Client:
Version: 17.12.0-ce
API version: 1.35
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:03:51 2017
OS/Arch: darwin/amd64
Server:
Engine:
Version: 17.12.0-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:12:29 2017
OS/Arch: linux/amd64
Experimental: true
For me, Docker Hello World is working without an issue though.
I ran into a similar issue and found a temporary fix elsewhere:
sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
Hello @dinar-dalvi, thanks a lot! Do I need to do this in the root directory of the server, or in the app folder (for example, meteor/hello-app/)?
@dinar-dalvi I tried this before; however, the systemd cgroup already exists and is mounted. Downgrading seems to fix the issue.
@xgenvn: downgrading the container to which version?
@jkbaseer you might want to try
sudo apt remove docker-ce
sudo apt install docker-ce=17.09.1~ce-0~ubuntu
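If you do downgrade, it may also be worth holding the package so a later apt upgrade doesn't move you back onto a broken version, e.g. sudo apt-mark hold docker-ce (just a general apt tip, not something specific to this issue).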
Am I doing it right?
(PS: I also have a few containers I'm using for another project. Will those be removed as well, or is there a single command just to downgrade?)
On Devuan ASCII this worked (over 18.03):
sudo apt-get install docker-ce=17.09.1~ce-0~debian
@jkbaseer no, I was speaking of Ubuntu command line, I'm not sure on Mac.
Hey guys, sorry, I gave up on this even after installing the latest Docker CE on Ubuntu. I just created a new VM and used my container there, and the app is back, thankfully :)
It took so much to get around this cgroup problem. Appreciate all your help!
In my case the runner was hosted in AWS. I tried everything and nothing worked, so I decided to change the AMI (the virtual image that includes the OS). Swapping from Amazon Linux v1 to Amazon Linux v2 apparently solved my issue. I don't know why, though :confused:
I'm facing this too with Docker version 18.03.0-ce, build 0520e24
Solved by downgrading to the previous version of Docker CE, as mentioned by @xgenvn
I've just tried updating to the newest version again and it seems fixed for me in Docker version 18.03.1-ce, build 9ee9f40
Still facing this issue with 18.03.1~ce-0~ubuntu
Ran into this as well, running under both a root terminal and a normal terminal, using 18.03.1-ce
Confirmed it works when downgrading to 17.09.1~ce-0~ubuntu
> docker version
Client:
Version: 18.03.1-ce
API version: 1.37
Go version: go1.9.5
Git commit: 9ee9f40
Built: Thu Apr 26 07:17:20 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.03.1-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.5
Git commit: 9ee9f40
Built: Thu Apr 26 07:15:30 2018
OS/Arch: linux/amd64
Experimental: false
> docker info
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 1
Server Version: 18.03.1-ce
Storage Driver: vfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.12.14-aufs
Operating System: Ubuntu 16.04 LTS (containerized)
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 62.88GiB
Name: antlet13
ID: KJZ3:L22I:HNCT:7O2X:VF6S:TRVG:ZCTT:IPR7:BHZR:AB6I:MYUY:KIBJ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
> uname -a
Linux antlet13 4.12.14-aufs #7 SMP Tue Jan 9 00:08:41 -00 2018 x86_64 x86_64 x86_64 GNU/Linux
> ls /sys/fs/cgroup/
blkio cpu cpuacct cpu,cpuacct cpuset devices freezer hugetlb memory net_cls net_cls,net_prio perf_event pids systemd
So basically, any image that expects systemd sysfs entries fails to start with this error if you are running on a non-systemd-based Linux system, correct? Would others here concur with that statement?
@CpuID how do I know whether "the image expects systemd sysfs entries"?
A difficult assumption to make ahead of time. The answer might need to be Docker passing through a mocked set of sysfs entries to the container sysfs when the host OS doesn't have them (due to a lack of systemd)? Not 100% sure whether that's wise, or whether it might lead to more confusion... We hit the issue starting the official Elasticsearch Docker images, which I don't believe actually use systemd, but maybe they call something that expects systemd to be configured at least? shrug
Got this during a docker-compose up for Concourse's dev compose config, on the concourse-db (Postgres image), running inside docker-machine on a Mac with VirtualBox.
It works again once I completely destroy the Docker machine and re-create it.
Creating network "local-concourse_default" with the default driver
Creating local-concourse_concourse-db_1 ...
Creating local-concourse_concourse-db_1 ... error
ERROR: for local-concourse_concourse-db_1 Cannot start service concourse-db: cgroups: cannot find cgroup mount destination: unknown
ERROR: for concourse-db Cannot start service concourse-db: cgroups: cannot find cgroup mount destination: unknown
Encountered errors while bringing up the project.
docker version
Client:
Version: 18.06.0-ce
API version: 1.38
Go version: go1.10.3
Git commit: 0ffa825
Built: Wed Jul 18 19:04:39 2018
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.06.0-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: 0ffa825
Built: Wed Jul 18 19:13:39 2018
OS/Arch: linux/amd64
Experimental: false
docker info
Containers: 2
Running: 1
Paused: 0
Stopped: 1
Images: 3
Server Version: 18.06.0-ce
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 31
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d64c661f1d51c48782c9cec8fda7604785f93587
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.93-boot2docker
Operating System: Boot2Docker 18.06.0-ce (TCL 8.2.1); HEAD : 1f40eb2 - Thu Jul 19 18:48:09 UTC 2018
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.858GiB
Name: snap-dev
ID: S3SW:XQYT:MUTK:OZ45:HKRO:J65H:F6WE:BDR3:5TPD:YECG:QJDU:2GKZ
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
ls /sys/fs/cgroup/
blkio/ cpuacct/ devices/ hugetlb/ net_cls/ perf_event/
cpu/ cpuset/ freezer/ memory/ net_prio/ pids/
uname -a
Linux snap-dev 4.9.93-boot2docker #1 SMP Thu Jul 19 18:29:50 UTC 2018 x86_64 GNU/Linux
I too can confirm that downgrading to 17.09.1~ce-0~ubuntu solves this issue for me. Every newer Docker CE version from the stable channel produces the error message.
I'm running Docker inside an OpenVZ container (I have to deal with it, sadly), so this might contribute to the problem. The cgroup filesystems seem to be mounted, but I don't trust a thing this OpenVZ container says anymore.
I'm omitting information about the machine because it's close to what others have posted already, apart from the virtualisation maybe. Let me know if you need further information, I'll add it then.
We're having the same issue with Docker version 18.03.1-ce running on AWS ECS with Concourse in Docker.
@kgrodzicki me too.
In ECS, the task (which is just a Docker container) keeps starting and stopping over and over again.
The reason the task stops is CannotStartContainerError: API error (500): cgroups: cannot find cgroup mount destination: unknown
I'm running Docker inside LXC containers.
An Ubuntu 18.04 LXC container can run Docker (Docker version 18.06.1-ce, build e68fc7a) without problems.
An Alpine 3.7 LXC container cannot run Docker (Docker version 18.06.1-ce, build 3b19bc6ba8).
My output:
Building app
Step 1/7 : FROM node:8
---> 6f62c0cdc461
Step 2/7 : WORKDIR /usr/src/app
---> Using cache
---> 66cd8fd0bcfb
Step 3/7 : COPY package*.json ./
---> Using cache
---> d7dce2de9ad6
Step 4/7 : RUN npm install
---> Running in 067b8a8620d6
ERROR: Service 'app' failed to build: cgroups: cannot find cgroup mount destination: unknown
Both containers were launched with security options (-c security.nesting=true -c security.privileged=true).
We have the same issue with the ECS Amazon Linux v2 AMI.
Same issue here on Void Linux (which uses runit, not systemd).
$ uname -a
Linux bradbury 4.18.9_1 #1 SMP PREEMPT Thu Sep 20 05:49:31 UTC 2018 x86_64 GNU/Linux
$ docker version
Client:
Version: 18.06.1-ce
API version: 1.38
Go version: go1.11
Git commit:
Built: Thu Aug 30 08:12:28 2018
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.06.1-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.11
Git commit: v18.06.1-ce
Built: Thu Aug 30 08:12:28 2018
OS/Arch: linux/amd64
Experimental: false
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
d1725b59e92d: Pull complete
Digest: sha256:0add3ace90ecb4adbf7777e9aacf18357296e799f81cabc9fde470971e499788
Status: Downloaded newer image for hello-world:latest
docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.
ERRO[0001] error waiting for container: context canceled
I have the same issue on Ubuntu Server.
We're having the same issue with Docker version 18.03.1-ce running on AWS ECS with Concourse in Docker.
Same here...
It appears in swarm mode only; in other cases there are no errors for me.
A fix/workaround for Void Linux is here: #9811 (comment)
I guess the issue arises from the dependency on a systemd cgroup. (Note that Void Linux uses runit, not systemd.)
In my case the problem was caused by different /sys/fs/cgroup hierarchies on the host and in the LXC containers where Docker runs. On the host, all cgroup controllers are mounted separately; in the containers, systemd comounted cpu,cpuacct and net_cls,net_prio. I fixed the problem by adding JoinControllers= to /etc/systemd/system.conf in the LXC containers.
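A minimal sketch of what that edit might look like (the [Manager] section and the empty value are how I read it; adjust for your own setup):
# /etc/systemd/system.conf inside the LXC container
# An empty JoinControllers= stops systemd from comounting controllers,
# so the container's cgroup layout matches the host's.
[Manager]
JoinControllers=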
$ docker --version
Docker version 18.09.1, build 4c52b90
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest
docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.
$ uname -a
Linux host 4.4.0-141-generic #167-Ubuntu SMP Wed Dec 5 10:40:15 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Same issue here. Using Amazon AMI w/ Docker 18.06.1-ce
I'm having the same issue. I initially started with the latest and got this error. Then I tried to remove and step down a version until it started working.
The latest I could use without error was:
Misc Details:
> uname -a
Linux antlet25 4.10.13-aufs #3 SMP Fri Aug 11 16:57:44 PDT 2017 x86_64 x86_64 x86_64 GNU/Linux
> lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04 LTS
Release: 16.04
Codename: xenial
docker:18.09.4-rc1-dind seems to work for me. Anything lower doesn't work until you go all the way back down to 17.09.1.
Still not working for me with 18.09.6:
mo@ubuntu:~$ uname -a
Linux ubuntu 4.2.8 #2 SMP Thu Apr 25 04:20:11 CST 2019 armv7l armv7l armv7l GNU/Linux
mo@ubuntu:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial
mo@ubuntu:~$ docker --version
Docker version 18.09.6, build 481bc77
mo@ubuntu:~$ docker run hello-world
docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.
ERRO[0003] error waiting for container: context canceled
Me too, same problem.
root@ubuntu-hassio:~# uname -a
Linux ubuntu-hassio 4.2.8 #2 SMP Thu Apr 25 07:54:38 CST 2019 armv7l armv7l armv7l GNU/Linux
root@ubuntu-hassio:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
root@ubuntu-hassio:~# docker --version
Docker version 18.09.6, build 481bc77
root@ubuntu-hassio:~# docker run hello-world
docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.
ERRO[0002] error waiting for container: context canceled
root@ubuntu-hassio:~#
@xgenvn
I also tried
sudo apt remove docker-ce
sudo apt install docker-ce=17.09.1~ce-0~ubuntu
but the 17.09.1 package was not found.
I have partially analyzed the problem and have a workaround that works for me. I reproduced the problem on Docker 18.09.5 and am now running an nginx web server container with it.
The error appears to be triggered when not all control groups listed in /proc/self/cgroup have a matching entry in /proc/self/mountinfo. Why this should be a fatal error, and why the error message could not mention at least the first unmatched control group, are things I have not yet analyzed, so I do not currently have a fix to submit to docker-ce. I would welcome someone who knows Docker better than I do developing such a fix, although I may try to do so if nobody beats me to it.
Judging from the comments here and my experience, I guess the abort can happen if you run Docker/containerd in an Ubuntu 19.04 LXC container on a host Linux operating system that does not use systemd.
On my system, the workaround was to do the following mount commands as superuser in the container in which I wanted to run containerd:
mount -t cgroup -o net_cls none /sys/fs/cgroup/net_cls
mount -t cgroup -o cpuacct none /sys/fs/cgroup/cpuacct
mount -t cgroup -o cpu none /sys/fs/cgroup/cpu
mount -t cgroup -o net_prio none /sys/fs/cgroup/net_prio
The exact mounts you need might be different, so I have attached a Go program that should print out the mount commands that might make Docker work in your container. I attached it as a .txt file to appease GitHub. You should be able to run it like this:
mv cgroup-mounts.go.txt cgroup-mounts.go
go run cgroup-mounts.go
The attached program is my attempt at adapting the failing Go code in Docker containerd so that it instead prints the mount commands that make "docker run ..." work for me, at least in the case of trying to run an nginx server.
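For readers who just want the gist without downloading the attachment, a rough sketch of the same idea looks something like the program below. This is an illustrative reconstruction, not the attached file; the /sys/fs/cgroup/<subsystem> mount targets and the parsing details are assumptions.

// Illustrative sketch only (not the attached cgroup-mounts.go): list the cgroup
// subsystems named in /proc/self/cgroup that have no matching cgroup mount in
// /proc/self/mountinfo, and print a candidate mount command for each one.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// cgroupSubsystems returns the subsystem names (e.g. "cpu", "net_cls",
// "name=systemd") listed in /proc/self/cgroup.
func cgroupSubsystems() ([]string, error) {
	f, err := os.Open("/proc/self/cgroup")
	if err != nil {
		return nil, err
	}
	defer f.Close()
	var subs []string
	s := bufio.NewScanner(f)
	for s.Scan() {
		// Each line looks like "hierarchy-ID:controller-list:cgroup-path".
		parts := strings.SplitN(s.Text(), ":", 3)
		if len(parts) < 3 || parts[1] == "" {
			continue // skip the cgroup v2 entry "0::/"
		}
		subs = append(subs, strings.Split(parts[1], ",")...)
	}
	return subs, s.Err()
}

// mountedCgroupOptions collects the super options of every mounted cgroup
// filesystem in /proc/self/mountinfo (controller names, name=..., rw, ...).
func mountedCgroupOptions() (map[string]bool, error) {
	f, err := os.Open("/proc/self/mountinfo")
	if err != nil {
		return nil, err
	}
	defer f.Close()
	opts := map[string]bool{}
	s := bufio.NewScanner(f)
	for s.Scan() {
		// Everything after " - " is: fstype, mount source, super options.
		halves := strings.SplitN(s.Text(), " - ", 2)
		if len(halves) != 2 {
			continue
		}
		fsInfo := strings.Fields(halves[1])
		if len(fsInfo) < 3 || fsInfo[0] != "cgroup" {
			continue
		}
		for _, o := range strings.Split(fsInfo[2], ",") {
			opts[o] = true
		}
	}
	return opts, s.Err()
}

func main() {
	subs, err := cgroupSubsystems()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	mounted, err := mountedCgroupOptions()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, sub := range subs {
		if mounted[sub] {
			continue
		}
		target := strings.TrimPrefix(sub, "name=")
		opt := sub
		if strings.HasPrefix(sub, "name=") {
			opt = "none," + sub // named hierarchy with no controllers
		}
		fmt.Printf("mount -t cgroup -o %s none /sys/fs/cgroup/%s\n", opt, target)
	}
}

Running it as root inside the affected container should print one mount command per missing hierarchy, which you can then run by hand.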
The rest of this message is just notes for those of you who would like to see where this came from in the Docker code.
This description applies equally to the result of doing either of the following to get a recent version of the containerd source code:
apt-get source docker.io
...or...
git clone https://github.com/docker/docker-ce.git
From there, search for the error message with grep -r 'cannot find cgroup mount destination', which will show you that the error string is defined here:
docker.io-18.09.5/components/engine/vendor/github.com/containerd/cgroups/errors.go: ErrNoCgroupMountDestination = errors.New("cgroups: cannot find cgroup mount destination")
Doing "grep -r ErrNoCgroupMountDestination reveals that the only other file that actually uses that error identifier is docker.io-18.09.5/components/engine/vendor/github.com/containerd/cgroups/utils.go . The program I attached is adapted the relevant functions in that file and paths.go in the same directory.
Anyhow, I hope this information is helpful. I would be interested in any reports of whether this workaround works for any of you who are still experiencing this problem. ...
Same issue
root@array:/service# docker --version
Docker version 18.09.2, build 6247962
root@array:/service# uname -a
Linux array 4.4.157 #2 SMP Fri Sep 21 00:36:59 CDT 2018 x86_64 Intel(R) Celeron(R) CPU N3150 @ 1.60GHz GenuineIntel GNU/Linux
root@array:/service# mountpoint /sys/fs/cgroup/systemd
/sys/fs/cgroup/systemd is a mountpoint
root@array:/service#
This bug has been hanging around for quite a while; it looks like the dev folks are waiting for it to cool down so they can close it as "won't fix" :) It's no good to leave technical debt lying around.
Looks like this "oh, i shit my pants" bug is some kind of https://github.com/containerd/containerd/tree/master/vendor/github.com/containerd/cgroups related. At least roots of this error grows here.
I was able to get other containers up but this error is a show stopper for me:
Creating docker-registry_registry_1 ... error
ERROR: for docker-registry_registry_1 Cannot start service registry: cgroups: cannot find cgroup mount destination: unknown
ERROR: for registry Cannot start service registry: cgroups: cannot find cgroup mount destination: unknown
ERROR: Encountered errors while bringing up the project.
I am on:
NAME="Amazon Linux AMI"
VERSION="2018.03"
ID="amzn"
ID_LIKE="rhel fedora"
VERSION_ID="2018.03"
PRETTY_NAME="Amazon Linux AMI 2018.03"
ANSI_COLOR="0;33"
CPE_NAME="cpe:/o:amazon:linux:2018.03:ga"
HOME_URL="http://aws.amazon.com/amazon-linux-ami/"
I ran into a similar issue and found a temporary fix elsewhere:
sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
Your hack just worked fine for me. I'm running on Devuan, without systemd...
Hi,
The solution from this post https://forum.linuxconfig.org/t/how-to-install-docker-on-fedora-31-linuxconfig-org/3605/3 worked for me on Fedora 31 with Docker 19.03.5 and kernel 5.3.11-300.fc31.x86_64:
$ sudo dnf install -y grubby
$ sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
$ sudo reboot
Same issue on Alpine LXD container:
~ # docker run hello-world
docker: Error response from daemon: cgroups: cannot find cgroup mount destination: unknown.
ERRO[0002] error waiting for container: context canceled
~ # uname -a
Linux relieved-wolf 4.17.10-041710-generic #201807260825 SMP Thu Jul 26 12:28:11 UTC 2018 x86_64 Linux
~ # cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.11.0
PRETTY_NAME="Alpine Linux v3.11"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
On host:
~ $ lxc config show relieved-wolf
architecture: x86_64
config:
  image.architecture: amd64
  image.description: Alpine 3.11 amd64 (20200118_13:00)
  image.os: Alpine
  image.release: "3.11"
  image.serial: "20200118_13:00"
  image.type: squashfs
  security.nesting: "true"
  security.privileged: "true"
  volatile.base_image: ee283cd0200de9f41da2cb4223f4c09dce598fbe7ee387b3b05e465b5e6876c0
  volatile.eth0.host_name: macc3b3a845
  volatile.eth0.hwaddr: 00:16:3e:95:81:44
  volatile.eth0.last_state.created: "false"
  volatile.idmap.base: "0"
  volatile.idmap.current: '[]'
  volatile.idmap.next: '[]'
  volatile.last_state.idmap: '[]'
  volatile.last_state.power: RUNNING
devices: {}
ephemeral: false
profiles:
- default
- docker
stateful: false
description: ""
Hello, what worked for me was starting the service and setting it to start with the system. As root:
systemctl start docker
systemctl enable docker
sudo mkdir /sys/fs/cgroup/systemd
sudo mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
but of course the mount is gone after a restart
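If you need it to survive reboots, one option (assuming your system still runs /etc/rc.local or a similar local startup script, which is an assumption on my part) is to put those same two commands there so they are re-applied at boot.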
Expected behavior
should run normally.
Actual behavior
Steps to reproduce the behavior
Install the latest stable or edge builds (17.12 or 18.01), using: https://docs.docker.com/install/linux/docker-ce/ubuntu/
Output of docker version:
Output of docker info:
Additional environment details (AWS, VirtualBox, physical, etc.)
AppArmor is also deactivated. Running under a root terminal or a normal terminal returns the same issue. Docker pull is working. Confirmed working under 17.09.1~ce-0~ubuntu.