Closed: ximarx closed this issue 4 years ago
I have also seen this on Ubuntu Linux running docker-ce=17.12.0~ce-0~ubuntu. Reverting to RC4 (17.12.0~ce~rc4-0~ubuntu) helped, in that we have not experienced the problem with that version. Our pattern was docker-compose up, run some tests, then docker-compose down to clean up, and it would just hang until it gave up with "An HTTP request took too long to complete ...". I manually ran docker inspect on the container that wasn't shutting down, and it just hung as well.
It looks related to https://github.com/moby/moby/issues/35933.
Is there something I can provide to help to identify the issue?
Same issue here. It creates serious problems because docker-compose commands rely on docker inspect working.
Here are the things I've tried to fix it, none of which worked:

- Adding the following entries to /etc/hosts:
  127.0.0.1 localunixsocket.local
  127.0.0.1 localunixsocket
  127.0.0.1 localunixsocket.lan

Notes:

- ping localhost works
- docker container ls (and other docker commands) works instantly
- tty in my docker-compose.yml or Dockerfile files

System:

- OS: macOS High Sierra version 10.13.3 (17D47)
- Processor: 3.4 GHz Intel Core i5
- Memory: 40 GB 2400 MHz DDR4 (I'm not using anywhere near that much)
My docker diagnostics ID is 0F286399-29FA-49AB-A3E7-669DB39AD08B
Docker for Mac: version: 17.12.0-ce-mac49 (d1778b704353fa5b79142a2055a2c11c8b48a653)
macOS: version 10.13.2 (build: 17C205)
logs: /tmp/0F286399-29FA-49AB-A3E7-669DB39AD08B/20180128-105656.tar.gz
[OK] db.git
[OK] vmnetd
[OK] dns
[OK] driver.amd64-linux
[OK] virtualization VT-X
[OK] app
[OK] moby
[OK] system
[OK] moby-syslog
[OK] kubernetes
[OK] env
[OK] virtualization kern.hv_support
[OK] slirp
[OK] osxfs
[OK] moby-console
[OK] logs
[OK] docker-cli
[OK] menubar
[OK] disk
I discovered the issue because docker-compose would stop working randomly:
$ docker-compose --verbose logs
compose.config.config.find: Using configuration files: ./docker-compose.yml
docker.auth.find_config_file: Trying paths: ['/Users/narthur/.docker/config.json', '/Users/narthur/.dockercfg']
docker.auth.find_config_file: Found file at path: /Users/narthur/.docker/config.json
docker.auth.load_config: Couldn't find 'auths' or 'HttpHeaders' sections
docker.auth.parse_auth: Auth data for {0} is absent. Client might be using a credentials store instead.
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.25/version HTTP/1.1" 200 566
compose.cli.command.get_client: docker-compose version 1.18.0, build 8dd22a9
docker-py version: 2.6.1
CPython version: 2.7.12
OpenSSL version: OpenSSL 1.0.2j 26 Sep 2016
compose.cli.command.get_client: Docker base_url: http+docker://localunixsocket
compose.cli.command.get_client: Docker version: KernelVersion=4.9.60-linuxkit-aufs, Components=[{u'Version': u'17.12.0-ce', u'Name': u'Engine', u'Details': {u'KernelVersion': u'4.9.60-linuxkit-aufs', u'Os': u'linux', u'BuildTime': u'2017-12-27T20:12:29.000000000+00:00', u'ApiVersion': u'1.35', u'MinAPIVersion': u'1.12', u'GitCommit': u'c97c6d6', u'Arch': u'amd64', u'Experimental': u'true', u'GoVersion': u'go1.9.2'}}], Arch=amd64, BuildTime=2017-12-27T20:12:29.000000000+00:00, ApiVersion=1.35, Platform={u'Name': u''}, Version=17.12.0-ce, MinAPIVersion=1.12, GitCommit=c97c6d6, Os=linux, Experimental=True, GoVersion=go1.9.2
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={u'label': [u'com.docker.compose.project=sudocker', u'com.docker.compose.oneoff=False']})
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.25/containers/json?all=1&limit=-1&filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Dsudocker%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D&trunc_cmd=0&size=0 HTTP/1.1" 200 4338
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 3 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- (u'6c927f8d3ce359352f193a0d85b7bafc8d9a2ab3f83fd590e2617c6c010a65bd')
ERROR: compose.cli.errors.log_timeout_error: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
It looks like the issue has been solved in the containerd version included in Docker for Mac 18.02.0-rc1 (the latest version shipped in the edge channel).
@ximarx Not familiar with containerd. Does that mean this fix is likely to be coming to Docker stable soon? If so, how soon should I expect it? Or should I be thinking about switching to the edge release for a while?
I also reproduced the issue in Version 18.02.0-ce-rc2-mac51 (22446), so... no luck.
Also seeing this issue, any updates?
@sashako @simonellefsen You should add a thumbs-up reaction to the message at the top of this thread, because issues can be sorted by how many thumbs-up reactions they've received.
Me too, this issue has been troubling me for a few weeks now.
I have the same issue on Ubuntu:
~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[...]
e025b88fb98f localhost:5000/tfdocker "/bin/bash /tfdocker…" 2 weeks ago Created tvoeglu_tensorboard_6008
[...]
~$ docker inspect e025
[HANGS FOREVER!]
^C
~$ docker rm e025
[HANGS FOREVER!]
^C
As you can see, this is not only about inspect but also rm.
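Where a hung inspect or rm would otherwise wedge a script, one workaround is to bound the call with GNU coreutils timeout. This is only a sketch, not a fix; the container ID (e025) is the one from the transcript above, and on macOS the binary may be installed as gtimeout (brew install coreutils):

```shell
# Bound a potentially-hanging docker command so the shell gets control
# back. Exit status 124 means timeout killed the command at the deadline.
timeout 10s docker inspect e025 || echo "docker inspect hung or failed"
```

This doesn't unstick the daemon, but it keeps CI jobs and cleanup scripts from blocking forever on the hang.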
After experiencing this issue for a while, I can say that, at least for me, it most commonly occurs after I've let my Mac sleep for a while with Docker machines running. It certainly isn't the only time it happens, though.
I can provide more anecdotal evidence that it seems to correlate with the Mac being asleep. Also, for me, it seems to happen only (or most frequently) to containers that have healthchecks defined. It's highly annoying, though, and it wasn't happening with older Docker versions.
For us, this is definitely not correlated with sleep; it runs on a server 24/7. It seems to correlate more with CTRL-C aborting while starting a container. The containers that remain in the "created" state seem to be the ones that hang. Can anyone confirm this?
docker inspect hangs for a container in swarm mode, Docker 18.02.0-ce.
@su-narthur did you ever find a solution to this?
@julesterrien No, I haven't. Since I use Docker as my developer environment but not in production, it's only an annoyance for me. I reworked my Docker processes to allow me to restart Docker and spin my containers back up as quickly as possible by migrating as much as I could out of my containers' startup scripts and into a script I only use on first-run and when I need to. Hoping the Docker team fixes the issue soon...
Folks, according to @thomasleveil, this is a bug specific to Docker 17.12.0 and Docker 18.01.0. The solution: downgrade to 17.09, or upgrade to Docker 17.12.1
I've been able to get a setup running by raising the Compose timeout to 200 before any up command, e.g.: COMPOSE_HTTP_TIMEOUT=200 docker-compose up
Running: Version 18.03.1-ce-mac65 (24312)
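The same override can be made to stick per project. As a sketch, relying on docker-compose's documented behavior of also reading variables from a .env file next to docker-compose.yml:

```shell
# Persist the higher timeout for this project: docker-compose reads
# COMPOSE_HTTP_TIMEOUT (default 60 seconds) from the environment or from
# a .env file in the project directory.
echo 'COMPOSE_HTTP_TIMEOUT=200' >> .env

# One-off override for a single command instead:
#   COMPOSE_HTTP_TIMEOUT=200 docker-compose up
```

Note this only delays the error reported here; when the daemon is truly wedged, the request never completes no matter how high the timeout is.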
Hey, I have exactly the same issue on 18.03.1~ce-0~ubuntu.
In which version is this fixed?
@gcommit I'm running 18.03.1-ce-mac65 (24312) and it seems like I haven't experienced the issue in a while, but it's hard to know for sure since the issue is so intermittent.
@gcommit @su-narthur I see this issue on 18.03.1-ce-mac65 (24312) OS X El Capitan 10.11.6 (15G21013)
for me it is on Ubuntu 16.04.4 LTS
I've been getting this off and on since January of this year. But not on Mac, on Ubuntu. First on 16.04, then on 17.04, 17.10, and now on 18.04, and my verbose output hangs at the exact same spot as @su-narthur's, during container inspection.
docker-compose --verbose run playbook -i inventory/global/ playbooks/dev-jasper-reports.yml
compose.config.config.find: Using configuration files: ./docker-compose.yml
docker.utils.config.find_config_file: Trying paths: ['/home/crouth/.docker/config.json', '/home/crouth/.dockercfg']
docker.utils.config.find_config_file: Found file at path: /home/crouth/.docker/config.json
docker.auth.load_config: Found 'auths' section
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.24/version HTTP/1.1" 200 543
compose.cli.command.get_client: docker-compose version 1.21.2, build a133471
docker-py version: 3.3.0
CPython version: 3.6.5
OpenSSL version: OpenSSL 1.0.1t 3 May 2016
compose.cli.command.get_client: Docker base_url: http+docker://localhost
compose.cli.command.get_client: Docker version: Platform={'Name': ''}, Components=[{'Name': 'Engine', 'Version': '18.06.0-ce', 'Details': {'ApiVersion': '1.38', 'Arch': 'amd64', 'BuildTime': '2018-07-18T19:07:56.000000000+00:00', 'Experimental': 'false', 'GitCommit': '0ffa825', 'GoVersion': 'go1.10.3', 'KernelVersion': '4.15.0-30-generic', 'MinAPIVersion': '1.12', 'Os': 'linux'}}], Version=18.06.0-ce, ApiVersion=1.38, MinAPIVersion=1.12, GitCommit=0ffa825, GoVersion=go1.10.3, Os=linux, Arch=amd64, KernelVersion=4.15.0-30-generic, BuildTime=2018-07-18T19:07:56.000000000+00:00
compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('jasperreportsservers_default')
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.24/networks/jasperreportsservers_default HTTP/1.1" 404 61
compose.cli.verbose_proxy.proxy_callable: docker inspect_network <- ('jasperreports-servers_default')
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.24/networks/jasperreports-servers_default HTTP/1.1" 200 862
compose.cli.verbose_proxy.proxy_callable: docker inspect_network -> {'Attachable': True,
'ConfigFrom': {'Network': ''},
'ConfigOnly': False,
'Containers': {'35ef0763b3a38a776409ef8d426f9fd1eb71549cb13c4166c638d33b7c77bd34': {'EndpointID': '46aaa4306520341b9d1f3f238ceebca2d1aa9f9b00bf9c2b221fa6313439ebc7',
'IPv4Address': '172.18.0.2/16',
'IPv6Address': '',
'MacAddress': '02:42:ac:12:00:02',
'Name': 'jasperreports-servers_playbook_run_112'}},
'Created': '2018-06-27T13:27:42.910819198-07:00',
'Driver': 'bridge',
...
compose.cli.verbose_proxy.proxy_callable: docker inspect_image <- ('dockerhub.alea.ca/docker/ansible')
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.24/images/dockerhub.alea.ca/docker/ansible/json HTTP/1.1" 200 None
compose.cli.verbose_proxy.proxy_callable: docker inspect_image -> {'Architecture': 'amd64',
'Author': '',
'Comment': '',
'Config': {'ArgsEscaped': True,
'AttachStderr': False,
'AttachStdin': False,
'AttachStdout': False,
'Cmd': None,
'Domainname': '',
'Entrypoint': ['ansible-playbook'],
...
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=jasperreports-servers', 'com.docker.compose.service=playbook', 'com.docker.compose.oneoff=True']})
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.24/containers/json?limit=-1&all=1&size=0&trunc_cmd=0&filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Djasperreports-servers%22%2C+%22com.docker.compose.service%3Dplaybook%22%2C+%22com.docker.compose.oneoff%3DTrue%22%5D%7D HTTP/1.1" 200 None
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 112 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('35ef0763b3a38a776409ef8d426f9fd1eb71549cb13c4166c638d33b7c77bd34')
ERROR: compose.cli.errors.log_timeout_error: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
Yep, seeing this exact problem on docker for mac 18.06.1-ce, same point in the verbose compose log as @Routhinator's log, and I can reproduce the timeout by just running:
docker inspect 3b38be021afd3c4e66aeb1994b9de459779b5b79d2cf8c37c7e4631a9c105ffc
Verbose compose log:
...
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('d68bea638d936cfcc7b5650c75a9084528c77b7bdc039506205a0c506e69cd48')
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.25/containers/d68bea638d936cfcc7b5650c75a9084528c77b7bdc039506205a0c506e69cd48/json HTTP/1.1" 200 None
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> {'AppArmorProfile': '',
'Args': ['chamber', 'exec', 'db-mysql-root', '--', '/start_mysqld.sh'],
'Config': {'ArgsEscaped': True,
'AttachStderr': False,
'AttachStdin': False,
'AttachStdout': False,
'Cmd': ['chamber',
'exec',
'db-mysql-root',
'--',
...
compose.cli.verbose_proxy.proxy_callable: docker containers <- (all=True, filters={'label': ['com.docker.compose.project=dev-environment', 'com.docker.compose.service=institution', 'com.docker.compose.oneoff=False']})
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.25/containers/json?limit=-1&all=1&size=0&trunc_cmd=0&filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Ddev-environment%22%2C+%22com.docker.compose.service%3Dinstitution%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D HTTP/1.1" 200 1422
compose.cli.verbose_proxy.proxy_callable: docker containers -> (list with 1 items)
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('3b38be021afd3c4e66aeb1994b9de459779b5b79d2cf8c37c7e4631a9c105ffc')
ERROR: compose.cli.errors.log_timeout_error: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
I found that, for 17.12.0, 17.12.1, and 18.06.0, it hangs on inspecting a container in the FROZEN state.

To show whether a container is FROZEN:

cat /sys/fs/cgroup/freezer/docker/*/freezer.state

It never hangs for 1.12.6.
We used cgroups to set the FROZEN/THAWED state for a container directly, instead of docker pause/unpause, for better performance. When inspecting a FROZEN container, docker inspect randomly hangs on 18.06 CE; there was no such problem on 1.12.x.
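Building on that observation, the freezer.state files can be scanned to spot which containers docker inspect is likely to hang on. A sketch, assuming the usual cgroup-v1 freezer layout; the cgroup root is a parameter so the function can be pointed elsewhere:

```shell
# Print the IDs of container cgroups whose freezer state is FROZEN,
# i.e. the containers this report says docker inspect hangs on.
frozen_containers() {
  # Default to the usual cgroup-v1 freezer path for Docker containers.
  root="${1:-/sys/fs/cgroup/freezer/docker}"
  for f in "$root"/*/freezer.state; do
    [ -e "$f" ] || continue           # no containers, or no freezer cgroup
    if [ "$(cat "$f")" = "FROZEN" ]; then
      basename "$(dirname "$f")"      # the directory name is the container ID
    fi
  done
}

frozen_containers                     # lists frozen container IDs, if any
```

On an affected host, thawing the reported IDs (echo THAWED into the corresponding freezer.state, or docker unpause if the container was paused through Docker) should make inspect respond again, per the observation above.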
+1 on this issue. Restarting my MacBook and thus the docker processes resolved the problem.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with a /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30d of inactivity.
Prevent issues from auto-closing with a /lifecycle frozen comment.
If this issue is safe to close now please do so.
Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows. /lifecycle stale
/remove-lifecycle stale
Just hit this on Docker version 18.09.1, build 4c52b90, on a MacBook Pro running Mojave, two running containers.
/remove-lifecycle stale
Had this issue as well. Docker version 18.09.5, OS Ubuntu 18.04.2. This happened to a container that was in an unhealthy state and had probably restarted a few (or many?) times (can't debug this properly, as inspect doesn't work).
Seeing this issue as well:
Mojave (10.14.5)
Docker version 18.09.2, build 6247962
docker-compose version 1.24.0, build 0aa5906
docker-machine version 0.16.1, build cce350d7
For me it's docker network inspect <compose-network> that causes the freeze.
I have to restart docker in order to interact with any docker things after the freeze.
/remove-lifecycle stale
Same issue for me:
❯ docker --version
Docker version 19.03.5, build 633a0ea
❯ docker-compose --version
docker-compose version 1.25.2, build 698e2846
Docker inspect hangs forever:
❯ docker inspect 1178c69659c0072248d50a6eb7b2f7f0c82247c8d04563ae07f8b132ac3ff926
Diagnostics ID: 93F181C5-DADB-4643-A474-B9B23618AC12/20200121222521
Same here. Running 19.03.5, docker inspect and docker network inspect hang forever
If Docker mounts CephFS and CephFS goes down, it causes docker inspect to hang forever.
Closed issues are locked after 30 days of inactivity. This helps our team focus on active issues.
If you have found a problem that seems similar to this, please open a new issue.
Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows. /lifecycle locked
Expected behavior

docker inspect container_name should print container details.

Actual behavior

docker inspect container_name hangs forever. Nothing happens.

Information

Steps to reproduce the behavior