jo-tools opened this issue 1 month ago
This might be related to issue https://github.com/docker/for-mac/issues/7324.
We have a similar issue: host.docker.internal now resolves to an IPv6 address.
Confirm with, for example: docker run --rm alpine:latest getent hosts host.docker.internal
Interesting... ping uses the IPv4 address, and it can't ping the IPv6 address:
ping: connect: Network is unreachable
# getent hosts host.docker.internal
fdc4:f303:9324::254 host.docker.internal
# nslookup -query=AAAA host.docker.internal
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: host.docker.internal
Address: fdc4:f303:9324::254
# nslookup -query=A host.docker.internal
Server: 127.0.0.11
Address: 127.0.0.11#53
Non-authoritative answer:
Name: host.docker.internal
Address: 192.168.65.254
# ping -c 2 host.docker.internal
PING host.docker.internal (192.168.65.254) 56(84) bytes of data.
64 bytes from 192.168.65.254 (192.168.65.254): icmp_seq=1 ttl=63 time=0.285 ms
64 bytes from 192.168.65.254 (192.168.65.254): icmp_seq=2 ttl=63 time=0.702 ms
--- host.docker.internal ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.285/0.493/0.702/0.208 ms
# ping -c 2 192.168.65.254
PING 192.168.65.254 (192.168.65.254) 56(84) bytes of data.
64 bytes from 192.168.65.254: icmp_seq=1 ttl=63 time=0.304 ms
64 bytes from 192.168.65.254: icmp_seq=2 ttl=63 time=0.345 ms
--- 192.168.65.254 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1043ms
rtt min/avg/max/mdev = 0.304/0.324/0.345/0.020 ms
# ping -c 2 fdc4:f303:9324::254
ping: connect: Network is unreachable
# ping -c 2 host.docker.internal -6
ping: connect: Network is unreachable
# ping -c 2 ip6-localhost
PING ip6-localhost(localhost (::1)) 56 data bytes
64 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.025 ms
64 bytes from localhost (::1): icmp_seq=2 ttl=64 time=0.041 ms
--- ip6-localhost ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1046ms
rtt min/avg/max/mdev = 0.025/0.033/0.041/0.008 ms
So it seems the issue is that with Docker v4.31.0 host.docker.internal resolves to an IPv6 address, which is in an unreachable network...
...so any service that prefers the IPv6 address won't work any longer (unless it falls back to the working IPv4 address).
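The resolution behavior described above can be checked from inside a container with a short Python sketch (a hypothetical helper, not part of the original post). getaddrinfo typically lists the IPv6 entry first, which is why clients that simply take the first result break:

```python
import socket

def resolved_addresses(host, port=0):
    """Return the distinct IP addresses `host` resolves to, in the
    order getaddrinfo hands them to a client (IPv6 often first)."""
    seen = []
    for *_, sockaddr in socket.getaddrinfo(host, port, type=socket.SOCK_STREAM):
        ip = sockaddr[0]
        if ip not in seen:
            seen.append(ip)
    return seen

# Inside an affected container this is expected to show both the
# fdc4:... IPv6 address and 192.168.65.254:
# print(resolved_addresses("host.docker.internal"))
```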
Thank you for documenting this issue. Several on my team have encountered it when upgrading.
My team is also stuck on 4.30 because of it.
Note: The Docker image used in the original post (which led to the discovery of this issue) has been updated with a fix for this issue.
If someone wants to reproduce using the originally posted steps, then the mentioned docker-compose.yml needs to be changed to use the previous/affected Docker image:
Reproduce
- Use this docker-compose.yml: https://github.com/jo-tools/docker/blob/main/local-cubesql-volumes/docker-compose.yml
- Edit docker-compose.yml and change
  from: image: jotools/cubesql-webadmin ('latest' has been updated with a fix for this issue)
  to: image: jotools/cubesql-webadmin:1.0.0 (the affected image, which works with Docker < v4.31.0 but fails to connect with Docker v4.31.0)
- docker-compose up -d
- Open the Web Admin Tool in the browser: http://localhost:4431
- Push the Connect button
Anyway - I don't think it's necessary to use that docker compose setup to reproduce this issue.
The other replies show in more detail what the underlying issue is.
Similar issue here. An nginx container resolves host.docker.internal
to an IPv6 address and then cannot reach it.
@jo-tools what did you change in the cubesql-webadmin image to make it work on 4.31 ?
I've fixed the client connector (*).
If the hostname resolves to both IPv4 and IPv6 addresses, it now tries both and uses the first successful connection (which in Docker v4.31 will be the IPv4 address, since the IPv6 address can't be reached).
The bug in the client connector was that it previously aborted on any error (e.g. the unreachable IPv6 address) instead of trying the other resolved address. The good thing about this (now fixed/improved) client connector bug is that it led to this Docker issue being discovered ;)
Edit: (*) this means: I fixed the service running inside the Docker container to cope with hostnames resolving to both IPv4 and IPv6 addresses. Should one of the two not work, it falls back and uses the other one.
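A minimal sketch of such a fallback connector in Python (an assumed illustration, not the actual cubesql-webadmin code): try every address the name resolves to, in resolver order, and keep the first one that connects.

```python
import socket

def connect_with_fallback(host, port, timeout=3.0):
    """Try every address `host` resolves to (IPv6 and IPv4) and
    return the first socket that connects; raise if all fail."""
    last_err = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        sock.settimeout(timeout)
        try:
            sock.connect(sockaddr)
            sock.settimeout(None)
            return sock  # first reachable address wins
        except OSError as err:
            last_err = err  # e.g. "Network is unreachable" for the IPv6 entry
            sock.close()
    raise OSError(f"could not connect to {host}:{port}") from last_err
```

With this pattern the unreachable IPv6 address just costs one failed connect attempt before the working IPv4 address is used.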
I may have missed it, but is there a config or something to work around this? We just hit it with our local Envoy containers refusing to connect to other services running in Compose. Downgrade seems to be the only straightforward option, but if there's another way, that would be great.
Making our app server and dependencies work with IPv6 would take a decent amount of work, given that the work would all be container-specific.
(I'm seeing this on Windows Docker Desktop 4.32 as well.)
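One possible workaround (a sketch, assuming the host gateway keeps the IPv4 address 192.168.65.254 shown in the getent output above; the service name and image here are illustrative): pin host.docker.internal to the IPv4 address via extra_hosts, since /etc/hosts entries typically take precedence over Docker's embedded DNS.

```yaml
services:
  webadmin:
    image: jotools/cubesql-webadmin:1.0.0
    extra_hosts:
      # Pin the name to the IPv4 gateway address so clients inside
      # this container never see the unreachable IPv6 entry.
      - "host.docker.internal:192.168.65.254"
```

Verify the gateway IP on your setup first (e.g. with getent hosts host.docker.internal), as it may differ between Docker Desktop versions and platforms.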
Description
I'm running the following Docker Compose setup:
https://github.com/jo-tools/docker/blob/main/local-cubesql-volumes/docker-compose.yml
This runs a database server (cubeSQL) and a web administration tool. The setup is preconfigured, so that one can just click 'Connect' in the Web Admin to successfully connect:
Host: host.docker.internal
Port: 4430
No issues with Docker v4.30.0 and earlier.
However, with Docker v4.31.0 the connection can't be established any longer.
Interestingly, changing the host from host.docker.internal to the effective host's IP (e.g. 192.168.1.x) works to establish a connection. So something has changed in v4.31.0 which causes issues with network connections within/between containers.
Reproduce
- Use this docker-compose.yml: https://github.com/jo-tools/docker/blob/main/local-cubesql-volumes/docker-compose.yml
- docker-compose up -d
- Open the Web Admin Tool in the browser: http://localhost:4431
- Push the Connect button
Expected behavior
Connection to the Database Server via host.docker.internal on port 4430 can be established.
Actual Behavior:
- Connection can't be established via host.docker.internal:4430
- Connection can't be established via the compose service name (cubesql:4430), which should also work since that's the hostname of the network that the two containers are both part of.
- Connection works when using the effective host's IP (e.g. 192.168.1.x)
)docker version
docker info
Diagnostics ID
D8B99358-A952-49AC-A00D-0CC40DA51EB0/20240616195445
Additional Info