krykra7 opened 7 months ago
I'm still trying to figure out why this is happening. One difference I found between 4.22.1 and 4.28.0 is in the network interfaces. It may not influence this particular issue, but I'm still curious why the additional interfaces are present.
Result of `ip addr show` in the VPN container:
4.22.1:

```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1000
    link/tunnel6 :: brd :: permaddr ca4a:95e:aa5f::
4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1300 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet 172.30.1.131/32 scope global tun0
       valid_lft forever preferred_lft forever
13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
       valid_lft forever preferred_lft forever
```
4.28.0:

```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
6: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
7: ip6_vti0@NONE: <NOARP> mtu 1428 qdisc noop state DOWN group default qlen 1000
    link/tunnel6 :: brd :: permaddr c642:a11e:9e62::
8: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
9: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN group default qlen 1000
    link/tunnel6 :: brd :: permaddr 7a8a:60b7:8da3::
10: ip6gre0@NONE: <NOARP> mtu 1448 qdisc noop state DOWN group default qlen 1000
    link/gre6 :: brd :: permaddr 2a1e:d3d4:6233::
11: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1300 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet 172.30.1.131/32 scope global tun0
       valid_lft forever preferred_lft forever
25: eth0@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
       valid_lft forever preferred_lft forever
```
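To compare the two captures systematically, the interface names can be extracted and diffed. Below is a minimal bash sketch; the inline samples are trimmed copies of the outputs above, and the helper name `ifnames` is made up:

```shell
#!/usr/bin/env bash
# Sketch: list interfaces that appear in the 4.28.0 capture but not in 4.22.1.
# The inline samples are trimmed from the `ip addr show` outputs above.
cat > addr-4.22.1.txt <<'EOF'
1: lo: <LOOPBACK,UP,LOWER_UP>
2: tunl0@NONE: <NOARP>
3: ip6tnl0@NONE: <NOARP>
4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP>
13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP>
EOF
cat > addr-4.28.0.txt <<'EOF'
1: lo: <LOOPBACK,UP,LOWER_UP>
2: tunl0@NONE: <NOARP>
3: gre0@NONE: <NOARP>
4: gretap0@NONE: <BROADCAST,MULTICAST>
5: erspan0@NONE: <BROADCAST,MULTICAST>
6: ip_vti0@NONE: <NOARP>
7: ip6_vti0@NONE: <NOARP>
8: sit0@NONE: <NOARP>
9: ip6tnl0@NONE: <NOARP>
10: ip6gre0@NONE: <NOARP>
11: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP>
25: eth0@if26: <BROADCAST,MULTICAST,UP,LOWER_UP>
EOF
# Interface lines look like "N: name[@peer]: <FLAGS> ..."; keep just the name.
ifnames() {
  awk -F': ' '/^[0-9]+: /{split($2, a, "@"); print a[1]}' "$1" | sort
}
# Names present only in the 4.28.0 capture:
comm -13 <(ifnames addr-4.22.1.txt) <(ifnames addr-4.28.0.txt)
```

Running this shows the extra devices are exactly the kernel tunnel fallback interfaces (gre0, gretap0, erspan0, ip_vti0, ip6_vti0, sit0, ip6gre0).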
Additionally, I was trying to prepare a full test and tried with OpenVPN, which actually works as expected with the same configuration in both versions.
One more quick update: with the newest Docker Engine (Docker version 25.0.5, build 5dc9bcc) on Linux, exactly the same configuration works as expected, so the issue is specific to Docker Desktop for Mac versions higher than 4.22.1.
Description
The problem started with Docker Desktop version 4.23 and is present up to the current version; until version 4.22.1 everything worked as expected.
I have two containers: one uses an Ubuntu base image with openconnect to serve as a VPN, and the second serves a proxy.pac file for proxy configuration.
Sadly, I don't have any test instance of a VPN to demonstrate this live, but here are example Dockerfiles that illustrate the situation:
Example Dockerfile for proxy-config

Example Dockerfile for test-vpn
start.sh starts the connection to the VPN in the background. I observed that port 8119 correctly serves files before the openconnect connection is established, but afterwards it is no longer accessible. I checked the changes in docker engine and compose and the changelogs from version 4.22.1 -> 4.23 and couldn't find any significant changes to networking that could explain this.

Reproduce
In version 4.23+:

```
docker compose up -d
curl localhost:8119
curl localhost:8119
curl: (56) Recv failure: Connection reset by peer
```
In version 4.22.1:

```
docker compose up -d
curl localhost:8119
curl localhost:8119
```
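To pinpoint exactly when the port stops answering relative to the VPN coming up, a small polling loop can be run on the host while `docker compose up -d` runs in another terminal. This is a sketch only; the `probe` and `poll` helper names are made up, and the URL assumes the pac file is reachable at the root of port 8119 as in the repro above:

```shell
#!/usr/bin/env bash
# Sketch: poll the exposed proxy port once a second and log when it flips
# from reachable to unreachable (i.e. when openconnect finishes connecting).
probe() {
  # -f: treat HTTP errors as failure; -m 2: cap a hung connection at 2 seconds
  curl -fsS -m 2 -o /dev/null "http://localhost:8119"
}
poll() {
  local n=$1
  for _ in $(seq "$n"); do
    if probe; then
      echo "$(date '+%T') reachable"
    else
      echo "$(date '+%T') UNREACHABLE"
    fi
    sleep 1
  done
}
# Example: watch for two minutes while the compose stack starts elsewhere.
# poll 120
```

Correlating the first UNREACHABLE timestamp with the openconnect logs in the vpn container should confirm whether the tunnel coming up is what breaks the forwarded port.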
Expected behavior
All exposed ports should be accessible from the host machine; based on the changelogs, the behaviour should be the same in versions 4.22.1 and 4.23+.
docker version
docker info
Diagnostics ID
Nothing. Message me if needed
Additional Info
No response