Closed by secDre4mer 6 months ago
cc @sbrivio-rh @dgibson
@secDre4mer, thanks for reporting this. I didn't imagine this would be a common use case without Wireguard, and especially that this setup would actually work with Podman and slirp4netns, so I never really thought it would be a priority.
See also https://bugs.passt.top/show_bug.cgi?id=49#c4 -- in your case, I think just the first of those three points applies, and the fix should be relatively simple.
To confirm that: does passing -M / --mac-addr to pasta work around the issue for you? That is, for example: podman run --net=pasta:-M,00:00:5e:00:01:01 ...
Thanks for the quick answer! Results when using --net=pasta:-M,00:00:5e:00:01:01 are better, but still not fully functional: container creation works, but the resulting container does not have a working network connection (trying to connect to any host, inside or outside the VPN, results in a timeout).
> Thanks for the quick answer! Results when using --net=pasta:-M,00:00:5e:00:01:01 are better, but still not fully functional: container creation works,

Thanks for checking!

> the resulting container does not have a working network connection (trying to connect to any host, inside or outside the VPN, results in a timeout)
I can't reproduce this once I explicitly pass a MAC address -- that is, connectivity works for me then. Could you share some details about addresses and routes (ip address show, ip route show, ip -6 route show) outside and inside the container?
You can rewrite addresses to documentation/example addresses or even to arbitrary (but consistent) strings, for privacy.
Does UDP (e.g. name resolution) work, by the way?
Sure, here you go:
Outside the container:
$ ip address show
3: wlp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether <wlan-mac> brd ff:ff:ff:ff:ff:ff
inet <wlan-ip>/24 brd <wlan-broadcast> scope global dynamic noprefixroute wlp5s0
valid_lft 79006sec preferred_lft 79006sec
inet6 <wlan-ip6>/64 scope link noprefixroute
valid_lft forever preferred_lft forever
4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 500
link/none
inet <tun-ip> peer <tun-peer-ip>/32 scope global noprefixroute tun0
valid_lft forever preferred_lft forever
inet6 <tun-ip6>/64 scope link stable-privacy proto kernel_ll
valid_lft forever preferred_lft forever
$ ip route show
default via <tun-peer-ip> dev tun0 proto static metric 50
default via <wlan-route> dev wlp5s0 proto dhcp src <wlan-ip> metric 600
<tun-net>.1 via <tun-peer-ip> dev tun0 proto static metric 50
<tun-peer-ip> dev tun0 proto kernel scope link src <tun-ip> metric 50
<vpn-public-ip> via <wlan-route> dev wlp5s0 proto static metric 50
<wlan-net> dev wlp5s0 proto kernel scope link src <wlan-ip> metric 600
<wlan-route> dev wlp5s0 proto static scope link metric 50
$ ip -6 route show
fe80::/64 dev tun0 proto kernel metric 256 pref medium
fe80::/64 dev wlp5s0 proto kernel metric 1024 pref medium
Inside the container:
$ ip address show
2: tun0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc fq_codel state UNKNOWN group default qlen 1000
link/ether <tun-mac-address> brd ff:ff:ff:ff:ff:ff
inet <tun-ip> peer <tun-peer-ip>/32 scope global noprefixroute tun0
valid_lft forever preferred_lft forever
inet6 <container-tun-ip6>/64 scope link
valid_lft forever preferred_lft forever
$ ip route show
default via <tun-peer-ip> dev tun0 proto static metric 50
<tun-net>.1 via <tun-peer-ip> dev tun0 proto static metric 50
<tun-peer-ip> dev tun0 proto kernel scope link metric 50
$ ip -6 route show
fe80::/64 dev tun0 proto kernel metric 256 pref medium
(I removed the loopback devices and the LAN device that was down anyway)
UDP does not work either (tested with nslookup and with ncat -u).
Oh, I think this might explain it:

> Outside the container:
> $ ip address show
> [...]
> 4: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 500
>     link/none
>     inet <tun-ip> peer <tun-peer-ip>/32 scope global noprefixroute tun0
>        valid_lft forever preferred_lft forever
Your OpenVPN client configures a point-to-point topology, whereas all the OpenVPN setups I have at hand operate in subnet mode, for example:
$ ip -4 address show
[...]
64: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 500
inet 10.13.31.8/24 brd 10.13.31.255 scope global noprefixroute tun0
valid_lft forever preferred_lft forever
There's no "peer" IP address, I simply have a /24 subnet. This peer IP address, with pasta, is then copied as-is into the target network namespace (container), but the tun0 you have inside the namespace is fairly different from the actual OpenVPN endpoint, and I doubt that will work.
I don't have the chance of trying the same OpenVPN mode right now, so it would help me debug this if you could try a couple of things:
- passing --no-copy-addrs to pasta (--net=pasta:-M,...,--no-copy-addrs), so that the address is not copied exactly as it's found on the host, but simply configured with the same subnet (there should be no peer IP address in the container)
- passing --topology subnet to your OpenVPN client. That needs matching support on the server (OpenVPN 2.1 or later). At that point you should have my same type of address configuration
- passing -p / --pcap to pasta to capture some traffic namespace-side (a name resolution or a failed TCP connection is enough), i.e. --net=pasta,...,-p,/tmp/pasta_openvpn.pcap, and sharing e.g. a tshark-style dump, even better if correlated with a matching capture on the host side (tcpdump, tshark or suchlike).

A kind of OT question: why does pasta require a MAC address? Processing layers below IP seems unnecessary if I just want my container connected to a network.
Background info: my default route is pointed at a tun device with POINTOPOINT set. This is mandatory.
> ip addr
// ...
5: tun3: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 500
link/none
inet <redacted>/24 scope global tun0
valid_lft forever preferred_lft forever
inet6 <redacted>/64 scope link stable-privacy proto kernel_ll
valid_lft forever preferred_lft forever
> A kind of OT question: why does pasta require a MAC address?
For no particular reason other than the fact that the original implementation connecting virtual machines, passt(1), only operates with guest-side Layer-2 interfaces, and that also with containers you usually have a tap interface, not a tun one.
> Processing layers below IP seems unnecessary if I just want my container connected to a network.
Right, see https://bugs.passt.top/show_bug.cgi?id=49#c5 -- support for container-side tun interfaces is something we have to implement. It's simpler than a Layer-2 interface, but it takes a few adjustments especially in configuration code.
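For context on the tap-vs-tun distinction discussed here: a Layer-2 (tap) interface carries Ethernet frames (and hence has a MAC address), while a Layer-3 (tun) interface carries bare IP packets. The sketch below is only an illustration using the well-known TUNSETIFF flag constants from <linux/if_tun.h>, not pasta code; since actually opening /dev/net/tun requires CAP_NET_ADMIN, it only builds the ioctl request structures instead of issuing them:

```python
import struct

# Constants from <linux/if_tun.h>
IFF_TUN = 0x0001    # Layer-3: raw IP packets, no Ethernet header, no MAC
IFF_TAP = 0x0002    # Layer-2: full Ethernet frames, interface has a MAC
IFF_NO_PI = 0x1000  # don't prepend the packet-information header

def ifreq(name: str, flags: int) -> bytes:
    """Build the struct ifreq that would be passed to ioctl(fd, TUNSETIFF, ...)."""
    return struct.pack("16sH", name.encode(), flags)

# A tun device (what OpenVPN uses): IP-level, hence no MAC address.
tun_req = ifreq("tun0", IFF_TUN | IFF_NO_PI)
# A tap device (what passt/pasta traditionally expect): Ethernet-level.
tap_req = ifreq("tap0", IFF_TAP | IFF_NO_PI)

# The two modes differ only in a single flag at device-creation time.
print(tun_req != tap_req)
```

The point being that the kernel-side difference between the two device types is just this one flag; the extra work for pasta is in its own configuration and frame-handling code, not in creating the device.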
> passing --no-copy-addrs to pasta (--net=pasta:-M,...,--no-copy-addrs), so that the address is not copied exactly as it's found on the host, but simply configured with the same subnet (there should be no peer IP address in the container)
--no-copy-addrs works, the resulting configuration looks like:
tun0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc fq_codel state UNKNOWN group default qlen 1000
link/ether <tun-mac-address> brd ff:ff:ff:ff:ff:ff
inet <tun-ip>/32 scope global tun0
valid_lft forever preferred_lft forever
inet6 <container-tun-ip6>/64 scope link
valid_lft forever preferred_lft forever
Network connections from this container work correctly (both TCP and UDP).
> --no-copy-addrs works, the resulting configuration looks like: [...]
> Network connections from this container work correctly (both TCP and UDP).
Great, thanks for testing! And how do the routes in the container (ip route show, ip -6 route show) look in this case?
Routes are as follows:
$ ip route show
default via <tun-peer-ip> dev tun0 proto static metric 50
<tun-net>.1 via <tun-peer-ip> dev tun0 proto static metric 50
<tun-peer-ip> dev tun0 proto kernel scope link metric 50
$ ip -6 route show
fe80::/64 dev tun0 proto kernel metric 256 pref medium
I posted two patches for pasta to address this issue, now pending review:
https://archives.passt.top/passt-dev/20240411221800.548166-1-sbrivio@redhat.com/
https://archives.passt.top/passt-dev/20240411221800.548178-1-sbrivio@redhat.com/
A test would be appreciated, even though I tested on a setup that reasonably resembles the one described here.
pasta with the patch doesn't work yet (network connectivity is broken); however, when comparing the IPs/routes to the setup with --no-copy-addrs, I noticed that's my own fault. I mangled one anonymization for ip address show in the resulting container, writing <tun-ip> when it should have been <tun-peer-ip>. The setup with --no-copy-addrs is:
tun0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc fq_codel state UNKNOWN group default qlen 1000
link/ether <tun-mac-address> brd ff:ff:ff:ff:ff:ff
inet <tun-peer-ip>/32 scope global tun0
valid_lft forever preferred_lft forever
inet6 <container-tun-ip6>/64 scope link
valid_lft forever preferred_lft forever
I'm very sorry for that miss and the extra effort I've caused, @sbrivio-rh .
> I'm very sorry for that miss and the extra effort I've caused, @sbrivio-rh .
Ah, never mind, thanks a lot for double checking.
> pasta with the patch doesn't work yet (network connectivity is broken); however, when comparing the IPs/routes to the setup with --no-copy-addrs, I noticed that's my own fault. I mangled one anonymization for ip address show in the resulting container, writing <tun-ip> when it should have been <tun-peer-ip>. The setup with --no-copy-addrs is: [...]
That's because we look for IFA_ADDRESS in the implementation where we obtain the address to configure a single one with --no-copy-addrs (as opposed to copying all of them), and I think that's actually wrong. We should be using IFA_LOCAL instead -- see https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/uapi/linux/if_addr.h?id=586b5dfb51b962c1b6c06495715e4c4f76a7fc5a#n16. This only matters for point-to-point links (IFA_LOCAL and IFA_ADDRESS are the same otherwise), so we never noticed.
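To illustrate the IFA_LOCAL / IFA_ADDRESS distinction being discussed (a simplified model with placeholder addresses, not pasta's actual netlink code): on a point-to-point link, IFA_LOCAL is the interface's own address and IFA_ADDRESS is the peer's, while on ordinary broadcast links the kernel sets both to the same value, which is why the bug went unnoticed:

```python
def own_address(attrs: dict) -> str:
    """Pick the address that belongs to this host from netlink IFA_* attributes.

    IFA_LOCAL is always "our" address; IFA_ADDRESS is the peer on
    point-to-point links and identical to IFA_LOCAL otherwise, so
    reading IFA_ADDRESS only misbehaves on point-to-point interfaces.
    """
    return attrs.get("IFA_LOCAL", attrs.get("IFA_ADDRESS"))

# Broadcast interface (hypothetical addresses): both attributes match,
# so either choice works.
eth = {"IFA_LOCAL": "10.13.31.8", "IFA_ADDRESS": "10.13.31.8"}
# Point-to-point tun: IFA_ADDRESS is the *peer*, not us.
ptp = {"IFA_LOCAL": "10.8.0.6", "IFA_ADDRESS": "10.8.0.5"}

print(own_address(eth))  # 10.13.31.8
print(own_address(ptp))  # 10.8.0.6, not the peer 10.8.0.5
```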
But I don't think that's what's breaking connectivity here: as long as there's an address (not point-to-point) configured in the container, and a route that can work with it, things should work. It doesn't really matter if we copy that from the peer address or from the local address.
Can you have a look at the routes that are being created in the container with the patch, in comparison with --no-copy-addrs? Traffic captures might help, too, unless it becomes obvious from looking at routes.
Routes are just like with --no-copy-addrs:
default via <tun-peer-ip> dev tun0 proto static metric 50
<tun-net>.1 via <tun-peer-ip> dev tun0 proto static metric 50
<tun-peer-ip> dev tun0 proto kernel scope link metric 50
I think, however, that <tun-peer-ip> is not reachable (because tun0 only has <tun-ip>/32 as net), right? So the routes won't be usable.
> I think, however, that <tun-peer-ip> is not reachable (because tun0 only has <tun-ip>/32 as net), right? So the routes won't be usable.
Good catch, yes, I think you're right.
Well, assigning the peer address to the container would fix the issue, but it's quite ugly, not to mention incorrect: what if something in the container really relies on having the same address as the host (i.e. "no NAT")? I'm looking for a better solution, possibly a simple one. Maybe slightly tweaking routes just in this case, or something like that.
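The unreachable-gateway reasoning above can be sketched with Python's standard ipaddress module (hypothetical addresses standing in for <tun-ip> and <tun-peer-ip>): a next-hop is only directly usable if some prefix configured on the interface covers it, and a lone /32 covers nothing but itself:

```python
import ipaddress
from typing import List

def gateway_reachable(gateway: str, interface_prefixes: List[str]) -> bool:
    """True if the next-hop falls inside a prefix configured on the link."""
    gw = ipaddress.ip_address(gateway)
    return any(gw in ipaddress.ip_network(p, strict=False)
               for p in interface_prefixes)

# Container with only <tun-ip>/32 on tun0: the default route
# "via <tun-peer-ip>" has no on-link path to its next-hop...
print(gateway_reachable("10.8.0.5", ["10.8.0.6/32"]))   # False
# ...whereas with a subnet-topology /24, the peer is on-link and usable.
print(gateway_reachable("10.8.0.5", ["10.8.0.6/24"]))   # True
```

This is only a model of the reachability argument, not of the kernel's actual FIB lookup (which also honors explicit on-link host routes).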
I think I'm affected by a similar/related issue:
I'm using the ProtonVPN app to connect to VPNs using OpenVPN - it spins up two interfaces on the host:
11: ipv6leakintrf0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether ca:d8:30:49:de:71 brd ff:ff:ff:ff:ff:ff
inet6 fdeb:446c:912d:8da::/64 scope global noprefixroute
valid_lft forever preferred_lft forever
inet6 fe80::a5b9:1126:a0d2:f1e4/64 scope link noprefixroute
valid_lft forever preferred_lft forever
12: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 500
link/none
inet 10.96.0.2/16 brd 10.96.255.255 scope global noprefixroute tun0
valid_lft forever preferred_lft forever
inet6 fe80::d5f7:d76:6c63:5b14/64 scope link stable-privacy proto kernel_ll
valid_lft forever preferred_lft forever
After the upgrade from Podman v4 to v5, where pasta became the default, I encountered issues where my containers couldn't connect to the internet (noticed because DNS in the containers failed).
With podman unshare --rootless-netns ip addr I was able to figure out that the issue appears to be that ipv6leakintrf0 is selected by pasta, not tun0. So I researched how to change this:
podman run --network=pasta:--ipv4-only,--outbound-if4,tun0,--interface,tun0 busybox:latest sh
Error: pasta failed with exit code 1:
External interface not usable
with the suggested -M (mac-address) workaround:
podman run --network=pasta:--ipv4-only,--outbound-if4,tun0,--interface,tun0,-M,00:00:5e:00:01:01 busybox:latest sh
I get no error-message but:
Apr 23 14:37:49 bsPF1201 podman[224462]: 2024-04-23 14:37:49.07470019 +0200 CEST m=+0.107087004 container init b27ea7908b9602952f8317d549873d8848153ea4a6de757d26bfd8fda4921f10 (image=docker.io/library/busybox:latest, name=ecstatic_montalcini)
Apr 23 14:37:49 bsPF1201 podman[224462]: 2024-04-23 14:37:49.079252821 +0200 CEST m=+0.111639583 container start b27ea7908b9602952f8317d549873d8848153ea4a6de757d26bfd8fda4921f10 (image=docker.io/library/busybox:latest, name=ecstatic_montalcini)
Apr 23 14:37:49 bsPF1201 podman[224462]: 2024-04-23 14:37:49.082145732 +0200 CEST m=+0.114532493 container attach b27ea7908b9602952f8317d549873d8848153ea4a6de757d26bfd8fda4921f10 (image=docker.io/library/busybox:latest, name=ecstatic_montalcini)
Apr 23 14:37:49 bsPF1201 podman[224462]: 2024-04-23 14:37:48.983952484 +0200 CEST m=+0.016339219 image pull ba5dc23f65d4cc4a4535bce55cf9e63b068eb02946e3422d3587e8ce803b6aab busybox:latest
Apr 23 14:37:49 bsPF1201 podman[224462]: 2024-04-23 14:37:49.082396355 +0200 CEST m=+0.114783123 container died b27ea7908b9602952f8317d549873d8848153ea4a6de757d26bfd8fda4921f10 (image=docker.io/library/busybox:latest, name=ecstatic_montalcini)
Apr 23 14:37:49 bsPF1201 podman[224494]: 2024-04-23 14:37:49.126389643 +0200 CEST m=+0.035529412 container cleanup b27ea7908b9602952f8317d549873d8848153ea4a6de757d26bfd8fda4921f10 (image=docker.io/library/busybox:latest, name=ecstatic_montalcini)
Is there a way to get this working again?
Update sorry - my command for running with -M was somehow wrong:
podman run --network=pasta:--ipv4-only,--outbound-if4,tun0,--interface,tun0,-M,00:00:5e:00:01:01 --rm -it busybox busybox sh
works, and I verified by curling "ifconfig.co" that my IP address is the VPN exit IP. Will test if I encounter further issues ✌🏻
This is a bit different because:

> 12: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 500
>     link/none
>     inet 10.96.0.2/16 brd 10.96.255.255 scope global noprefixroute tun0
>        valid_lft forever preferred_lft forever
>     inet6 fe80::d5f7:d76:6c63:5b14/64 scope link stable-privacy proto kernel_ll
>        valid_lft forever preferred_lft forever

your tun0 interface isn't configured with a point-to-point address like in the original report: it's a subnet, so, once pasta picks it as upstream interface, minus the issue with the MAC address (I just applied the patch for that part upstream), things work, as you confirmed.
The role of ipv6leakintrf0 is rather clear to me; what I'm not sure about is why pasta would pick that as upstream interface instead of tun0. Could you also share the details of your IPv4 (ip route show) and IPv6 (ip -6 route show) routes on the host? Perhaps pasta could be fixed to pick interfaces more wisely.
Thanks for the fast response and context, really appreciate this 🙏🏻
So it's this and this one, correct? Then I would apply these and rebuild pasta locally. Very novice question - can I download only the patch somehow without copying text into files from there?
Sure: as upstream does not support IPv6 on all servers so far, they spin up this interface to avoid the user leaking traffic/their IP through the ISP's IPv6.
ip route show:
default via 10.96.0.1 dev tun0 proto static metric 50
default via 10.10.27.1 dev wlp166s0 proto dhcp src 10.10.27.195 metric 600
10.10.27.0/24 dev wlp166s0 proto kernel scope link src 10.10.27.195 metric 600
10.10.27.1 dev wlp166s0 proto static scope link metric 50
10.96.0.0/16 dev tun0 proto kernel scope link src 10.96.0.2 metric 50
185.xxx.xxx.xx via 10.10.27.1 dev wlp166s0 proto static metric 50
ip -6 route show:
2001:xxx:40d:xxx::/64 dev wlp166s0 proto ra metric 600 pref medium
2001:xxx:40d:xxx::/56 via fe80::a88e:f0ff:fe79:c65a dev wlp166s0 proto ra metric 600 pref medium
fd21:7ede:b152:1::/64 via fe80::b6e4:54ff:fef0:8dd1 dev wlp166s0 proto ra metric 600 pref medium
fd27:1:1:20::/64 dev wlp166s0 proto ra metric 600 pref medium
fd27:1:1::/48 via fe80::a88e:f0ff:fe79:c65a dev wlp166s0 proto ra metric 600 pref medium
fdeb:446c:912d:8da::/64 dev ipv6leakintrf0 proto kernel metric 95 pref medium
fe80::/64 dev tun0 proto kernel metric 256 pref medium
fe80::/64 dev wlp166s0 proto kernel metric 1024 pref medium
fe80::/64 dev ipv6leakintrf0 proto kernel metric 1024 pref medium
default via fdeb:446c:912d:8da::1 dev ipv6leakintrf0 proto static metric 95 pref medium
default via fe80::a88e:f0ff:fe79:c65a dev wlp166s0 proto ra metric 600 pref medium
Had to censor some public-ips hope that's still helpful otherwise let me know 🙏🏻
> So it's this

This one yes, but note that it's already merged (not in any release yet), so you can just use the current HEAD; you don't need to apply it manually.
> and this one, correct?

This one, it turns out, didn't solve the original issue anyway, and won't help with yours (because you don't have a point-to-point peer address configured), so you don't need it.
> Then I would apply these and rebuild pasta locally. Very novice question - can I download only the patch somehow without copying text into files from there?
So, well, you don't need to apply any patch from the list here, but if you wanted to do so:
$ git clone git://passt.top/passt
$ cd passt
$ curl https://archives.passt.top/passt-dev/20240411221800.548178-1-sbrivio@redhat.com/raw | git am
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 3835 0 3835 0 0 11597 0 --:--:-- --:--:-- --:--:-- 11621
Applying: netlink: Drop point-to-point peer information when we copy addresses
just look for the "mbox" or "raw" link on the archives (or simply add /raw manually).
> Sure, as upstream does not support ipv6 on all servers so far they spin up this interface to avoid the user leaking traffic/their ip through isp-v6.
Right, makes sense, thanks for confirming.
> ip route show: [...]
> ip -6 route show: [...]
> Had to censor some public-ips hope that's still helpful otherwise let me know 🙏🏻
No no, it's helpful. Looking at these routes, I think that pasta would actually pick tun0 for IPv4 (it can select different interfaces for different IP versions) with the patch to accept upstream interfaces without an own MAC address. Can you check if it's already solved on the current upstream HEAD and, if not, what addresses and routes you get inside the container?
Hey, thanks for the explanation - you are totally right latest master is enough ✌🏻
Fixed it for me:
without VPN: podman unshare --rootless-netns ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host proto kernel_lo
valid_lft forever preferred_lft forever
2: wlp166s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc fq_codel state UNKNOWN group default qlen 1000
link/ether da:b7:56:83:8a:c5 brd ff:ff:ff:ff:ff:ff
inet 10.10.27.195/24 brd 10.10.27.255 scope global noprefixroute wlp166s0
valid_lft forever preferred_lft forever
inet6 fd27:1:1:20:8ef8:c5ff:fe75:1877/64 scope global tentative mngtmpaddr noprefixroute
valid_lft forever preferred_lft forever
inet6 fd27:1:1:20:5952:f5f5:3e0:e260/64 scope global tentative
valid_lft forever preferred_lft forever
inet6 2001:a61:xxx:d20:xxxx:c5ff:fe75:xxxx/64 scope global tentative mngtmpaddr noprefixroute
valid_lft forever preferred_lft forever
inet6 2001:xxx:40d:xxx:120b:dc8f:6aa7:xxxx/64 scope global tentative
valid_lft forever preferred_lft forever
inet6 fe80::d8b7:56ff:fe83:8ac5/64 scope link tentative proto kernel_ll
valid_lft forever preferred_lft forever
with VPN: podman unshare --rootless-netns ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host proto kernel_lo
valid_lft forever preferred_lft forever
2: tun0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc fq_codel state UNKNOWN group default qlen 1000
link/ether 72:5e:30:38:de:43 brd ff:ff:ff:ff:ff:ff
inet 10.96.0.3/16 brd 10.96.255.255 scope global noprefixroute tun0
valid_lft forever preferred_lft forever
inet6 fdeb:446c:912d:8da::/64 scope global tentative noprefixroute
valid_lft forever preferred_lft forever
inet6 fe80::705e:30ff:fe38:de43/64 scope link tentative proto kernel_ll
valid_lft forever preferred_lft forever
podman run --rm -it busybox busybox sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tun0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc fq_codel qlen 1000
link/ether 46:90:f6:3a:97:48 brd ff:ff:ff:ff:ff:ff
inet 10.96.0.3/16 brd 10.96.255.255 scope global noprefixroute tun0
valid_lft forever preferred_lft forever
inet6 fdeb:446c:912d:8da::/64 scope global noprefixroute
valid_lft forever preferred_lft forever
inet6 fe80::4490:f6ff:fe3a:9748/64 scope link
valid_lft forever preferred_lft forever
/ # nslookup google.de
Server: 169.254.0.1
Address: 169.254.0.1:53
Non-authoritative answer:
Name: google.de
Address: 142.250.201.163
Non-authoritative answer:
Name: google.de
Address: 2a00:1450:4007:81a::2003
Thanks for the help 🙏🏻
> Hey, thanks for the explanation - you are totally right latest master is enough ✌🏻
> Fixed it for me:
> [...]
Thanks for following up and for all the information!
@secDre4mer, I think the issue here is a bit more subtle: it's actually fine to replicate the point-to-point configuration together with the peer address in the namespace, but we need to resolve the gateway/peer address (via ARP). And pasta currently doesn't do this because, internally, the notion of "our" IPv4 address was taken from the peer address (IFA_ADDRESS) instead of IFA_LOCAL. We don't want to resolve our own IPv4 address because some DHCP clients might rely on it not being resolved to conclude the address is in fact free, see the related fix.
I just posted a patch that hopefully fixes this issue for good. Would you have a chance to try it, on top of the current HEAD, and see if it solves your issue? Thanks.
I also get this error when running OpenVPN in a rootful container in its own Podman network. Rootless containers return: Error: pasta failed with exit code 1: External interface not usable
> I also get this error when running OpenVPN in a rootful container in its own Podman network. Rootless containers return: Error: pasta failed with exit code 1: External interface not usable
I didn't understand: does your issue happen both with root and rootless?
In any case: would you have a chance to try the patch I posted on top of the current HEAD and see if it fixes connectivity for you?
Just rootless: AirVPN runs in a rootful container, and rootful containers work fine, but rootless containers are broken.
This should now be fixed in passt version 2024_04_26.d03c4e2, Fedora 40 update available, Arch Linux package usually follows soon.
> This should now be fixed in passt version 2024_04_26.d03c4e2, Fedora 40 update available, Arch Linux package usually follows soon.
Submitted an update for NixOS upstream 🙏🏻
A workaround for anyone else waiting for the fix: connecting the container to the host network:
podman run --rm -it --network host your-image
Not great, but it suffices for my needs.
Can confirm this works with Archlinux. Thank you, @sbrivio-rh !
Issue Description
When starting a rootless podman container while connected to a VPN (via openvpn), Podman fails with:
Steps to reproduce the issue
Describe the results you received
podman run fails with:

Describe the results you expected
Container should still be able to start up.
podman info output
Podman in a container
No
Privileged Or Rootless
Rootless
Upstream Latest Release
Yes
Additional environment details
No response
Additional information
Based on a bit of debugging I did, this is due to the fact that pasta requires the outgoing interface to have a MAC address, which the tun0 interface used by the VPN does not.

Using slirp4netns (via --net slirp4netns) in this scenario works (can also be configured via default_rootless_network_cmd in containers.conf).
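For reference, the containers.conf fallback mentioned above looks like the following fragment; [network] and default_rootless_network_cmd are documented containers.conf settings, and a per-user file would typically live at ~/.config/containers/containers.conf:

```
[network]
# Use slirp4netns instead of pasta for rootless containers
default_rootless_network_cmd = "slirp4netns"
```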