aleks-mariusz closed this issue 2 years ago
@mccv1r0 Mind taking a look? This one seems very much like a CNI issue.
Looking... (meetings all a.m.) I just tested using fedora 30 and all this works. I run this setup 24x7.
An IPv6 client (`nc -6`) from a quarantine location has native IPv6 support. It connects via the public Internet to a podman container on the "podman" bridge running on a linode VM, which provides a /48 for all my podman containers. I use this all the time so I know it works in general.
Things breaking on the host interface shouldn't happen; CNI / Podman don't touch it. How good is IPv6 support in CentOS 7? Possibly a CentOS 7 issue.
at some point this happens (output of `ip mon`):
Deleted default via fe80::21f:caff:feb2:ea40 dev eth0 proto ra metric 1024 expires 1751sec hoplimit 64 pref medium
so something related to starting the container is triggering dropping the default route.. i don't think this can be attributed to the OS itself.. i'd say ipv6 support is pretty solid in CentOS 7.. it's been in the linux kernel since well before version 3.10, which CentOS 7.x standardized on (and which is heavily backported by RedHat with more modern kernel patches)..
i'm wide open on ideas how to diagnose what could be invoking this? it could be a simple case of my setup being broken, plain PEBKAC/user error..
@mccv1r0 that's pretty much the same setup i would like (except ultimately i'd like to do it as rootless).. would you mind sharing how your setup/configs differs from the default podman installation and what versions you're using?
Is that output from `ip mon` the container's eth0 or the host's eth0? I'm guessing the host (which isn't related to podman) for several reasons. You would have to be running `ip mon` inside the container(?) Not ruling it out, but doubtful. Correct me if I'm wrong.
Your output above is from the host:
fe80::21f:caff:feb2:ea40 dev eth0 lladdr 00:1f:ca:b2:ea:40 router REACHABLE
lladdr ba:04:ac:9b:88:1d PERMANENT
43: cni-podman0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default
    link/ether ba:04:ac:9b:88:1d brd ff:ff:ff:ff:ff:ff
43: cni-podman0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether ba:04:ac:9b:88:1d brd ff:ff:ff:ff:ff:ff
[deleted]
Another hint is `expires` in the message from `ip mon`. Unless you are running e.g. radvd on this host to advertise IPv6 prefixes to containers on the cni-podman0 bridge, the RA is from something upstream of the host's eth0. IPv6, by default, will try to auto-configure interfaces, including the host's. This is orthogonal to podman (though you may have turned IPv6 on because of using it with podman).
I'm curious, does eth0 on the host have an IPv6 Address?
For things related to podman:
can you ping the gateway of the `2a03:8c00:1a:8::/64` range? Try `ping6 2a03:8c00:1a:8::1`, which should be the IPv6 address of the cni-podman0 interface. That's pretty much all podman sets up. Let's focus on that first.
yes the ip mon output is from the host.. (i don't see how i could have run it inside the container if i'm setting the container up in the first place).
yes eth0 has several IPv6 addresses, link-local of course, as well as a routable GUA range i was assigned for ipv6 internet reachability..
and yes, i am giving the same range to the CNI in the 87-podman-bridge.conflist.. the container comes up with a GUA in that same range
after starting the container, yes the container can ping the host's cni-podman0 ipv6 address (ending in ::1) and the host can ping the cni generated IPv6 address of the container.. in that sense i guess the bridge config/veth attachment works as expected..
thing is, i'm just kind of taking a stab at using the host's GUA range, but even when i tried to give it a ULA range, host ipv6 networking still broke
what address space are you use in your setup? is the /48 also shared by the host like in my setup? is it a GUA /48 or a ULA /48? how is routing happening, is it just you enabling one of the forwarding sysctl's (and doing SNAT or NPT in the case of non-GUA addresses)
after starting the container, yes the container can ping the host's cni-podman0 ipv6 address (ending in ::1) and the host can ping the cni generated IPv6 address of the container.. in that sense i guess the bridge config/veth attachment works as expected..
So it looks like podman is FAD (functioning as designed). podman doesn't claim (@mheon correct me if I'm wrong) to automatically set up the host to allow external systems to reach containers. For IPv4, port forwarding is used. I don't know if/when something like port forwarding is/will be supported for IPv6. We shouldn't need it, but... well, I'll spare you my soapbox.
IMHO what you are trying to do is the "Right Thing" (tm).
what address space are you use in your setup? is the /48 also shared by the host like in my setup? is it a GUA /48 or a ULA /48? how is routing happening, is it just you enabling one of the forwarding sysctl's (and doing SNAT or NPT in the case of non-GUA addresses)
I never NAT IPv6 (unless forced to, e.g. k8s)
I do both, obviously not at the same time. Some providers give you a routable /48 or /56 for use with e.g. podman or docker, or to delegate to your own LAN. Others just give you a /64 and you need to subnet it yourself. ULA works too, but never outside your e.g. campus or cluster.
For any relevant host interfaces, i.e. eth0 (but there may be more), how are the IPv6 addresses obtained? e.g. where did `2a03:8c00:1a:8::/64` come from? You mentioned you were assigned a range, but it's not clear if that was a /64 or e.g. a /48.
What is the IPv6 address/prefix for eth0 (assuming eth0 is the relevant interface)? From the above, I don't see any IPv6 address on eth0.
Do any of the interfaces receive IPv6 addresses via DHCPv6? Is anything upstream sending RA's? Any of these will get the kernel doing "stuff" depending on how your host is setup (explicitly or defaults.)
If CNI doesn't claim to configure the host for external v6 reachability of containers, then Podman presently doesn't make any such claims. I'd have to verify against Docker to see if we should be aiming to do so.
this /64 was given to me by my provider (their router is on 2a03:8c00:1a:6::1), i can ping this fine and it looks like SLAAC is used to configure the 2a03:8c00:8::/64 address range that ended up on my eth0. i statically assigned those IPv6 settings in my /etc/sysconfig/network-scripts/ifcfg-eth0 file (along with IPv4 static address).
I'm not at the level of doing port forwarding, but in the IPv4 case, i can run a container and attach to it and instantly have internet connectivity (e.g. the 10.88.0.1 automatically routes to the rest of the internet, doing NAT).. However with IPv6, i can't even ping my eth0's IPv6 (slaac assigned) address.. and i have that forwarding sysctl for ipv6 set to 1..
it's as if the cni-podman0 bridge is not actually connected to the host's IPv6 network or something, since i can't ping anything outside of it.. i am not sure if that's possible if the bridge is supposed to be purely layer-2..
as far as the default route being dropped, that's the most crucial thing i'd need to figure out what is causing that..
this /64 was given to me by my provider (their router is on 2a03:8c00:1a:6::1), i can ping this fine and it looks like SLAAC is used to configure the 2a03:8c00:8::/64 address range that ended up on my eth0. i statically assigned those IPv6 settings in my /etc/sysconfig/network-scripts/ifcfg-eth0 file (along with IPv4 static address).
Did your provider give you a static IPv6 address? If not, SLAAC will suffice, but... don't put that address in your ifcfg-eth0 file. The address is auto-configured by SLAAC and can change each time. All (at least dynamic) IPv6 addresses have a lifetime.
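To see why a SLAAC address tracks the NIC rather than anything static, here is a small sketch (my own illustration, not from this thread) of the classic EUI-64 derivation: flip the universal/local bit of the MAC's first octet and splice `ff:fe` into the middle. Applied to the router MAC seen in the `ip mon` output above, it reproduces the `fe80::21f:caff:feb2:ea40` link-local suffix:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the EUI-64 interface identifier SLAAC builds from a MAC.

    Note: the 'stable-privacy' addr-gen-mode (as in the ifcfg below) uses a
    hashed identifier instead, precisely to avoid embedding the MAC.
    """
    b = bytes(int(x, 16) for x in mac.split(":"))
    # flip the universal/local bit, then splice ff:fe into the middle
    iid = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]
    return ":".join(f"{(iid[i] << 8) | iid[i + 1]:x}" for i in range(0, 8, 2))

# Router MAC from the ip mon output earlier in the thread:
print(eui64_interface_id("00:1f:ca:b2:ea:40"))  # -> 21f:caff:feb2:ea40
```

The same derivation also matches the container's `fe80::c7d:9eff:fecd:1df1` address shown later from MAC `0e:7d:9e:cd:1d:f1`.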
Did the provider delegate another prefix to you for use on e.g. cni-podman0? Otherwise they will have no idea how to route to you.
Check your math. The provider-supplied /64 bits are 2a03:8c00:1a:6; you said that for eth0 you are using ("looks like SLAAC is used to configure") 2a03:8c00:8::/64. Those are different 64-bit prefixes. SLAAC should only come up with the lower 64 bits from the 2a03:8c00:1a:6 prefix for the eth0 interface. The other prefix is either delegated to you (and your provider knows how to route to it) or belongs to someone else; so your provider will not route to you any packets destined to that prefix.
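The mismatch is easy to verify mechanically; a quick sketch (my own check, using the prefixes quoted in this thread) with Python's stdlib `ipaddress` module:

```python
import ipaddress

# Prefix the provider's router sits on, per the thread:
router_net = ipaddress.ip_network("2a03:8c00:1a:6::/64")
# Prefix reported as SLAAC-configured on eth0:
slaac_net = ipaddress.ip_network("2a03:8c00:8::/64")

print(router_net == slaac_net)                              # False: different /64s
print(ipaddress.ip_address("2a03:8c00:8::1") in router_net) # False

# The RA captured later in the thread actually advertises this prefix,
# which shares a /48 with the router's prefix:
ra_net = ipaddress.ip_network("2a03:8c00:1a:8::/64")
print(ra_net.supernet(new_prefix=48) == router_net.supernet(new_prefix=48))  # True
```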
However with IPv6, i can't even ping my eth0's IPv6 (slaac assigned) address.. and i have that forwarding sysctl for ipv6 set to 1..
We might not be there yet, but you'll need to let ip6tables FORWARD packets between interfaces eventually. Setting the default policy to ACCEPT, for now, should ensure that the firewall isn't dropping anything.
$ sudo ip6tables -nvL FORWARD
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
as far as the default route being dropped, that's the most crucial thing i'd need to figure out what is causing that..
Once we know each interface behaves, we'll worry about routes. From the above it looks like the interfaces don't have prefixes assigned properly.
per @mheon:
If CNI doesn't claim to configure the host for external v6 reachability of containers, then Podman presently doesn't make any such claims.
this statement seem at odds with:
per @mccv1r0:
An IPv6 client (`nc -6`) from a quarantine location has native IPv6 support. It connects via the public Internet to a podman container on the "podman" bridge running on a linode VM, which provides a /48 for all my podman containers. I use this all the time so I know it works in general.
how can i do this too? :-) what does your config files/versions/etc look like?
how can i do this too? :-) what does your config files/versions/etc look like?
podman didn't do any of it. Nor did docker or lxc before that. Long before containers or VMs, even my home Unix box just used eth0 (to ISP) and ethX (to local networks) and, yes, a podman network. This is just basic networking (L3 routing, specifically). There is nothing IPv6-specific; everything can be done with IPv4 as well, if you have routable IPv4 prefixes. Most don't, which is why we have that other plague, NAT.
On some systems I manually configured Linux to route IPv6 packets... on others I run routing daemons. When I added the second NIC, eth1, and plugged it into an L2 switch (which is analogous to `brctl addbr XXX`), the host needed to be configured to route packets.
Did your provider give you a static IPv6 address? If not, SLAAC will suffice, but... don't put that address in your ifcfg-eth0 file.
Ok good point, i cleaned up my ifcfg-eth0 file (removed the address specifics and let SLAAC do its thing).. the only things in my ifcfg-eth0 related to IPv6 now (per this redhat blog post) are:
$ grep IPV6 ifcfg-eth0
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
and ipv6 is verified to work again without hard-coding static IPv6 addresses/ranges.. this hasn't had any effect on my default route being dropped when i start a podman container, however :-(
Did the provider delegate another prefix to you for use on e.g. cni-podman0? Otherwise they will have no idea how to route to you.
no i've only been given one /64 (i'm ~in the process of switching~ waiting on my provider to give me a /48 to have more flexibility, but that isn't relevant to this issue for now)
Check your math. The provider-supplied /64 bits are 2a03:8c00:1a:6; you said that for eth0 you are using
the default router i was told was at 2a03:8c00:1a:6::1, and that my range was as below
("looks like SLAAC is used to configure") 2a03:8c00:8::/64. Those are different 64-bit prefixes. SLAAC should only come up with the lower 64 bits from the 2a03:8c00:1a:6 prefix for the eth0 interface. The other prefix is either delegated to you (and your provider knows how to route to it) or belongs to someone else; so your provider will not route to you any packets destined to that prefix.
I think the router is at 6::1 and i'm supposed to be on 8::/64, but i see what you're saying and you're right: i don't really need to know the router's 6::1 addr if i'm using SLAAC, since that gave me a default route via the link-local address of the router's interface anyway
However with IPv6, i can't even ping my eth0's IPv6 (slaac assigned) address.. and i have that forwarding sysctl for ipv6 set to 1..
We might not be there yet, but you'll need to let ip6tables FORWARD packets between interfaces eventually. Setting the default policy to ACCEPT, for now, should ensure that the firewall isn't dropping anything.
I've now also set ip6tables policy to ACCEPT on both the INPUT and FORWARD chains, and i still can't ping my eth0's GUA from within the container.. :-(
as far as the default route being dropped, that's the most crucial thing i'd need to figure out what is causing that..
Once we know each interface behaves, we'll worry about routes. From the above it looks like the interfaces don't have prefixes assigned properly.
the interface prefix assignments are now correct (hands-off using SLAAC vs. hardcoding static entries.. i have a lot of IPv4 legacy thinking i need to undo, it seems)..
so what's happening to my default route :-(
how can i do this too? :-) what does your config files/versions/etc look like?
podman didn't do any of it. Nor did docker or lxc before that. Long before containers or VMs, even my home Unix box just used eth0 (to ISP) and ethX (to local networks) and, yes, a podman network.
i think i'm being misunderstood.. somehow you are able to accomplish this...
An IPv6 client (`nc -6`) from a quarantine location has native IPv6 support. It connects via the public Internet to a podman container on the "podman" bridge running on a linode VM, which provides a /48 for all my podman containers. I use this all the time so I know it works in general.
without any configuration file changes or custom settings? :-) that's what i'm asking for please
There are topology-specific settings that need to be in place. What works with SLAAC doesn't (necessarily) work with dhcpv6 and/or static, or a combination.
I'm not convinced your current prefixes are right. Regardless, if you used ULA for all your internal traffic, you should be able to reach the IPv6 address of eth0 from inside the container (assuming the firewall permits the traffic).
Assuming that the link to your provider is eth0, try adding these:
sudo sysctl -w net.ipv6.conf.eth0.accept_ra=2
sudo sysctl -w net.ipv6.conf.all.forwarding=1
sudo sysctl -w net.ipv6.conf.eth0.accept_ra_defrtr=1
sudo sysctl -w net.ipv6.conf.eth0.router_solicitations=1
You'll need accept_ra=2 so that the kernel keeps accepting RAs (and keeps the RA-supplied default route) even when forwarding is enabled; with the default accept_ra=1, turning on forwarding makes the kernel stop honoring RAs, and the autoconfigured default route goes away.
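Given the hypothesis that something flips one of these values when the container starts (and that, at accept_ra=1, enabling forwarding makes the kernel stop honoring RAs), it can help to snapshot them before and after `podman run`. A tiny sketch (a hypothetical helper of my own, not an existing tool) that reads the per-interface values from /proc, returning None for anything missing:

```python
from pathlib import Path

RA_KEYS = ("accept_ra", "accept_ra_defrtr", "router_solicitations", "forwarding")

def ra_sysctls(iface: str = "eth0") -> dict:
    """Read RA-related IPv6 sysctls for one interface from /proc/sys."""
    base = Path("/proc/sys/net/ipv6/conf") / iface
    vals = {}
    for key in RA_KEYS:
        path = base / key
        vals[key] = int(path.read_text()) if path.exists() else None
    return vals

# Take one snapshot before `podman run ...` and one after, then diff them:
before = ra_sysctls("eth0")
print({k: v for k, v in before.items() if v is not None})
```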
of those sysctls, `net.ipv6.conf.eth0.accept_ra` was set to 1 and i set it to 2, and `net.ipv6.conf.eth0.router_solicitations` was set to 3 and i set it to 1.. the middle two were already at those values..
ipv6 on host still stops working if i start a podman container (and i can't ping anything beyond the cni-podman0 LL/GUA addr)..
curiously, if i manually re-add the default route, the host ipv6 starts working, so it's definitely that which is causing ipv6 to (what i have been calling) "stop working" on my host.. however, when i manually add back the route that was deleted, then starting the container does not affect ipv6 on the host afterwards.. not sure what to make of that.. is there something wonky with my config in the (reset) "clean slate" that causes the default route to be dropped initially (but not re-dropped after i manually add it back in)?
Regardless, if you used ULA for all your internal traffic, you should be able to reach the IPv6 address of eth0 from inside the container (assuming the firewall permits the traffic).
and i've tried different ranges inside the 87-podman-bridge.conflist, including LL as well as ULA ranges.. any mention of IPv6 inside that file, regardless of the address type used, has caused IPv6 to stop working :-( it's only now that i see re-adding the default route manually afterwards keeps it from being deleted again..
curiously, if i manually re-add the default route,
If your provider is sending RAs, the kernel should detect the RA and add the default route for you.
$ ip -6 route show
::1 dev lo proto kernel metric 256 pref medium
[deleted]
fe80::/64 dev xxx proto kernel metric 101 pref medium
fe80::/64 dev eth0 proto kernel metric 102 pref medium
fe80::/64 dev vnet0 proto kernel metric 256 pref medium
default via fe80::1a8b:9dff:fed4:822 dev eth0 proto ra metric 1024 expires 1797sec hoplimit 64 pref medium
$ sudo ip -6 route delete default
$ ip -6 route show
::1 dev lo proto kernel metric 256 pref medium
[deleted]
fe80::/64 dev xxx proto kernel metric 101 pref medium
fe80::/64 dev eth0 proto kernel metric 102 pref medium
fe80::/64 dev vnet0 proto kernel metric 256 pref medium
[mcc@wan2 ~]$ ip -6 route show
::1 dev lo proto kernel metric 256 pref medium
[deleted]
fe80::/64 dev xxx proto kernel metric 101 pref medium
fe80::/64 dev eth0 proto kernel metric 102 pref medium
fe80::/64 dev vnet0 proto kernel metric 256 pref medium
default via fe80::1a8b:9dff:fed4:822 dev eth0 proto ra metric 1024 expires 1798sec hoplimit 64 pref medium
[mcc@wan2 ~]$
If I didn't manually delete it, the RA received by eth0 would have refreshed the timeout of the existing entry.
AFAICT there are things not quite right about your setup on eth0, or the network it attaches to. Are you receiving Router Advertisements? Make sure your firewall isn't blocking them.
This has nothing to do with podman, docker, or libvirt; none of them should be touching the host interfaces or the routes. CNI enters the network namespace of the container and runs commands in that namespace. The IPv6 config in 87-podman-bridge.conflist should only be setting the default route inside each container started, and it should set the default route's next hop to the IPv6 address of (in your case) cni-podman0.
i've confirmed my firewall is wide-open, and i'm receiving RA's:
$ sudo tcpdump -vXni eth0 icmp6 and ip6[40] == 134
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
08:35:51.960578 IP6 (class 0xe0, hlim 255, next-header ICMPv6 (58) payload length: 64) fe80::21f:caff:feb2:ea40 > ff02::1: [icmp6 sum ok] ICMP6, router advertisement, length 64
hop limit 64, Flags [none], pref medium, router lifetime 1800s, reachable time 0ms, retrans time 0ms
source link-address option (1), length 8 (1): 00:1f:ca:b2:ea:40
mtu option (5), length 8 (1): 1500
prefix info option (3), length 32 (4): 2a03:8c00:1a:8::/64, Flags [onlink, auto], valid time 2592000s, pref. time 604800s
0x0000: 6e00 0000 0040 3aff fe80 0000 0000 0000 n....@:.........
0x0010: 021f caff feb2 ea40 ff02 0000 0000 0000 .......@........
0x0020: 0000 0000 0000 0001 8600 fc59 4000 0708 ...........Y@...
0x0030: 0000 0000 0000 0000 0101 001f cab2 ea40 ...............@
0x0040: 0501 0000 0000 05dc 0304 40c0 0027 8d00 ..........@..'..
0x0050: 0009 3a80 0000 0000 2a03 8c00 001a 0008 ..:.....*.......
0x0060: 0000 0000 0000 0000 ........
^C
1 packet captured
8 packets received by filter
0 packets dropped by kernel
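For what it's worth, the hex dump above can be decoded without guessing. A sketch of my own, following RFC 4861's option layout, that walks the RA options and extracts any Prefix Information option; fed the ICMPv6 bytes transcribed from the capture, it yields the advertised prefix 2a03:8c00:1a:8::/64 with a 2592000s valid lifetime, matching tcpdump's summary line:

```python
import ipaddress
import struct

def ra_prefixes(icmp6: bytes):
    """Yield (prefix, valid_lifetime_s) from the Prefix Information
    options (type 3) of an ICMPv6 Router Advertisement (RFC 4861)."""
    assert icmp6[0] == 134, "not a Router Advertisement"
    off = 16  # fixed RA header: type, code, cksum, hoplim, flags, lifetime, reachable, retrans
    while off + 2 <= len(icmp6):
        opt_type, opt_len = icmp6[off], icmp6[off + 1]
        if opt_type == 3:  # Prefix Information option (32 bytes)
            plen = icmp6[off + 2]
            valid = struct.unpack_from(">I", icmp6, off + 4)[0]
            prefix = ipaddress.ip_address(icmp6[off + 16:off + 32])
            yield f"{prefix}/{plen}", valid
        off += opt_len * 8  # option length is in units of 8 bytes

# ICMPv6 payload transcribed from the tcpdump hex above (IPv6 header stripped):
ra = bytes.fromhex(
    "8600fc5940000708" "0000000000000000"  # RA header: lifetime 1800s
    "0101001fcab2ea40"                     # source link-layer address option
    "05010000000005dc"                     # MTU option (1500)
    "030440c000278d00" "00093a8000000000"  # prefix info: /64, onlink|auto
    "2a038c00001a0008" "0000000000000000"  # prefix 2a03:8c00:1a:8::
)
print(list(ra_prefixes(ra)))  # [('2a03:8c00:1a:8::/64', 2592000)]
```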
i've set up a wrapper script for the host-local plugin and ran it through strace; nothing unusual (at least nothing that would lead me to see why we're dropping the default ipv6 route).. it's strange because, for some reason, with all the redirections i'm doing to capture stdout and stderr in the wrapper script, the host-local process does not end up exiting (even tho i see several exits in the strace output).. pings continue to work..
There are two default routes. At this point I don't know which one we're talking about.
`ip mon` earlier (and iirc our discussion re RA) was about the default route in the host network namespace, which should egress out eth0 on the host to your provider. Since you're using SLAAC, this route should be set by the kernel when an RA is received. I've shown above that even if something deletes it, the kernel will add it back when the next RA is received.
The output re host-local only pertains to the podman container's network namespace. The two default routes are different from each other. Whatever the host-route .host-local-out.19837 is, it's just json.
Does 87-podman-bridge.conflist set `"isGateway": true`? You don't show the entire file so I can't check myself.
Your conflist doesn't set `"gw":"xxx:xxx:xxx:xxx::x"`, just `"dst":"::/0"`. If `isGateway` is set you don't have to; it will be done for you (at least as of):
$ podman version
Version: 1.8.0
What does `ip -6 route show` say inside the container? Get the output from `ip addr show` inside the container too.
There are two default routes. At this point I don't know which one we're talking about. `ip mon` earlier (and iirc our discussion re RA) was about the default route in the host network namespace, which should egress out eth0 on the host to your provider.
Apologies; you're spot on, though, as usually when i refer to the default route it's almost always from the host perspective: the IPv6 one that's getting deleted on the host when i launch a podman container.. The default route within the container i've never actually checked until now (done below), because the host connectivity is the main focus, as it has higher outage potential.
I've shown above that even if something deletes this, the kernel will add it back when the next RA is received.
as you can see above, after my default route somehow gets deleted, it is not added back automatically, even though my firewall is wide open AND i've confirmed via tcpdump that RAs are coming in, and i've ensured the sysctls are set as mentioned..
The output re host-local only pertains to the podman container's network namespace. The two default routes are different from each other. Whatever the host-route .host-local-out.19837 is, it's just json.
i see, this is all overall a big educational opportunity to learn more about the way CNI functions and its components..
Does 87-podman-bridge.conflist set `"isGateway": true`?
it does have this.. here it is in full
{
  "cniVersion": "0.4.0",
  "name": "podman",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni-podman0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "routes": [
          { "dst": "0.0.0.0/0" },
          { "dst": "::/0" }
        ],
        "ranges": [
          [
            {
              "subnet": "10.88.0.0/16",
              "gateway": "10.88.0.1"
            }
          ],
          [
            {
              "subnet": "fd03:8c00:1a:8::/64",
              "rangeStart": "fd03:8c00:1a:8::100",
              "rangeEnd": "fd03:8c00:1a:8::200"
            }
          ]
        ]
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    },
    {
      "type": "firewall"
    },
    {
      "type": "tuning"
    }
  ]
}
(also, here i've tried to set the network range to a ULA, simply with `s/2a03/fd03/g`)
You don't show the entire file so I can't check myself.
sorry i haven't included it in full earlier (i included a diff from what is distributed with the podman package), but that is also why i was hoping to see what the config looks like for someone who has this working, in case there was something glaringly wrong with mine.. i haven't found any official example of what this file should look like other than the host-local plugin github page, but that one does not talk about how the address space being used relates to how the host is set up (in case it makes a difference?)
and as such, since i'm not super familiar with the intricacies of all that CNI does, what i've been forced to do without a canonical reference is essentially throw a bunch of poo at the wall and see what sticks, trying different things and testing different theories.. I would also have just blamed the host and the network, except, as mentioned in my last update, i've reproduced the same behaviour on a new blank VM with the same versions, same 87-podman-bridge.conflist, everything the same EXCEPT the network (at home this time instead of my colo box)..
Your conflist doesn't set `"gw":"xxx:xxx:xxx:xxx::x"`, just `"dst":"::/0"`. If `isGateway` is set you don't have to; it will be done for you (at least as of):
thanks for that clarification
$ podman version
Version: 1.8.0
$ podman version
Version: 1.9.0
What does `ip -6 route show` say inside the container?
/ # ip -6 route show
fd03:8c00:1a:8::/64 dev eth0 metric 256
fe80::/64 dev eth0 metric 256
default via fd03:8c00:1a:8::1 dev eth0 metric 1024
unreachable default dev lo metric -1 error -101
ff00::/8 dev eth0 metric 256
unreachable default dev lo metric -1 error -101
Get the output from `ip addr show` inside the container too.
/ # ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if90: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 0e:7d:9e:cd:1d:f1 brd ff:ff:ff:ff:ff:ff
inet 10.88.0.88/16 brd 10.88.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fd03:8c00:1a:8::101/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::c7d:9eff:fecd:1df1/64 scope link
valid_lft forever preferred_lft forever
as you can imagine, i'm pretty much at the end of my rope and out of ideas..
Hi, I just wanted to quickly chime in and say a few things. First, IPv6 into my containers works without a problem! Please see my current cni config;
{
"cniVersion": "0.4.0",
"name": "podman",
"plugins": [
{
"type": "bridge",
"bridge": "cni-podman0",
"isGateway": true,
"ipMasq": false,
"ipam": {
"type": "host-local",
"routes": [{ "dst": "0.0.0.0/0" }, {"dst": "2000::/3" }],
"ranges": [
[
{
"subnet": "10.88.0.0/16",
"gateway": "10.88.0.1"
}
],
[
{
"subnet": "2601:601:9f80:3c4f::/64",
"gateway": "2601:601:9f80:3c4f::1"
}
]
]
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
},
{
"type": "firewall"
},
{
"type": "tuning"
}
]
}
Please be aware you have to configure ip6tables (or whatever your OS firewall is) for forwarding.
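One detail worth noting about this config (my observation, not @dsbaha's): its v6 route is `"2000::/3"`, which covers only global unicast space. A quick check with Python's `ipaddress` module shows that a ULA range, like the fd03: one tried earlier in the thread, would not match that route, so with this conflist a ULA-addressed container would get no v6 default route at all:

```python
import ipaddress

gua = ipaddress.ip_network("2000::/3")  # IPv6 global unicast space

# @dsbaha's container address falls inside it:
print(ipaddress.ip_address("2601:601:9f80:3c4f::2") in gua)  # True
# ...but a ULA address (like the fd03: range tried earlier) does not:
print(ipaddress.ip_address("fd03:8c00:1a:8::101") in gua)    # False
```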
The second issue is that I can't statically set an IPv6 address with the --ip option. If I remove the following check and re-compile, statically setting the IPv6 address works great! That check is:
https://github.com/containers/libpod/blob/v1.9/pkg/spec/namespaces.go#L85-L87
else if ip.To4() == nil {
return nil, errors.Wrapf(define.ErrInvalidArg, "%s is not an IPv4 address", c.IPAddress)
}
Then I can run the following command and get this output;
./podman run -ti --rm --ip 2601:601:9f80:3c4f::2 alpine /bin/sh
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:93:42:B6:89:30
inet addr:10.88.0.12 Bcast:10.88.255.255 Mask:255.255.0.0
inet6 addr: 2601:601:9f80:3c4f::2/64 Scope:Global
inet6 addr: fe80::93:42ff:feb6:8930/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:816 (816.0 B) TX bytes:814 (814.0 B)
I'm currently using Fedora CoreOS 31
$ rpm -qa podman
podman-1.9.2-1.fc31.x86_64
$ rpm -qa containernetworking-plugins
containernetworking-plugins-0.8.6-1.fc31.x86_64
There should be a dedicated flag for static IPv6 addresses (`--ip6`) but we haven't wired it in yet. It's actually a simple change; I'll see about getting it landed in master tomorrow.
Tried @dsbaha's cni config on my test centos 7 host (with my ipv6 addresses substituted in), and it's still dropping the default route on the host as soon as the container is started :-(
hasn't ANYONE got a centos 7 host, with ipv6 connectivity that could try to get this working??
A friendly reminder that this issue had no activity for 30 days.
@mheon What is the scoop on this one?
configure ip6tables
Are there any specific settings that need to be configured in ip6tables? Because when i tried to remove a container with an ipv6 address, it gives the below error:
ERRO[0000] Error deleting network: running [/sbin/ip6tables -t nat -D POSTROUTING -s fd00::1:8:a/112 -j CNI-355124625f5423fd129aa828 -m comment --comment name: "demo" id: "23f508256866835fcedffb2fdd0f1436f3a47e5e8c99115004a8034b68fa62df" --wait]: exit status 1: iptables: Bad rule (does a matching rule exist in that chain?).
ERRO[0000] Error while removing pod from CNI network "demo": running [/sbin/ip6tables -t nat -D POSTROUTING -s fd00::1:8:a/112 -j CNI-355124625f5423fd129aa828 -m comment --comment name: "demo" id: "23f508256866835fcedffb2fdd0f1436f3a47e5e8c99115004a8034b68fa62df" --wait]: exit status 1: iptables: Bad rule (does a matching rule exist in that chain?).
ERRO[0000] unable to cleanup network for container 23f508256866835fcedffb2fdd0f1436f3a47e5e8c99115004a8034b68fa62df: "error tearing down CNI namespace configuration for container 23f508256866835fcedffb2fdd0f1436f3a47e5e8c99115004a8034b68fa62df: running [/sbin/ip6tables -t nat -D POSTROUTING -s fd00::1:8:a/112 -j CNI-355124625f5423fd129aa828 -m comment --comment name: \"demo\" id: \"23f508256866835fcedffb2fdd0f1436f3a47e5e8c99115004a8034b68fa62df\" --wait]: exit status 1: iptables: Bad rule (does a matching rule exist in that chain?).\n"
23f508256866835fcedffb2fdd0f1436f3a47e5e8c99115004a8034b68fa62df.
Please suggest
Looks like @apoos-maximus is going to work on part of this
@mheon any progress on our IPV6 support?
`--ipv6` has not landed yet
A friendly reminder that this issue had no activity for 30 days.
A friendly reminder that this issue had no activity for 30 days.
A friendly reminder that this issue had no activity for 30 days.
To humans and systems testing podman in IPv6-only networks: on a current fedora33 box, the solution from @dsbaha works for IPv6 and portMappings with root containers, and the host IPv6 network does not get black-holed.
@mheon maybe the --ipv6 flag has been forgotten? I will learn the Go language in order to contribute to this project.
But ipv6 for rootless is another problem; IPv6 is not working in rootless, maybe due to NAT issues or the tun/tap adapter.
@ricardo-rod I'm working on `--ipv6` now, actually. Unfortunately, it's turning out to be a much larger task than I was hoping for: to support proper dual-stack setups, we need to rewrite some parts of the library we use for calling the CNI network stack so we can support static v4 and v6 addresses simultaneously.
A friendly reminder that this issue had no activity for 30 days.
@Luap99 @mheon Does this require the network redesign?
Yes
A friendly reminder that this issue had no activity for 30 days.
A friendly reminder that this issue had no activity for 30 days.
@rhatdan Could you remove the stale label?
Done.
Hi, I had a lot of headaches making IPv6 happen with Podman. I started with rootless and gave up, thinking rootful would be quick and simple. But I was wrong: I ran into this same problem, where the IPv6 connectivity of the host machine just breaks as the container starts. I can publish the port on IPv6 alright, and connect to the published port from the host alright, but without network access, clients just can't connect. Are there any workarounds right now? Or do I have to ditch it and search for other solutions? Thanks!
A friendly reminder that this issue had no activity for 30 days.
@rhatdan Could you remove the stale label?
A friendly reminder that this issue had no activity for 30 days.
@rhatdan Could you remove the stale label?
@MartinX3 you just commenting on it seems to have removed the stale label...:^)
@rhatdan thank you for enhancing the bot :D
I would love to take credit for that, but someone else did it, we just take advantage of it.
How is the state of things here? I'm using CentOS Stream 9 with rootless containers. With IPv6 enabled on the host, containers can't use it. So not "out of the box" yet 😄
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Since I've gotten IPv6 connectivity recently set up to my CentOS 7 host, i wanted to start taking advantage and see how well Podman supports IPv6. This is as root (so not rootless).
After adding the relevant IPv6 section to /etc/cni/net.d/87-podman-bridge.conflist per the docs pages, starting a container makes a ping running on the host start failing with `ping: sendmsg: Network is unreachable`.
Steps to reproduce the issue:
1. Add a second array to the original /etc/cni/net.d/87-podman-bridge.conflist as .plugins[0].ipam.ranges[1] (diff included below).
2. Start a ping on the host: ping6 ipv6.google.com; responses start coming in as expected.
3. Start a container as root: sudo podman run -it --rm docker.io/library/alpine:3.11
4. See the ping start failing with: ping: sendmsg: Network is unreachable

Additionally, the container cannot reach the internet either, and the only way to restore host IPv6 connectivity is to systemctl restart network.
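As a quick sanity check (to rule out a malformed address pool before blaming CNI), the IPv6 range added to the conflist can be verified against the /64 with nothing but Python's stdlib `ipaddress` module; the values below are taken from the diff in this report:

```python
import ipaddress

# Values from the 87-podman-bridge.conflist diff in this report.
subnet = ipaddress.ip_network("2a03:8c00:1a:8::/64")
start = ipaddress.ip_address("2a03:8c00:1a:8::100")
end = ipaddress.ip_address("2a03:8c00:1a:8::200")

# Both ends of the host-local rangeStart/rangeEnd must fall inside the subnet.
assert start in subnet and end in subnet

# Size of the resulting allocation pool.
print(int(end) - int(start) + 1)
```

Both addresses sit inside the /64 and the pool holds 257 addresses, so the range itself looks well-formed.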
Describe the results you received:
a.) host IPv6 networking is severely affected (in effect completely broken); this is really bad, as it could cause an outage
b.) the container has no IPv6 connectivity either
Describe the results you expected:
The container should simply be able to reach the IPv6 network, and host IPv6 networking should not be affected at all!
Additional information you deem important (e.g. issue happens only occasionally):
issue always happens
A diff of the changes I made to 87-podman-bridge.conflist (adding my IPv6 GUA):
```
--- /var/tmp/orig-87-podman-bridge.conflist	2020-05-07 11:50:15.695848051 +0000
+++ 87-podman-bridge.conflist	2020-05-07 11:53:37.681314127 +0000
@@ -9,13 +9,20 @@
       "ipMasq": true,
       "ipam": {
         "type": "host-local",
-        "routes": [{ "dst": "0.0.0.0/0" }],
+        "routes": [{ "dst": "0.0.0.0/0", "dst": "::/0" }],
         "ranges": [
           [
             {
               "subnet": "10.88.0.0/16",
               "gateway": "10.88.0.1"
             }
+          ],
+          [
+            {
+              "subnet": "2a03:8c00:1a:8::/64",
+              "rangeStart": "2a03:8c00:1a:8::100",
+              "rangeEnd": "2a03:8c00:1a:8::200"
+            }
           ]
         ]
       }
```

**note**: this happens even if I _don't_ update the routes section (though ultimately I'd like my container reachable on the internet).

output of running podman with --log-level=debug
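One detail worth flagging in the diff above (an observation, not necessarily the root cause of the route deletion, since the reporter notes the problem occurs even without the routes change): the modified line puts both destinations into a single JSON object, `{ "dst": "0.0.0.0/0", "dst": "::/0" }`. JSON parsers typically keep only the last duplicate key, so the IPv4 default-route entry is silently dropped; the CNI host-local IPAM plugin expects one route object per destination, i.e. `[{ "dst": "0.0.0.0/0" }, { "dst": "::/0" }]`. The duplicate-key behavior is easy to demonstrate:

```python
import json

# Duplicate "dst" keys in one object: the later value wins in Python's
# json module (Go's encoding/json, which CNI uses, behaves the same way).
as_written = json.loads('[{ "dst": "0.0.0.0/0", "dst": "::/0" }]')
print(as_written)  # the IPv4 route object is silently gone

# Separate route objects keep both address families.
separate = json.loads('[{ "dst": "0.0.0.0/0" }, { "dst": "::/0" }]')
print(separate)
```

With the duplicated key, the config that actually reaches the plugin contains only the `::/0` route.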
``` $ sudo podman run --log-level=debug -it --rm docker.io/library/alpine:3.11 DEBU[0000] Found deprecated file /usr/share/containers/libpod.conf, please remove. Use /etc/containers/containers.conf to override defaults. DEBU[0000] Reading configuration file "/usr/share/containers/libpod.conf" DEBU[0000] Reading configuration file "/etc/containers/containers.conf" DEBU[0000] Merged system config "/etc/containers/containers.conf": &{{[] [] container-default [] host [CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_S ETUID CAP_SYS_CHROOT] [] [nproc=32768:32768] [] [] [] false [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] false false false private k8s-file -1 bridge false 2048 private /usr/share/containers/seccomp.json 65536k private private 65536} { false systemd [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/curre nt-system/sw/bin/conmon] ctrl-p,ctrl-q true /var/run/libpod/events/events.log journald [/usr/share/containers/oci/hooks.d] docker:// /pause k8s.gcr.io/pause:3.2 /usr/libexec/podman/catatonit shm false 2048 runc map[crun:[/usr/bin/crun /usr/sbin/crun /usr/loc al/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime] kata-fc:[/usr/bin/kata-fc] kata-qemu:[/usr/bin/kata-qemu] kata-runtime:[/usr/bin/kata-runtime] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] missing [] [crun runc] [crun] { false false false true true true} false 3 
/var/lib/containers/storage/libpod 10 /var/run/libpod /var/lib/containers/storage/volumes} {[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] podman /etc/cni/net.d/}} DEBU[0000] Using conmon: "/usr/libexec/podman/conmon" DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db DEBU[0000] Using graph driver overlay DEBU[0000] Using graph root /var/lib/containers/storage DEBU[0000] Using run root /var/run/containers/storage DEBU[0000] Using static dir /var/lib/containers/storage/libpod DEBU[0000] Using tmp dir /var/run/libpod DEBU[0000] Using volume path /var/lib/containers/storage/volumes DEBU[0000] Set libpod namespace to "" DEBU[0000] [graphdriver] trying provided driver "overlay" DEBU[0000] cached value indicated that overlay is supported DEBU[0000] cached value indicated that metacopy is not being used DEBU[0000] cached value indicated that native-diff is usable DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false DEBU[0000] Initializing event backend journald WARN[0000] Error initializing configured OCI runtime kata-runtime: no valid executable found for OCI runtime kata-runtime: invalid argument WARN[0000] Error initializing configured OCI runtime kata-qemu: no valid executable found for OCI runtime kata-qemu: invalid argument WARN[0000] Error initializing configured OCI runtime kata-fc: no valid executable found for OCI runtime kata-fc: invalid argument DEBU[0000] using runtime "/usr/bin/runc" WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist WARN[0000] Default CNI network name podman is unchangeable DEBU[0000] parsed reference into 
"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]docker.io/library/alpine:3.11" DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]@f70734b6a266dcb5f44c383274821207885b549b75c8e119404917a61335981a" DEBU[0000] exporting opaque data as blob "sha256:f70734b6a266dcb5f44c383274821207885b549b75c8e119404917a61335981a" DEBU[0000] Using bridge netmode DEBU[0000] No hostname set; container's hostname will default to runtime default DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json" DEBU[0000] created OCI spec and options for new container DEBU[0000] Allocated lock 2 for container f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]@f70734b6a266dcb5f44c383274821207885b549b75c8e119404917a61335981a" DEBU[0000] exporting opaque data as blob "sha256:f70734b6a266dcb5f44c383274821207885b549b75c8e119404917a61335981a" DEBU[0000] created container "f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217" DEBU[0000] container "f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217" has work directory "/var/lib/containers/storage/overlay-containers/f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217/userdata" DEBU[0000] container "f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217" has run directory "/var/run/containers/storage/overlay-containers/f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217/userdata" DEBU[0000] New container created "f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217" DEBU[0000] container "f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217" has CgroupParent "machine.slice/libpod-f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217.scope" DEBU[0000] Handling terminal attach DEBU[0000] overlay: 
mount_data=nodev,lowerdir=/var/lib/containers/storage/overlay/l/NWJTW4RWZL2KWL2W3DRBEJAYS7,upperdir=/var/lib/containers/storage/overlay/8c5a0934b63847336aa0cdc69fe37aad2eb8b373bac7375ca20f766e6352e0d2/diff,workdir=/var/lib/containers/storage/overlay/8c5a0934b63847336aa0cdc69fe37aad2eb8b373bac7375ca20f766e6352e0d2/work
DEBU[0000] mounted container "f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217" at "/var/lib/containers/storage/overlay/8c5a0934b63847336aa0cdc69fe37aad2eb8b373bac7375ca20f766e6352e0d2/merged"
DEBU[0000] Created root filesystem for container f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 at /var/lib/containers/storage/overlay/8c5a0934b63847336aa0cdc69fe37aad2eb8b373bac7375ca20f766e6352e0d2/merged
DEBU[0000] Made network namespace at /var/run/netns/cni-24d1f8fc-6e09-002a-59d9-c83225881d60 for container f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217
INFO[0000] About to add CNI network lo (type=loopback)
INFO[0000] Got pod network &{Name:happy_sutherland Namespace:happy_sutherland ID:f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 NetNS:/var/run/netns/cni-24d1f8fc-6e09-002a-59d9-c83225881d60 Networks:[] RuntimeConfig:map[podman:{IP: MAC: PortMappings:[] Bandwidth:
```

here are some log entries from /var/log/messages when starting the container
```
May 7 12:05:02 shell podman: 2020-05-07 12:05:02.551688071 +0000 UTC m=+0.201037333 container create f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 (image=docker.io/library/alpine:3.11, name=happy_sutherland)
May 7 12:05:02 shell kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
May 7 12:05:02 shell kernel: IPv6: ADDRCONF(NETDEV_UP): vethf40dd2e6: link is not ready
May 7 12:05:02 shell kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethf40dd2e6: link becomes ready
May 7 12:05:02 shell kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 7 12:05:02 shell kernel: cni-podman0: port 1(vethf40dd2e6) entered blocking state
May 7 12:05:02 shell kernel: cni-podman0: port 1(vethf40dd2e6) entered disabled state
May 7 12:05:02 shell kernel: device vethf40dd2e6 entered promiscuous mode
May 7 12:05:02 shell kernel: cni-podman0: port 1(vethf40dd2e6) entered blocking state
May 7 12:05:02 shell kernel: cni-podman0: port 1(vethf40dd2e6) entered forwarding state
May 7 12:05:04 shell systemd: Started libpod-conmon-f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217.scope.
May 7 12:05:04 shell conmon: conmon f9cbdbede1eccf5eb409
```

additionally, this is the output of running `ip monitor` that shows all network related changes
```
fe80::21f:caff:feb2:ea40 dev eth0 lladdr 00:1f:ca:b2:ea:40 router REACHABLE
lladdr ba:04:ac:9b:88:1d PERMANENT
43: cni-podman0:
```

for the heck of it, the above three interlaced along with a flood ping (interval-time=10ms) at my gateway, to indicate exactly _when_ the ipv6 functionality on the host breaks
``` $ sudo tail -f /var/log/messages & sudo ip monitor & sudo ping6 -i 0.01 2a03:8c00:1a:6::1 & sleep 0.5 && sudo podman run --log-level=debug -d docker.io/library/alpine:3.11 && sudo pkill -9 ping6 [1] 20237 [2] 20238 [3] 20239 PING 2a03:8c00:1a:6::1(2a03:8c00:1a:6::1) 56 data bytes 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=1 ttl=64 time=0.675 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=2 ttl=64 time=0.359 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=3 ttl=64 time=0.448 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=4 ttl=64 time=0.517 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=5 ttl=64 time=0.877 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=6 ttl=64 time=0.434 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=7 ttl=64 time=0.465 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=8 ttl=64 time=0.515 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=9 ttl=64 time=0.640 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=10 ttl=64 time=0.411 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=11 ttl=64 time=2.59 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=12 ttl=64 time=0.416 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=13 ttl=64 time=0.405 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=14 ttl=64 time=0.966 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=15 ttl=64 time=0.540 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=16 ttl=64 time=0.607 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=17 ttl=64 time=0.532 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=18 ttl=64 time=0.544 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=19 ttl=64 time=0.510 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=20 ttl=64 time=0.538 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=21 ttl=64 time=0.576 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=22 ttl=64 time=0.560 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=23 ttl=64 time=0.547 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=24 ttl=64 time=0.546 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=25 ttl=64 time=0.447 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=26 ttl=64 time=0.493 ms 64 bytes 
from 2a03:8c00:1a:6::1: icmp_seq=27 ttl=64 time=0.553 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=28 ttl=64 time=0.520 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=29 ttl=64 time=0.647 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=30 ttl=64 time=0.681 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=31 ttl=64 time=0.650 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=32 ttl=64 time=0.620 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=33 ttl=64 time=0.641 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=34 ttl=64 time=0.582 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=35 ttl=64 time=0.498 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=36 ttl=64 time=0.634 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=37 ttl=64 time=0.630 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=38 ttl=64 time=0.651 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=39 ttl=64 time=0.966 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=40 ttl=64 time=0.551 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=41 ttl=64 time=0.438 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=42 ttl=64 time=0.423 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=43 ttl=64 time=0.498 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=44 ttl=64 time=0.407 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=45 ttl=64 time=0.577 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=46 ttl=64 time=0.517 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=47 ttl=64 time=0.494 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=48 ttl=64 time=0.489 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=49 ttl=64 time=0.432 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=50 ttl=64 time=0.377 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=51 ttl=64 time=0.459 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=52 ttl=64 time=0.599 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=53 ttl=64 time=0.842 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=54 ttl=64 time=0.512 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=55 ttl=64 time=0.557 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=56 ttl=64 time=0.523 ms 64 bytes from 
2a03:8c00:1a:6::1: icmp_seq=57 ttl=64 time=1.94 ms DEBU[0000] Found deprecated file /usr/share/containers/libpod.conf, please remove. Use /etc/containers/containers.conf to override defaults. DEBU[0000] Reading configuration file "/usr/share/containers/libpod.conf" DEBU[0000] Reading configuration file "/etc/containers/containers.conf" DEBU[0000] Merged system config "/etc/containers/containers.conf": &{{[] [] container-default [] host [CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] [] [nproc=32768:32768] [] [] [] false [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] false false false private k8s-file -1 bridge false 2048 private /usr/share/containers/seccomp.json 65536k private private 65536} {false systemd [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] ctrl-p,ctrl-q true /var/run/libpod/events/events.log journald [/usr/share/containers/oci/hooks.d] docker:// /pause k8s.gcr.io/pause:3.2 /usr/libexec/podman/catatonit shm false 2048 runc map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime] kata-fc:[/usr/bin/kata-fc] kata-qemu:[/usr/bin/kata-qemu] kata-runtime:[/usr/bin/kata-runtime] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] missing [] [crun runc] [crun] {false false false true true true} false 3 /var/lib/containers/storage/libpod 10 /var/run/libpod 
/var/lib/containers/storage/volumes} {[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] podman /etc/cni/net.d/}} 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=58 ttl=64 time=0.637 ms DEBU[0000] Using conmon: "/usr/libexec/podman/conmon" DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db DEBU[0000] Using graph driver overlay DEBU[0000] Using graph root /var/lib/containers/storage DEBU[0000] Using run root /var/run/containers/storage DEBU[0000] Using static dir /var/lib/containers/storage/libpod DEBU[0000] Using tmp dir /var/run/libpod DEBU[0000] Using volume path /var/lib/containers/storage/volumes DEBU[0000] Set libpod namespace to "" DEBU[0000] [graphdriver] trying provided driver "overlay" DEBU[0000] cached value indicated that overlay is supported DEBU[0000] cached value indicated that metacopy is not being used DEBU[0000] cached value indicated that native-diff is usable DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false DEBU[0000] Initializing event backend journald WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument WARN[0000] Error initializing configured OCI runtime kata-runtime: no valid executable found for OCI runtime kata-runtime: invalid argument WARN[0000] Error initializing configured OCI runtime kata-qemu: no valid executable found for OCI runtime kata-qemu: invalid argument WARN[0000] Error initializing configured OCI runtime kata-fc: no valid executable found for OCI runtime kata-fc: invalid argument DEBU[0000] using runtime "/usr/bin/runc" 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=59 ttl=64 time=0.472 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=60 ttl=64 time=0.432 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=61 ttl=64 time=0.885 ms 64 bytes from 
2a03:8c00:1a:6::1: icmp_seq=62 ttl=64 time=0.422 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=63 ttl=64 time=0.345 ms INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist WARN[0000] Default CNI network name podman is unchangeable DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]docker.io/library/alpine:3.11" DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]@f70734b6a266dcb5f44c383274821207885b549b75c8e119404917a61335981a" DEBU[0000] exporting opaque data as blob "sha256:f70734b6a266dcb5f44c383274821207885b549b75c8e119404917a61335981a" 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=64 ttl=64 time=0.527 ms DEBU[0000] Using bridge netmode DEBU[0000] No hostname set; container's hostname will default to runtime default DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json" DEBU[0000] created OCI spec and options for new container 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=65 ttl=64 time=0.495 ms DEBU[0000] Allocated lock 3 for container b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]@f70734b6a266dcb5f44c383274821207885b549b75c8e119404917a61335981a" DEBU[0000] exporting opaque data as blob "sha256:f70734b6a266dcb5f44c383274821207885b549b75c8e119404917a61335981a" DEBU[0000] created container "b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503" 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=66 ttl=64 time=1.75 ms DEBU[0000] container "b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503" has work directory "/var/lib/containers/storage/overlay-containers/b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503/userdata" DEBU[0000] container "b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503" 
has run directory "/var/run/containers/storage/overlay-containers/b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503/userdata" 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=67 ttl=64 time=0.420 ms DEBU[0000] New container created "b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503" DEBU[0000] container "b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503" has CgroupParent "machine.slice/libpod-b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503.scope" May 7 12:18:33 shell podman: 2020-05-07 12:18:33.648266278 +0000 UTC m=+0.184733739 container create b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 (image=docker.io/library/alpine:3.11, name=quirky_sammet) DEBU[0000] overlay: mount_data=nodev,lowerdir=/var/lib/containers/storage/overlay/l/NWJTW4RWZL2KWL2W3DRBEJAYS7,upperdir=/var/lib/containers/storage/overlay/e1c605a7219566bf771a1127d2ce2e24f501f0292f2ff692d097c16b91d0a0ad/diff,workdir=/var/lib/containers/storage/overlay/e1c605a7219566bf771a1127d2ce2e24f501f0292f2ff692d097c16b91d0a0ad/work DEBU[0000] mounted container "b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503" at "/var/lib/containers/storage/overlay/e1c605a7219566bf771a1127d2ce2e24f501f0292f2ff692d097c16b91d0a0ad/merged" DEBU[0000] Created root filesystem for container b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 at /var/lib/containers/storage/overlay/e1c605a7219566bf771a1127d2ce2e24f501f0292f2ff692d097c16b91d0a0ad/merged DEBU[0000] Made network namespace at /var/run/netns/cni-ca167143-3d9a-09da-3cf4-1107a0e37197 for container b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 INFO[0000] About to add CNI network lo (type=loopback) 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=68 ttl=64 time=0.481 ms INFO[0000] Got pod network &{Name:quirky_sammet Namespace:quirky_sammet ID:b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 NetNS:/var/run/netns/cni-ca167143-3d9a-09da-3cf4-1107a0e37197 
Networks:[] RuntimeConfig:map[podman:{IP: MAC: PortMappings:[] Bandwidth:
```

per the above, the part that looks of interest to me is when the actual pinging starts failing:
also output of `podman version`:
```
Version:            1.9.0
RemoteAPI Version:  1
Go Version:         go1.13.6
Git Commit:         d3d78010e8fd8483456db2873b0c30937113dab1-dirty
Built:              Wed Apr 29 22:21:53 2020
OS/Arch:            linux/amd64
```

**note**: I am running podman 1.9 patched with #6025, but that should not make any difference, as the issue being discussed here does not use rootless mode.

also output of `podman info --debug`:
```
debug:
  compiler: gc
  gitCommit: d3d78010e8fd8483456db2873b0c30937113dab1-dirty
  goVersion: go1.13.6
  podmanVersion: 1.9.0
host:
  arch: amd64
  buildahVersion: 1.14.8
  cgroupVersion: v1
  conmon:
    package: podman-1.9.0-1588198879.gited47046c.el7.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.7, commit: d3d78010e8fd8483456db2873b0c30937113dab1-dirty'
  cpus: 8
  distribution:
    distribution: '"centos"'
    version: "7"
  eventLogger: journald
  hostname: shell
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 100
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 3.10.0-1127.el7.centos.plus.x86_64
  memFree: 164622336
  memTotal: 16655831040
  ociRuntime:
    name: runc
    package: containerd.io-1.2.13-3.1.el7.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc10
      commit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
      spec: 1.0.1-dev
  os: linux
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.0.0-6.1.el7.x86_64
    version: |-
      slirp4netns version 1.0.0
      commit: a3be729152a33e692cd28b52f664defbf2e7810a
      libslirp: 4.2.0
  swapFree: 0
  swapTotal: 0
  uptime: 65h 27m 4.83s (Approximately 2.71 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/cynikal/.config/containers/storage.conf
  containerStore:
    number: 7
    paused: 0
    running: 2
    stopped: 5
  graphDriverName: vfs
  graphOptions: {}
  graphRoot: /home/cynikal/.local/share/containers/storage
  graphStatus: {}
  imageStore:
    number: 8
  runRoot: /run/user/1000
  volumePath: /home/cynikal/.local/share/containers/storage/volumes
```

Package info (e.g. output of
rpm -q podman
or apt list podman):

Additional environment details (AWS, VirtualBox, physical, etc.):
This is a libvirtd/KVM guest running CentOS 7 (whose hypervisor is a physical rackmount host)