
Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

adding IPv6 section to 87-podman-bridge.conflist breaks host's ipv6 network access #6114

Closed. aleks-mariusz closed this issue 2 years ago.

aleks-mariusz commented 4 years ago

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Since I recently got IPv6 connectivity set up on my CentOS 7 host, I wanted to start taking advantage of it and see how well Podman supports IPv6. This is all as root (so not rootless).

After adding the relevant IPv6 section to /etc/cni/net.d/87-podman-bridge.conflist per the docs, starting a container causes a ping6 that was already running on the host to begin failing with `ping: sendmsg: Network is unreachable`.

Steps to reproduce the issue:

  1. add a second array to the original /etc/cni/net.d/87-podman-bridge.conflist as .plugins[0].ipam.ranges[1] (diff included below)

  2. start a ping on the host: ping6 ipv6.google.com; responses come in as expected

  3. start a container (as root): sudo podman run -it --rm docker.io/library/alpine:3.11

  4. watch the ping start failing with: ping: sendmsg: Network is unreachable

  5. additionally, the container cannot reach the internet over IPv6 either, and the only way to restore host IPv6 connectivity is systemctl restart network (see the condensed shell sketch right after this list)
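
The same steps condensed into a shell sketch (run as root; assumes the IPv6 range from the diff further down has already been added to the conflist):

```sh
# host-side IPv6 ping, left running in the background; works fine at this point
ping6 ipv6.google.com &

# start any container over the default podman bridge network (as root)
sudo podman run -it --rm docker.io/library/alpine:3.11

# the background ping now fails with: ping: sendmsg: Network is unreachable,
# and the only way I found to restore host IPv6 connectivity afterwards is:
sudo systemctl restart network
```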

Describe the results you received:

a.) the host's IPv6 networking is severely affected (in effect completely broken); this is really bad, as it could cause an outage

b.) the container still has no IPv6 connectivity either

Describe the results you expected:

The container should simply be able to reach the IPv6 network, and the host's IPv6 networking should not be affected at all.

Additional information you deem important (e.g. issue happens only occasionally):

The issue happens every time.
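
One thing that stands out in the `ip monitor` output further down is that the RA-learned IPv6 default route on eth0 is deleted right at the moment IPv6 forwarding is enabled on all interfaces (`Deleted default via fe80::21f:caff:feb2:ea40 dev eth0 proto ra ...`). I can't say for sure this is the root cause, but a quick check/workaround along those lines (assuming eth0 is the uplink, with the gateway link-local address taken from my logs):

```sh
# with accept_ra=1 the kernel stops honouring router advertisements as soon as
# forwarding is turned on, and drops the routes it had learned from them
sysctl net.ipv6.conf.eth0.accept_ra net.ipv6.conf.eth0.forwarding

# accept_ra=2 keeps accepting RAs even while forwarding is enabled
sudo sysctl -w net.ipv6.conf.eth0.accept_ra=2

# or re-add the deleted default route by hand instead of restarting networking
sudo ip -6 route add default via fe80::21f:caff:feb2:ea40 dev eth0
```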


a diff of the changes I made to 87-podman-bridge.conflist (adding my IPv6 GUA):

```diff
--- /var/tmp/orig-87-podman-bridge.conflist	2020-05-07 11:50:15.695848051 +0000
+++ 87-podman-bridge.conflist	2020-05-07 11:53:37.681314127 +0000
@@ -9,13 +9,20 @@
       "ipMasq": true,
       "ipam": {
         "type": "host-local",
-        "routes": [{ "dst": "0.0.0.0/0" }],
+        "routes": [{ "dst": "0.0.0.0/0", "dst": "::/0" }],
         "ranges": [
           [
             {
               "subnet": "10.88.0.0/16",
               "gateway": "10.88.0.1"
             }
+          ],
+          [
+            {
+              "subnet": "2a03:8c00:1a:8::/64",
+              "rangeStart": "2a03:8c00:1a:8::100",
+              "rangeEnd": "2a03:8c00:1a:8::200"
+            }
           ]
         ]
       }
```

**note**: this happens even if I _don't_ update the routes section (though ultimately I'd like my container to be reachable on the internet).
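
A side note on the routes change above: two `dst` keys in one JSON object means most parsers (including Go's, which CNI uses) keep only the last one, so that line effectively declares only `::/0`. I believe the usual dual-stack form is one route object per address family; a quick sketch of what I think that part should look like (it shouldn't matter for the breakage, since it also happens with routes left untouched):

```sh
# most JSON parsers keep only the last duplicate key, so the modified routes
# line above effectively declares just ::/0 (easy to check on CentOS 7):
echo '[{ "dst": "0.0.0.0/0", "dst": "::/0" }]' | python -m json.tool

# what I believe the dual-stack routes section is supposed to look like:
cat <<'EOF'
        "routes": [
          { "dst": "0.0.0.0/0" },
          { "dst": "::/0" }
        ],
EOF
```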
output of running podman with --log-level=debug ``` $ sudo podman run --log-level=debug -it --rm docker.io/library/alpine:3.11 DEBU[0000] Found deprecated file /usr/share/containers/libpod.conf, please remove. Use /etc/containers/containers.conf to override defaults. DEBU[0000] Reading configuration file "/usr/share/containers/libpod.conf" DEBU[0000] Reading configuration file "/etc/containers/containers.conf" DEBU[0000] Merged system config "/etc/containers/containers.conf": &{{[] [] container-default [] host [CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_S ETUID CAP_SYS_CHROOT] [] [nproc=32768:32768] [] [] [] false [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] false false false private k8s-file -1 bridge false 2048 private /usr/share/containers/seccomp.json 65536k private private 65536} { false systemd [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/curre nt-system/sw/bin/conmon] ctrl-p,ctrl-q true /var/run/libpod/events/events.log journald [/usr/share/containers/oci/hooks.d] docker:// /pause k8s.gcr.io/pause:3.2 /usr/libexec/podman/catatonit shm false 2048 runc map[crun:[/usr/bin/crun /usr/sbin/crun /usr/loc al/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime] kata-fc:[/usr/bin/kata-fc] kata-qemu:[/usr/bin/kata-qemu] kata-runtime:[/usr/bin/kata-runtime] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] missing [] [crun runc] [crun] { false false false true true true} false 3 /var/lib/containers/storage/libpod 10 /var/run/libpod /var/lib/containers/storage/volumes} {[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] podman /etc/cni/net.d/}} DEBU[0000] Using conmon: "/usr/libexec/podman/conmon" DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db DEBU[0000] Using graph driver overlay DEBU[0000] Using graph root /var/lib/containers/storage DEBU[0000] Using run root /var/run/containers/storage DEBU[0000] Using static dir /var/lib/containers/storage/libpod DEBU[0000] Using tmp dir /var/run/libpod DEBU[0000] Using volume path /var/lib/containers/storage/volumes DEBU[0000] Set libpod namespace to "" DEBU[0000] [graphdriver] trying provided driver "overlay" DEBU[0000] cached value indicated that overlay is supported DEBU[0000] cached value indicated that metacopy is not being used DEBU[0000] cached value indicated that native-diff is usable DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false DEBU[0000] Initializing event backend journald WARN[0000] Error initializing configured OCI runtime kata-runtime: no valid executable found for OCI runtime kata-runtime: invalid argument WARN[0000] Error initializing configured OCI runtime kata-qemu: no valid executable found for OCI runtime kata-qemu: invalid argument WARN[0000] Error initializing configured OCI runtime kata-fc: no valid executable found for OCI runtime kata-fc: invalid argument DEBU[0000] using runtime "/usr/bin/runc" WARN[0000] Error 
initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist WARN[0000] Default CNI network name podman is unchangeable DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]docker.io/library/alpine:3.11" DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]@f70734b6a266dcb5f44c383274821207885b549b75c8e119404917a61335981a" DEBU[0000] exporting opaque data as blob "sha256:f70734b6a266dcb5f44c383274821207885b549b75c8e119404917a61335981a" DEBU[0000] Using bridge netmode DEBU[0000] No hostname set; container's hostname will default to runtime default DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json" DEBU[0000] created OCI spec and options for new container DEBU[0000] Allocated lock 2 for container f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]@f70734b6a266dcb5f44c383274821207885b549b75c8e119404917a61335981a" DEBU[0000] exporting opaque data as blob "sha256:f70734b6a266dcb5f44c383274821207885b549b75c8e119404917a61335981a" DEBU[0000] created container "f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217" DEBU[0000] container "f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217" has work directory "/var/lib/containers/storage/overlay-containers/f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217/userdata" DEBU[0000] container "f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217" has run directory "/var/run/containers/storage/overlay-containers/f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217/userdata" DEBU[0000] New container created "f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217" DEBU[0000] container "f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217" has CgroupParent "machine.slice/libpod-f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217.scope" DEBU[0000] Handling terminal attach DEBU[0000] overlay: mount_data=nodev,lowerdir=/var/lib/containers/storage/overlay/l/NWJTW4RWZL2KWL2W3DRBEJAYS7,upperdir=/var/lib/containers/storage/overlay/8c5a0934b63847336aa0cdc69fe37aad2eb8b373bac7375ca20f766e6352e0d2/diff,workdir=/var/lib/containers/storage/overlay/8c5a0934b63847336aa0cdc69fe37aad2eb8b373bac7375ca20f766e6352e0d2/work DEBU[0000] mounted container "f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217" at "/var/lib/containers/storage/overlay/8c5a0934b63847336aa0cdc69fe37aad2eb8b373bac7375ca20f766e6352e0d2/merged" DEBU[0000] Created root filesystem for container f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 at /var/lib/containers/storage/overlay/8c5a0934b63847336aa0cdc69fe37aad2eb8b373bac7375ca20f766e6352e0d2/merged DEBU[0000] Made network namespace at /var/run/netns/cni-24d1f8fc-6e09-002a-59d9-c83225881d60 for container f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 INFO[0000] About to add CNI network lo (type=loopback) INFO[0000] Got pod network &{Name:happy_sutherland Namespace:happy_sutherland ID:f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 
NetNS:/var/run/netns/cni-24d1f8fc-6e09-002a-59d9-c83225881d60 Networks:[] RuntimeConfig:map[podman:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}]} INFO[0000] About to add CNI network podman (type=bridge) DEBU[0001] [0] CNI result: &{0.4.0 [{Name:cni-podman0 Mac:ba:04:ac:9b:88:1d Sandbox:} {Name:vethf40dd2e6 Mac:fe:b5:65:09:90:a5 Sandbox:} {Name:eth0 Mac:ae:f7:f6:50:38:fb Sandbox:/var/run/netns/cni-24d1f8fc-6e09-002a-59d9-c83225881d60}] [{Version:4 Interface:0xc000163de8 Address:{IP:10.88.0.57 Mask:ffff0000} Gateway:10.88.0.1} {Version:6 Interface:0xc000163fa0 Address:{IP:2a03:8c00:1a:8::106 Mask:ffffffffffffffff0000000000000000} Gateway:2a03:8c00:1a:8::1}] [{Dst:{IP::: Mask:00000000000000000000000000000000} GW:}] {[] [] []}} DEBU[0001] /etc/system-fips does not exist on host, not mounting FIPS mode secret DEBU[0001] Setting CGroups for container f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 to machine.slice:libpod:f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 DEBU[0001] reading hooks from /usr/share/containers/oci/hooks.d DEBU[0001] Created OCI spec for container f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 at /var/lib/containers/storage/overlay-containers/f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217/userdata/config.json DEBU[0001] /usr/libexec/podman/conmon messages will be logged to syslog DEBU[0001] running conmon: /usr/libexec/podman/conmon args="[--api-version 1 -s -c f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 -u f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217/userdata -p /var/run/containers/storage/overlay-containers/f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217/userdata/pidfile -l k8s-file:/var/lib/containers/storage/overlay-containers/f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217/userdata/ctr.log --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket --log-level debug --syslog -t --conmon-pidfile /var/run/containers/storage/overlay-containers/f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217]" INFO[0001] Running conmon under slice machine.slice and unitName libpod-conmon-f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217.scope DEBU[0002] Received: 16856 INFO[0002] Got Conmon PID as 16844 DEBU[0002] Created container f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 in OCI runtime DEBU[0002] Attaching to container f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 DEBU[0002] connecting to socket 
/var/run/libpod/socket/f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217/attach DEBU[0002] Starting container f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 with command [/bin/sh] DEBU[0002] Received a resize event: {Width:260 Height:58} DEBU[0002] Started container f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 / # DEBU[0002] Enabling signal proxying ```
here are some log entries from /var/log/messages when starting the container ``` May 7 12:05:02 shell podman: 2020-05-07 12:05:02.551688071 +0000 UTC m=+0.201037333 container create f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 (image=docker.io/library/alpine:3.11, name=happy_sutherland) May 7 12:05:02 shell kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready May 7 12:05:02 shell kernel: IPv6: ADDRCONF(NETDEV_UP): vethf40dd2e6: link is not ready May 7 12:05:02 shell kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethf40dd2e6: link becomes ready May 7 12:05:02 shell kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 7 12:05:02 shell kernel: cni-podman0: port 1(vethf40dd2e6) entered blocking state May 7 12:05:02 shell kernel: cni-podman0: port 1(vethf40dd2e6) entered disabled state May 7 12:05:02 shell kernel: device vethf40dd2e6 entered promiscuous mode May 7 12:05:02 shell kernel: cni-podman0: port 1(vethf40dd2e6) entered blocking state May 7 12:05:02 shell kernel: cni-podman0: port 1(vethf40dd2e6) entered forwarding state May 7 12:05:04 shell systemd: Started libpod-conmon-f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217.scope. May 7 12:05:04 shell conmon: conmon f9cbdbede1eccf5eb409 : addr{sun_family=AF_UNIX, sun_path=/tmp/conmon-term.1463J0} May 7 12:05:04 shell conmon: conmon f9cbdbede1eccf5eb409 : attach sock path: /var/run/libpod/socket/f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217/attach May 7 12:05:04 shell conmon: conmon f9cbdbede1eccf5eb409 : addr{sun_family=AF_UNIX, sun_path=/var/run/libpod/socket/f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217/attach} May 7 12:05:04 shell conmon: conmon f9cbdbede1eccf5eb409 : terminal_ctrl_fd: 13 May 7 12:05:04 shell conmon: conmon f9cbdbede1eccf5eb409 : winsz read side: 15, winsz write side: 15 May 7 12:05:04 shell conmon: conmon f9cbdbede1eccf5eb409 : about to accept from console_socket_fd: 9 May 7 12:05:04 shell conmon: conmon f9cbdbede1eccf5eb409 : about to recvfd from connfd: 11 May 7 12:05:04 shell systemd: Started libcontainer container f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217. 
May 7 12:05:04 shell kernel: IN=eth0 OUT= MAC=52:54:00:e9:6b:f1:00:1f:ca:b2:ea:40:08:00 SRC=94.102.56.181 DST=93.189.2.51 LEN=40 TOS=0x00 PREC=0x00 TTL=247 ID=24778 PROTO=TCP SPT=58913 DPT=5157 WINDOW=1024 RES=0x00 SYN URGP=0 May 7 12:05:04 shell conmon: conmon f9cbdbede1eccf5eb409 : console = {.name = '/dev/ptmxG?^ '; .fd = 9} May 7 12:05:04 shell conmon: conmon f9cbdbede1eccf5eb409 : container PID: 16856 May 7 12:05:04 shell podman: 2020-05-07 12:05:04.753350082 +0000 UTC m=+2.402699389 container init f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 (image=docker.io/library/alpine:3.11, name=happy_sutherland) May 7 12:05:04 shell conmon: conmon f9cbdbede1eccf5eb409 : Accepted connection 10 May 7 12:05:04 shell conmon: conmon f9cbdbede1eccf5eb409 : Got ctl message: 1 58 260#012 on fd 13 May 7 12:05:04 shell conmon: conmon f9cbdbede1eccf5eb409 : Message type: 1 May 7 12:05:04 shell conmon: conmon f9cbdbede1eccf5eb409 : Got ctl message: 58 260#012 on fd 15 May 7 12:05:04 shell conmon: conmon f9cbdbede1eccf5eb409 : Height: 58, Width: 260 May 7 12:05:04 shell podman: 2020-05-07 12:05:04.781728951 +0000 UTC m=+2.431078254 container start f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 (image=docker.io/library/alpine:3.11, name=happy_sutherland) May 7 12:05:04 shell podman: 2020-05-07 12:05:04.782121696 +0000 UTC m=+2.431471037 container attach f9cbdbede1eccf5eb4092cc4afb8030f6786f4b614c0faada62ef8f0ba2bd217 (image=docker.io/library/alpine:3.11, name=happy_sutherland) ```
additionally, this is the output of running `ip monitor` that shows all network related changes ``` fe80::21f:caff:feb2:ea40 dev eth0 lladdr 00:1f:ca:b2:ea:40 router REACHABLE lladdr ba:04:ac:9b:88:1d PERMANENT 43: cni-podman0: mtu 1500 qdisc noop state DOWN group default link/ether ba:04:ac:9b:88:1d brd ff:ff:ff:ff:ff:ff 43: cni-podman0: mtu 1500 qdisc noqueue state UNKNOWN group default link/ether ba:04:ac:9b:88:1d brd ff:ff:ff:ff:ff:ff ff00::/8 dev cni-podman0 table local metric 256 pref medium fe80::/64 dev cni-podman0 proto kernel metric 256 pref medium nsid 0 (iproute2 netns name: cni-24d1f8fc-6e09-002a-59d9-c83225881d60) 44: vethf40dd2e6@if3: mtu 1500 qdisc noop state DOWN group default link/ether fe:b5:65:09:90:a5 brd ff:ff:ff:ff:ff:ff link-netnsid 0 44: vethf40dd2e6@if3: mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default link/ether fe:b5:65:09:90:a5 brd ff:ff:ff:ff:ff:ff link-netnsid 0 ff00::/8 dev vethf40dd2e6 table local metric 256 pref medium fe80::/64 dev vethf40dd2e6 proto kernel metric 256 pref medium 44: vethf40dd2e6@if3: mtu 1500 qdisc noqueue state UP group default link/ether fe:b5:65:09:90:a5 brd ff:ff:ff:ff:ff:ff link-netnsid 0 44: vethf40dd2e6@if3: mtu 1500 qdisc noqueue master cni-podman0 state UP group default link/ether fe:b5:65:09:90:a5 brd ff:ff:ff:ff:ff:ff link-netnsid 0 44: vethf40dd2e6@if3: mtu 1500 qdisc noqueue master cni-podman0 state UP group default link/ether fe:b5:65:09:90:a5 brd ff:ff:ff:ff:ff:ff link-netnsid 0 43: cni-podman0: mtu 1500 qdisc noqueue state UNKNOWN group default link/ether ba:04:ac:9b:88:1d brd ff:ff:ff:ff:ff:ff dev vethf40dd2e6 lladdr fe:b5:65:09:90:a5 PERMANENT dev vethf40dd2e6 lladdr fe:b5:65:09:90:a5 PERMANENT Deleted dev cni-podman0 lladdr ba:04:ac:9b:88:1d PERMANENT 44: vethf40dd2e6@eth1: mtu 1500 master cni-podman0 state UP link/ether fe:b5:65:09:90:a5 44: vethf40dd2e6@eth1: mtu 1500 master cni-podman0 state UP link/ether fe:b5:65:09:90:a5 44: vethf40dd2e6@eth1: mtu 1500 master cni-podman0 state UP link/ether fe:b5:65:09:90:a5 43: cni-podman0: mtu 1500 qdisc noqueue state UNKNOWN group default link/ether fe:b5:65:09:90:a5 brd ff:ff:ff:ff:ff:ff Deleted ff02::16 dev cni-podman0 lladdr 33:33:00:00:00:16 NOARP 44: vethf40dd2e6@eth1: mtu 1500 master cni-podman0 state UP link/ether fe:b5:65:09:90:a5 dev vethf40dd2e6 lladdr ae:f7:f6:50:38:fb REACHABLE 43: cni-podman0: mtu 1500 qdisc noqueue state UP group default link/ether fe:b5:65:09:90:a5 brd ff:ff:ff:ff:ff:ff 43: cni-podman0 inet 10.88.0.1/16 brd 10.88.255.255 scope global cni-podman0 valid_lft forever preferred_lft forever local 10.88.0.1 dev cni-podman0 table local proto kernel scope host src 10.88.0.1 broadcast 10.88.255.255 dev cni-podman0 table local proto kernel scope link src 10.88.0.1 10.88.0.0/16 dev cni-podman0 proto kernel scope link src 10.88.0.1 broadcast 10.88.0.0 dev cni-podman0 table local proto kernel scope link src 10.88.0.1 dev cni-podman0 lladdr ba:04:ac:9b:88:1d PERMANENT dev cni-podman0 lladdr ba:04:ac:9b:88:1d PERMANENT 43: cni-podman0: mtu 1500 qdisc noqueue state UP group default link/ether ba:04:ac:9b:88:1d brd ff:ff:ff:ff:ff:ff Deleted ff02::16 dev cni-podman0 lladdr 33:33:00:00:00:16 NOARP Deleted ff02::1:ff9b:881d dev cni-podman0 lladdr 33:33:ff:9b:88:1d NOARP ipv4 all forwarding on ipv4 default forwarding on ipv4 dev lo forwarding on ipv4 dev eth0 forwarding on ipv4 dev eth1 forwarding on ipv4 dev dns0 forwarding on ipv4 dev cni-podman0 forwarding on ipv4 dev vethf40dd2e6 forwarding on 2a03:8c00:1a:8::/64 dev cni-podman0 proto kernel metric 256 
pref medium 43: cni-podman0 inet6 2a03:8c00:1a:8::1/64 scope global tentative valid_lft forever preferred_lft forever 43: cni-podman0: mtu 1500 qdisc noqueue state UP group default link/ether ba:04:ac:9b:88:1d brd ff:ff:ff:ff:ff:ff ipv6 dev lo forwarding on local 2a03:8c00:1a:8:: dev lo table local proto unspec metric 0 pref medium local fe80:: dev lo table local proto unspec metric 0 pref medium ipv6 dev eth0 forwarding on local fe80:: dev lo table local proto unspec metric 0 pref medium ipv6 dev eth1 forwarding on ipv6 dev cni-podman0 forwarding on ipv6 dev vethf40dd2e6 forwarding on ipv6 all forwarding on Deleted default via fe80::21f:caff:feb2:ea40 dev eth0 proto ra metric 1024 expires 1630sec hoplimit 64 pref medium 44: vethf40dd2e6 inet6 fe80::fcb5:65ff:fe09:90a5/64 scope link valid_lft forever preferred_lft forever local fe80::fcb5:65ff:fe09:90a5 dev lo table local proto unspec metric 0 pref medium local fe80:: dev lo table local proto unspec metric 0 pref medium 43: cni-podman0 inet6 fe80::b804:acff:fe9b:881d/64 scope link valid_lft forever preferred_lft forever local fe80::b804:acff:fe9b:881d dev lo table local proto unspec metric 0 pref medium local fe80:: dev lo table local proto unspec metric 0 pref medium 43: cni-podman0 inet6 2a03:8c00:1a:8::1/64 scope global valid_lft forever preferred_lft forever local 2a03:8c00:1a:8::1 dev lo table local proto unspec metric 0 pref medium local 2a03:8c00:1a:8:: dev lo table local proto unspec metric 0 pref medium ```
for the heck of it, the above three interlaced along with a flood ping (interval-time=10ms) at my gateway, to indicate exactly _when_ the ipv6 functionality on the host breaks ``` $ sudo tail -f /var/log/messages & sudo ip monitor & sudo ping6 -i 0.01 2a03:8c00:1a:6::1 & sleep 0.5 && sudo podman run --log-level=debug -d docker.io/library/alpine:3.11 && sudo pkill -9 ping6 [1] 20237 [2] 20238 [3] 20239 PING 2a03:8c00:1a:6::1(2a03:8c00:1a:6::1) 56 data bytes 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=1 ttl=64 time=0.675 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=2 ttl=64 time=0.359 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=3 ttl=64 time=0.448 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=4 ttl=64 time=0.517 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=5 ttl=64 time=0.877 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=6 ttl=64 time=0.434 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=7 ttl=64 time=0.465 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=8 ttl=64 time=0.515 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=9 ttl=64 time=0.640 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=10 ttl=64 time=0.411 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=11 ttl=64 time=2.59 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=12 ttl=64 time=0.416 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=13 ttl=64 time=0.405 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=14 ttl=64 time=0.966 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=15 ttl=64 time=0.540 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=16 ttl=64 time=0.607 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=17 ttl=64 time=0.532 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=18 ttl=64 time=0.544 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=19 ttl=64 time=0.510 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=20 ttl=64 time=0.538 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=21 ttl=64 time=0.576 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=22 ttl=64 time=0.560 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=23 ttl=64 time=0.547 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=24 ttl=64 time=0.546 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=25 ttl=64 time=0.447 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=26 ttl=64 time=0.493 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=27 ttl=64 time=0.553 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=28 ttl=64 time=0.520 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=29 ttl=64 time=0.647 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=30 ttl=64 time=0.681 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=31 ttl=64 time=0.650 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=32 ttl=64 time=0.620 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=33 ttl=64 time=0.641 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=34 ttl=64 time=0.582 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=35 ttl=64 time=0.498 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=36 ttl=64 time=0.634 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=37 ttl=64 time=0.630 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=38 ttl=64 time=0.651 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=39 ttl=64 time=0.966 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=40 ttl=64 time=0.551 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=41 ttl=64 time=0.438 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=42 ttl=64 time=0.423 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=43 ttl=64 time=0.498 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=44 ttl=64 time=0.407 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=45 ttl=64 time=0.577 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=46 ttl=64 time=0.517 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=47 ttl=64 time=0.494 ms 
64 bytes from 2a03:8c00:1a:6::1: icmp_seq=48 ttl=64 time=0.489 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=49 ttl=64 time=0.432 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=50 ttl=64 time=0.377 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=51 ttl=64 time=0.459 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=52 ttl=64 time=0.599 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=53 ttl=64 time=0.842 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=54 ttl=64 time=0.512 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=55 ttl=64 time=0.557 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=56 ttl=64 time=0.523 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=57 ttl=64 time=1.94 ms DEBU[0000] Found deprecated file /usr/share/containers/libpod.conf, please remove. Use /etc/containers/containers.conf to override defaults. DEBU[0000] Reading configuration file "/usr/share/containers/libpod.conf" DEBU[0000] Reading configuration file "/etc/containers/containers.conf" DEBU[0000] Merged system config "/etc/containers/containers.conf": &{{[] [] container-default [] host [CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] [] [nproc=32768:32768] [] [] [] false [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] false false false private k8s-file -1 bridge false 2048 private /usr/share/containers/seccomp.json 65536k private private 65536} {false systemd [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] ctrl-p,ctrl-q true /var/run/libpod/events/events.log journald [/usr/share/containers/oci/hooks.d] docker:// /pause k8s.gcr.io/pause:3.2 /usr/libexec/podman/catatonit shm false 2048 runc map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime] kata-fc:[/usr/bin/kata-fc] kata-qemu:[/usr/bin/kata-qemu] kata-runtime:[/usr/bin/kata-runtime] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] missing [] [crun runc] [crun] {false false false true true true} false 3 /var/lib/containers/storage/libpod 10 /var/run/libpod /var/lib/containers/storage/volumes} {[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] podman /etc/cni/net.d/}} 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=58 ttl=64 time=0.637 ms DEBU[0000] Using conmon: "/usr/libexec/podman/conmon" DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db DEBU[0000] Using graph driver overlay DEBU[0000] Using graph root /var/lib/containers/storage DEBU[0000] Using run root /var/run/containers/storage DEBU[0000] Using static dir /var/lib/containers/storage/libpod DEBU[0000] Using tmp dir /var/run/libpod DEBU[0000] Using volume path /var/lib/containers/storage/volumes DEBU[0000] Set libpod namespace to "" DEBU[0000] [graphdriver] trying provided driver "overlay" DEBU[0000] cached value indicated that overlay is supported DEBU[0000] cached value indicated that metacopy is not being used DEBU[0000] cached value indicated that native-diff is usable DEBU[0000] 
backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false DEBU[0000] Initializing event backend journald WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument WARN[0000] Error initializing configured OCI runtime kata-runtime: no valid executable found for OCI runtime kata-runtime: invalid argument WARN[0000] Error initializing configured OCI runtime kata-qemu: no valid executable found for OCI runtime kata-qemu: invalid argument WARN[0000] Error initializing configured OCI runtime kata-fc: no valid executable found for OCI runtime kata-fc: invalid argument DEBU[0000] using runtime "/usr/bin/runc" 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=59 ttl=64 time=0.472 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=60 ttl=64 time=0.432 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=61 ttl=64 time=0.885 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=62 ttl=64 time=0.422 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=63 ttl=64 time=0.345 ms INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist WARN[0000] Default CNI network name podman is unchangeable DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]docker.io/library/alpine:3.11" DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]@f70734b6a266dcb5f44c383274821207885b549b75c8e119404917a61335981a" DEBU[0000] exporting opaque data as blob "sha256:f70734b6a266dcb5f44c383274821207885b549b75c8e119404917a61335981a" 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=64 ttl=64 time=0.527 ms DEBU[0000] Using bridge netmode DEBU[0000] No hostname set; container's hostname will default to runtime default DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json" DEBU[0000] created OCI spec and options for new container 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=65 ttl=64 time=0.495 ms DEBU[0000] Allocated lock 3 for container b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev]@f70734b6a266dcb5f44c383274821207885b549b75c8e119404917a61335981a" DEBU[0000] exporting opaque data as blob "sha256:f70734b6a266dcb5f44c383274821207885b549b75c8e119404917a61335981a" DEBU[0000] created container "b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503" 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=66 ttl=64 time=1.75 ms DEBU[0000] container "b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503" has work directory "/var/lib/containers/storage/overlay-containers/b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503/userdata" DEBU[0000] container "b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503" has run directory "/var/run/containers/storage/overlay-containers/b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503/userdata" 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=67 ttl=64 time=0.420 ms DEBU[0000] New container created "b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503" DEBU[0000] container "b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503" has CgroupParent "machine.slice/libpod-b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503.scope" May 7 
12:18:33 shell podman: 2020-05-07 12:18:33.648266278 +0000 UTC m=+0.184733739 container create b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 (image=docker.io/library/alpine:3.11, name=quirky_sammet) DEBU[0000] overlay: mount_data=nodev,lowerdir=/var/lib/containers/storage/overlay/l/NWJTW4RWZL2KWL2W3DRBEJAYS7,upperdir=/var/lib/containers/storage/overlay/e1c605a7219566bf771a1127d2ce2e24f501f0292f2ff692d097c16b91d0a0ad/diff,workdir=/var/lib/containers/storage/overlay/e1c605a7219566bf771a1127d2ce2e24f501f0292f2ff692d097c16b91d0a0ad/work DEBU[0000] mounted container "b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503" at "/var/lib/containers/storage/overlay/e1c605a7219566bf771a1127d2ce2e24f501f0292f2ff692d097c16b91d0a0ad/merged" DEBU[0000] Created root filesystem for container b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 at /var/lib/containers/storage/overlay/e1c605a7219566bf771a1127d2ce2e24f501f0292f2ff692d097c16b91d0a0ad/merged DEBU[0000] Made network namespace at /var/run/netns/cni-ca167143-3d9a-09da-3cf4-1107a0e37197 for container b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 INFO[0000] About to add CNI network lo (type=loopback) 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=68 ttl=64 time=0.481 ms INFO[0000] Got pod network &{Name:quirky_sammet Namespace:quirky_sammet ID:b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 NetNS:/var/run/netns/cni-ca167143-3d9a-09da-3cf4-1107a0e37197 Networks:[] RuntimeConfig:map[podman:{IP: MAC: PortMappings:[] Bandwidth: IpRanges:[]}]} INFO[0000] About to add CNI network podman (type=bridge) 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=69 ttl=64 time=0.414 ms lladdr 6e:eb:22:8b:d1:bc PERMANENT 47: cni-podman0: mtu 1500 qdisc noop state DOWN group default link/ether 6e:eb:22:8b:d1:bc brd ff:ff:ff:ff:ff:ff 47: cni-podman0: mtu 1500 qdisc noqueue state UNKNOWN group default link/ether 6e:eb:22:8b:d1:bc brd ff:ff:ff:ff:ff:ff ff00::/8 dev cni-podman0 table local metric 256 pref medium fe80::/64 dev cni-podman0 proto kernel metric 256 pref medium 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=70 ttl=64 time=0.393 ms May 7 12:18:33 shell kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=71 ttl=64 time=0.379 ms nsid 0 (iproute2 netns name: cni-ca167143-3d9a-09da-3cf4-1107a0e37197) 48: veth20db8f87@if3: mtu 1500 qdisc noop state DOWN group default link/ether f2:09:58:a5:64:5a brd ff:ff:ff:ff:ff:ff link-netnsid 0 48: veth20db8f87@if3: mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default link/ether f2:09:58:a5:64:5a brd ff:ff:ff:ff:ff:ff link-netnsid 0 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=72 ttl=64 time=0.464 ms ff00::/8 dev veth20db8f87 table local metric 256 pref medium fe80::/64 dev veth20db8f87 proto kernel metric 256 pref medium 48: veth20db8f87@if3: mtu 1500 qdisc noqueue state UP group default link/ether f2:09:58:a5:64:5a brd ff:ff:ff:ff:ff:ff link-netnsid 0 May 7 12:18:33 shell kernel: IPv6: ADDRCONF(NETDEV_UP): veth20db8f87: link is not ready May 7 12:18:33 shell kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth20db8f87: link becomes ready May 7 12:18:33 shell kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 7 12:18:33 shell kernel: cni-podman0: port 1(veth20db8f87) entered blocking state 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=73 ttl=64 time=0.498 ms 48: veth20db8f87@if3: mtu 1500 qdisc noqueue master cni-podman0 state UP group default link/ether f2:09:58:a5:64:5a brd ff:ff:ff:ff:ff:ff link-netnsid 0 May 7 
12:18:33 shell kernel: cni-podman0: port 1(veth20db8f87) entered disabled state May 7 12:18:33 shell kernel: device veth20db8f87 entered promiscuous mode 48: veth20db8f87@if3: mtu 1500 qdisc noqueue master cni-podman0 state UP group default link/ether f2:09:58:a5:64:5a brd ff:ff:ff:ff:ff:ff link-netnsid 0 47: cni-podman0: mtu 1500 qdisc noqueue state UNKNOWN group default link/ether 6e:eb:22:8b:d1:bc brd ff:ff:ff:ff:ff:ff dev veth20db8f87 lladdr f2:09:58:a5:64:5a PERMANENT dev veth20db8f87 lladdr f2:09:58:a5:64:5a PERMANENT Deleted dev cni-podman0 lladdr 6e:eb:22:8b:d1:bc PERMANENT May 7 12:18:33 shell kernel: cni-podman0: port 1(veth20db8f87) entered blocking state May 7 12:18:33 shell kernel: cni-podman0: port 1(veth20db8f87) entered forwarding state 48: veth20db8f87@eth1: mtu 1500 master cni-podman0 state UP link/ether f2:09:58:a5:64:5a 48: veth20db8f87@eth1: mtu 1500 master cni-podman0 state UP link/ether f2:09:58:a5:64:5a 48: veth20db8f87@eth1: mtu 1500 master cni-podman0 state UP link/ether f2:09:58:a5:64:5a 47: cni-podman0: mtu 1500 qdisc noqueue state UNKNOWN group default link/ether f2:09:58:a5:64:5a brd ff:ff:ff:ff:ff:ff Deleted ff02::16 dev cni-podman0 lladdr 33:33:00:00:00:16 NOARP 48: veth20db8f87@eth1: mtu 1500 master cni-podman0 state UP link/ether f2:09:58:a5:64:5a 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=74 ttl=64 time=3.72 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=75 ttl=64 time=0.709 ms dev veth20db8f87 lladdr 56:74:ec:00:22:d9 REACHABLE 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=76 ttl=64 time=0.599 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=77 ttl=64 time=0.439 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=78 ttl=64 time=0.488 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=79 ttl=64 time=0.949 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=80 ttl=64 time=0.813 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=81 ttl=64 time=0.652 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=82 ttl=64 time=0.623 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=83 ttl=64 time=1.18 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=84 ttl=64 time=1.17 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=85 ttl=64 time=0.694 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=86 ttl=64 time=0.676 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=87 ttl=64 time=0.729 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=88 ttl=64 time=4.00 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=89 ttl=64 time=0.471 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=90 ttl=64 time=0.807 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=91 ttl=64 time=0.444 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=92 ttl=64 time=0.750 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=93 ttl=64 time=0.682 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=94 ttl=64 time=0.656 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=95 ttl=64 time=0.644 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=96 ttl=64 time=0.737 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=97 ttl=64 time=0.544 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=98 ttl=64 time=0.477 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=99 ttl=64 time=0.428 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=100 ttl=64 time=0.537 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=101 ttl=64 time=0.949 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=102 ttl=64 time=0.548 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=103 ttl=64 time=0.511 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=104 ttl=64 time=0.748 ms fe80::21f:caff:feb2:ea40 dev eth0 lladdr 00:1f:ca:b2:ea:40 router REACHABLE 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=105 ttl=64 time=0.511 ms 64 bytes 
from 2a03:8c00:1a:6::1: icmp_seq=106 ttl=64 time=2.33 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=107 ttl=64 time=0.583 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=108 ttl=64 time=0.535 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=109 ttl=64 time=0.521 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=110 ttl=64 time=0.514 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=111 ttl=64 time=0.491 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=112 ttl=64 time=0.483 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=113 ttl=64 time=0.579 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=114 ttl=64 time=0.654 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=115 ttl=64 time=0.617 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=116 ttl=64 time=0.507 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=117 ttl=64 time=0.569 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=118 ttl=64 time=0.539 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=119 ttl=64 time=1.02 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=120 ttl=64 time=0.632 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=121 ttl=64 time=0.585 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=122 ttl=64 time=0.538 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=123 ttl=64 time=0.642 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=124 ttl=64 time=0.615 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=125 ttl=64 time=0.506 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=126 ttl=64 time=0.514 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=127 ttl=64 time=0.510 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=128 ttl=64 time=0.483 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=129 ttl=64 time=0.574 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=130 ttl=64 time=0.731 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=131 ttl=64 time=0.610 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=132 ttl=64 time=0.712 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=133 ttl=64 time=0.686 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=134 ttl=64 time=0.695 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=135 ttl=64 time=0.677 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=136 ttl=64 time=0.559 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=137 ttl=64 time=0.524 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=138 ttl=64 time=0.486 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=139 ttl=64 time=0.511 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=140 ttl=64 time=0.509 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=141 ttl=64 time=0.907 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=142 ttl=64 time=0.579 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=143 ttl=64 time=0.618 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=144 ttl=64 time=0.435 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=145 ttl=64 time=0.383 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=146 ttl=64 time=0.383 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=147 ttl=64 time=0.345 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=148 ttl=64 time=0.314 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=149 ttl=64 time=0.362 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=150 ttl=64 time=0.929 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=151 ttl=64 time=0.435 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=152 ttl=64 time=0.424 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=153 ttl=64 time=0.353 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=154 ttl=64 time=0.352 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=155 ttl=64 time=0.387 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=156 ttl=64 time=0.334 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=157 ttl=64 time=0.366 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=158 ttl=64 time=0.384 ms 64 bytes from 
2a03:8c00:1a:6::1: icmp_seq=159 ttl=64 time=0.373 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=160 ttl=64 time=0.439 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=161 ttl=64 time=0.436 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=162 ttl=64 time=0.416 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=163 ttl=64 time=0.620 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=164 ttl=64 time=4.30 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=165 ttl=64 time=0.503 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=166 ttl=64 time=0.515 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=167 ttl=64 time=0.487 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=168 ttl=64 time=0.422 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=169 ttl=64 time=0.384 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=170 ttl=64 time=0.380 ms 47: cni-podman0: mtu 1500 qdisc noqueue state UP group default link/ether f2:09:58:a5:64:5a brd ff:ff:ff:ff:ff:ff 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=171 ttl=64 time=0.435 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=172 ttl=64 time=0.382 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=173 ttl=64 time=0.766 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=174 ttl=64 time=0.721 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=175 ttl=64 time=0.700 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=176 ttl=64 time=0.629 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=177 ttl=64 time=0.867 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=178 ttl=64 time=0.738 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=179 ttl=64 time=0.698 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=180 ttl=64 time=0.672 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=181 ttl=64 time=0.638 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=182 ttl=64 time=0.680 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=183 ttl=64 time=0.706 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=184 ttl=64 time=0.550 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=185 ttl=64 time=0.641 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=186 ttl=64 time=0.631 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=187 ttl=64 time=0.527 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=188 ttl=64 time=1.04 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=189 ttl=64 time=0.421 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=190 ttl=64 time=0.436 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=191 ttl=64 time=0.365 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=192 ttl=64 time=0.536 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=193 ttl=64 time=0.427 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=194 ttl=64 time=0.555 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=195 ttl=64 time=0.608 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=196 ttl=64 time=0.494 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=197 ttl=64 time=0.533 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=198 ttl=64 time=0.514 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=199 ttl=64 time=0.502 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=200 ttl=64 time=0.541 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=201 ttl=64 time=0.653 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=202 ttl=64 time=0.548 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=203 ttl=64 time=0.492 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=204 ttl=64 time=0.585 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=205 ttl=64 time=1.40 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=206 ttl=64 time=0.504 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=207 ttl=64 time=0.419 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=208 ttl=64 time=0.450 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=209 ttl=64 time=0.442 ms 64 bytes from 2a03:8c00:1a:6::1: 
icmp_seq=210 ttl=64 time=0.711 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=211 ttl=64 time=0.481 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=212 ttl=64 time=0.502 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=213 ttl=64 time=0.554 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=214 ttl=64 time=0.424 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=215 ttl=64 time=0.625 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=216 ttl=64 time=0.460 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=217 ttl=64 time=0.975 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=218 ttl=64 time=0.617 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=219 ttl=64 time=0.619 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=220 ttl=64 time=0.732 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=221 ttl=64 time=0.582 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=222 ttl=64 time=0.651 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=223 ttl=64 time=0.729 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=224 ttl=64 time=0.461 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=225 ttl=64 time=0.734 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=226 ttl=64 time=0.698 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=227 ttl=64 time=0.586 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=228 ttl=64 time=0.644 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=229 ttl=64 time=0.672 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=230 ttl=64 time=0.655 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=231 ttl=64 time=1.19 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=232 ttl=64 time=0.650 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=233 ttl=64 time=0.669 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=234 ttl=64 time=0.626 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=235 ttl=64 time=0.679 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=236 ttl=64 time=0.539 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=237 ttl=64 time=0.477 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=238 ttl=64 time=0.462 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=239 ttl=64 time=0.565 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=240 ttl=64 time=0.688 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=241 ttl=64 time=0.593 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=242 ttl=64 time=0.615 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=243 ttl=64 time=0.632 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=244 ttl=64 time=0.718 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=245 ttl=64 time=0.657 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=246 ttl=64 time=0.647 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=247 ttl=64 time=0.630 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=248 ttl=64 time=0.931 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=249 ttl=64 time=0.584 ms 47: cni-podman0 inet6 fe80::6ceb:22ff:fe8b:d1bc/64 scope link valid_lft forever preferred_lft forever local fe80::6ceb:22ff:fe8b:d1bc dev lo table local proto unspec metric 0 pref medium 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=250 ttl=64 time=17.6 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=251 ttl=64 time=0.690 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=252 ttl=64 time=0.691 ms 48: veth20db8f87 inet6 fe80::f009:58ff:fea5:645a/64 scope link valid_lft forever preferred_lft forever local fe80::f009:58ff:fea5:645a dev lo table local proto unspec metric 0 pref medium 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=253 ttl=64 time=0.687 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=254 ttl=64 time=0.663 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=255 ttl=64 time=0.814 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=256 ttl=64 time=0.776 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=257 ttl=64 time=0.886 
ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=258 ttl=64 time=0.681 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=259 ttl=64 time=0.972 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=260 ttl=64 time=0.831 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=261 ttl=64 time=0.945 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=262 ttl=64 time=0.639 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=263 ttl=64 time=0.649 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=264 ttl=64 time=0.730 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=265 ttl=64 time=0.839 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=266 ttl=64 time=1.09 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=267 ttl=64 time=0.952 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=268 ttl=64 time=2.16 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=269 ttl=64 time=0.856 ms 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=270 ttl=64 time=0.626 ms 47: cni-podman0 inet 10.88.0.1/16 brd 10.88.255.255 scope global cni-podman0 valid_lft forever preferred_lft forever local 10.88.0.1 dev cni-podman0 table local proto kernel scope host src 10.88.0.1 broadcast 10.88.255.255 dev cni-podman0 table local proto kernel scope link src 10.88.0.1 10.88.0.0/16 dev cni-podman0 proto kernel scope link src 10.88.0.1 broadcast 10.88.0.0 dev cni-podman0 table local proto kernel scope link src 10.88.0.1 dev cni-podman0 lladdr 6e:eb:22:8b:d1:bc PERMANENT dev cni-podman0 lladdr 6e:eb:22:8b:d1:bc PERMANENT 47: cni-podman0: mtu 1500 qdisc noqueue state UP group default link/ether 6e:eb:22:8b:d1:bc brd ff:ff:ff:ff:ff:ff Deleted ff02::1:ff8b:d1bc dev cni-podman0 lladdr 33:33:ff:8b:d1:bc NOARP Deleted ff02::16 dev cni-podman0 lladdr 33:33:00:00:00:16 NOARP Deleted ff02::2 dev cni-podman0 lladdr 33:33:00:00:00:02 NOARP ipv4 all forwarding on ipv4 default forwarding on ipv4 dev lo forwarding on ipv4 dev eth0 forwarding on ipv4 dev eth1 forwarding on ipv4 dev dns0 forwarding on ipv4 dev cni-podman0 forwarding on ipv4 dev veth20db8f87 forwarding on 2a03:8c00:1a:8::/64 dev cni-podman0 proto kernel metric 256 pref medium 47: cni-podman0 inet6 2a03:8c00:1a:8::1/64 scope global tentative valid_lft forever preferred_lft forever 47: cni-podman0: mtu 1500 qdisc noqueue state UP group default link/ether 6e:eb:22:8b:d1:bc brd ff:ff:ff:ff:ff:ff ipv6 dev lo forwarding on local 2a03:8c00:1a:8:: dev lo table local proto unspec metric 0 pref medium local fe80:: dev lo table local proto unspec metric 0 pref medium ipv6 dev eth0 forwarding on local fe80:: dev lo table local proto unspec metric 0 pref medium ipv6 dev eth1 forwarding on local fe80:: dev lo table local proto unspec metric 0 pref medium ipv6 dev cni-podman0 forwarding on local fe80:: dev lo table local proto unspec metric 0 pref medium ipv6 dev veth20db8f87 forwarding on ipv6 all forwarding on Deleted default via fe80::21f:caff:feb2:ea40 dev eth0 proto ra metric 1024 expires 1751sec hoplimit 64 pref medium 64 bytes from 2a03:8c00:1a:6::1: icmp_seq=271 ttl=64 time=7.34 ms ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable DEBU[0002] [0] CNI result: &{0.4.0 [{Name:cni-podman0 Mac:6e:eb:22:8b:d1:bc Sandbox:} {Name:veth20db8f87 Mac:f2:09:58:a5:64:5a Sandbox:} {Name:eth0 Mac:56:74:ec:00:22:d9 Sandbox:/var/run/netns/cni-ca167143-3d9a-09da-3cf4-1107a0e37197}] [{Version:4 Interface:0xc0001eed18 Address:{IP:10.88.0.59 Mask:ffff0000} Gateway:10.88.0.1} {Version:6 Interface:0xc0001eedf0 
Address:{IP:2a03:8c00:1a:8::108 Mask:ffffffffffffffff0000000000000000} Gateway:2a03:8c00:1a:8::1}] [{Dst:{IP::: Mask:00000000000000000000000000000000} GW:}] {[] [] []}} ping: sendmsg: Network is unreachable DEBU[0002] /etc/system-fips does not exist on host, not mounting FIPS mode secret DEBU[0002] Setting CGroups for container b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 to machine.slice:libpod:b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 DEBU[0002] reading hooks from /usr/share/containers/oci/hooks.d DEBU[0002] Created OCI spec for container b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 at /var/lib/containers/storage/overlay-containers/b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503/userdata/config.json DEBU[0002] /usr/libexec/podman/conmon messages will be logged to syslog DEBU[0002] running conmon: /usr/libexec/podman/conmon args="[--api-version 1 -s -c b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 -u b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503/userdata -p /var/run/containers/storage/overlay-containers/b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503/userdata/pidfile -l k8s-file:/var/lib/containers/storage/overlay-containers/b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503/userdata/ctr.log --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket --log-level debug --syslog --conmon-pidfile /var/run/containers/storage/overlay-containers/b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503]" INFO[0002] Running conmon under slice machine.slice and unitName libpod-conmon-b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503.scope ping: sendmsg: Network is unreachable May 7 12:18:35 shell systemd: Started libpod-conmon-b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503.scope. 
ping: sendmsg: Network is unreachable May 7 12:18:35 shell conmon: conmon b21c566e466fe717a657 : attach sock path: /var/run/libpod/socket/b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503/attach May 7 12:18:35 shell conmon: conmon b21c566e466fe717a657 : addr{sun_family=AF_UNIX, sun_path=/var/run/libpod/socket/b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503/attach} May 7 12:18:35 shell conmon: conmon b21c566e466fe717a657 : terminal_ctrl_fd: 14 May 7 12:18:35 shell conmon: conmon b21c566e466fe717a657 : winsz read side: 16, winsz write side: 16 ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable May 7 12:18:35 shell systemd: Started libcontainer container b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503. ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable DEBU[0002] Received: 20389 INFO[0002] Got Conmon PID as 20377 DEBU[0002] Created container b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 in OCI runtime May 7 12:18:36 shell conmon: conmon b21c566e466fe717a657 : container PID: 20389 DEBU[0002] Starting container b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 with command [/bin/sh] May 7 12:18:36 shell podman: 2020-05-07 12:18:36.370532323 +0000 UTC m=+2.906999784 container init b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 (image=docker.io/library/alpine:3.11, name=quirky_sammet) ping: sendmsg: Network is unreachable DEBU[0002] Started container b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 May 7 12:18:36 shell podman: 2020-05-07 12:18:36.392082603 +0000 UTC m=+2.928550046 container start b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 (image=docker.io/library/alpine:3.11, name=quirky_sammet) ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable [3]+ Killed sudo ping6 -i 0.01 2a03:8c00:1a:6::1 May 7 12:18:36 shell podman: 2020-05-07 12:18:36.522226644 +0000 UTC m=+0.107800307 container died b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 (image=docker.io/library/alpine:3.11, name=quirky_sammet) 48: veth20db8f87@NONE: mtu 1500 qdisc noqueue master cni-podman0 state DOWN group default link/ether f2:09:58:a5:64:5a brd ff:ff:ff:ff:ff:ff Deleted ff02::1:ffa5:645a dev veth20db8f87 lladdr 33:33:ff:a5:64:5a NOARP Deleted ff02::16 dev veth20db8f87 lladdr 33:33:00:00:00:16 NOARP Deleted ff02::2 dev veth20db8f87 lladdr 33:33:00:00:00:02 NOARP Deleted fe80::/64 dev veth20db8f87 proto kernel metric 256 pref medium Deleted ff00::/8 dev veth20db8f87 table local metric 256 pref medium Deleted 48: 
veth20db8f87 inet6 fe80::f009:58ff:fea5:645a/64 scope link valid_lft forever preferred_lft forever Deleted local fe80:: dev lo table local proto unspec metric 0 pref medium Deleted local fe80::f009:58ff:fea5:645a dev lo table local proto unspec metric 0 pref medium 48: veth20db8f87@NONE: mtu 1500 master cni-podman0 state DOWN link/ether f2:09:58:a5:64:5a Deleted dev veth20db8f87 lladdr 56:74:ec:00:22:d9 REACHABLE 48: veth20db8f87@NONE: mtu 1500 master cni-podman0 state DOWN link/ether f2:09:58:a5:64:5a May 7 12:18:36 shell kernel: cni-podman0: port 1(veth20db8f87) entered disabled state 48: veth20db8f87@NONE: mtu 1500 master cni-podman0 state DOWN link/ether f2:09:58:a5:64:5a Deleted 48: veth20db8f87@NONE: mtu 1500 master cni-podman0 state DOWN link/ether f2:09:58:a5:64:5a May 7 12:18:36 shell kernel: device veth20db8f87 left promiscuous mode May 7 12:18:36 shell kernel: cni-podman0: port 1(veth20db8f87) entered disabled state Deleted dev if48 lladdr f2:09:58:a5:64:5a PERMANENT Deleted dev if48 lladdr f2:09:58:a5:64:5a PERMANENT 47: cni-podman0: mtu 1500 qdisc noqueue state UP group default link/ether 6e:eb:22:8b:d1:bc brd ff:ff:ff:ff:ff:ff Deleted 48: veth20db8f87@NONE: mtu 1500 qdisc noop state DOWN group default link/ether f2:09:58:a5:64:5a brd ff:ff:ff:ff:ff:ff 47: cni-podman0: mtu 1500 qdisc noqueue state DOWN group default link/ether 6e:eb:22:8b:d1:bc brd ff:ff:ff:ff:ff:ff Deleted nsid 0 (iproute2 netns name: cni-ca167143-3d9a-09da-3cf4-1107a0e37197) May 7 12:18:36 shell podman: 2020-05-07 12:18:36.750932148 +0000 UTC m=+0.336505730 container cleanup b21c566e466fe717a6579e788048e80c5418fad3e81264a276d7a2e9d8072503 (image=docker.io/library/alpine:3.11, name=quirky_sammet) 47: cni-podman0 inet6 2a03:8c00:1a:8::1/64 scope global valid_lft forever preferred_lft forever local 2a03:8c00:1a:8::1 dev lo table local proto unspec metric 0 pref medium local 2a03:8c00:1a:8:: dev lo table local proto unspec metric 0 pref medium $ sudo pkill -15 'ip' [2]+ Terminated sudo ip monitor $ sudo pkill -15 tail [1]+ Terminated sudo tail -f /var/log/messages ```

per the above, the part that looks of interest to me is when the actual pinging starts failing:

[...]
ipv6 dev veth20db8f87 forwarding on
ipv6 all forwarding on
Deleted default via fe80::21f:caff:feb2:ea40 dev eth0 proto ra metric 1024 expires 1751sec hoplimit 64 pref medium
64 bytes from 2a03:8c00:1a:6::1: icmp_seq=271 ttl=64 time=7.34 ms
ping: sendmsg: Network is unreachable
[...]
also output of `podman version`: ``` Version: 1.9.0 RemoteAPI Version: 1 Go Version: go1.13.6 Git Commit: d3d78010e8fd8483456db2873b0c30937113dab1-dirty Built: Wed Apr 29 22:21:53 2020 OS/Arch: linux/amd64 ``` **note**: i am running podman 1.9 but patched with #6025 but it should not make any difference as _this_ issue being discussed now is not using rootless mode.
also output of `podman info --debug`: ``` debug: compiler: gc gitCommit: d3d78010e8fd8483456db2873b0c30937113dab1-dirty goVersion: go1.13.6 podmanVersion: 1.9.0 host: arch: amd64 buildahVersion: 1.14.8 cgroupVersion: v1 conmon: package: podman-1.9.0-1588198879.gited47046c.el7.x86_64 path: /usr/libexec/podman/conmon version: 'conmon version 2.0.7, commit: d3d78010e8fd8483456db2873b0c30937113dab1-dirty' cpus: 8 distribution: distribution: '"centos"' version: "7" eventLogger: journald hostname: shell idMappings: gidmap: - container_id: 0 host_id: 100 size: 1 - container_id: 1 host_id: 100000 size: 65536 uidmap: - container_id: 0 host_id: 1000 size: 1 - container_id: 1 host_id: 100000 size: 65536 kernel: 3.10.0-1127.el7.centos.plus.x86_64 memFree: 164622336 memTotal: 16655831040 ociRuntime: name: runc package: containerd.io-1.2.13-3.1.el7.x86_64 path: /usr/bin/runc version: |- runc version 1.0.0-rc10 commit: dc9208a3303feef5b3839f4323d9beb36df0a9dd spec: 1.0.1-dev os: linux rootless: true slirp4netns: executable: /usr/bin/slirp4netns package: slirp4netns-1.0.0-6.1.el7.x86_64 version: |- slirp4netns version 1.0.0 commit: a3be729152a33e692cd28b52f664defbf2e7810a libslirp: 4.2.0 swapFree: 0 swapTotal: 0 uptime: 65h 27m 4.83s (Approximately 2.71 days) registries: search: - registry.fedoraproject.org - registry.access.redhat.com - registry.centos.org - docker.io store: configFile: /home/cynikal/.config/containers/storage.conf containerStore: number: 7 paused: 0 running: 2 stopped: 5 graphDriverName: vfs graphOptions: {} graphRoot: /home/cynikal/.local/share/containers/storage graphStatus: {} imageStore: number: 8 runRoot: /run/user/1000 volumePath: /home/cynikal/.local/share/containers/storage/volumes ```

Package info (e.g. output of rpm -q podman or apt list podman):

podman-1.9.0-1588198879.gited47046c.el7.x86_64
containernetworking-plugins-0.8.5-145.2.el7.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.):

This is a libvirtd/KVM guest running CentOS 7 (whose hypervisor is a physical rackmount host)

mheon commented 4 years ago

@mccv1r0 Mind taking a look? This one seems very much like a CNI issue.

mccv1r0 commented 4 years ago

Looking... (meetings all a.m.) I just tested using fedora 30 and all this works. I run this setup 24x7. An IPv6 client (nc -6) from a quarantine location has native IPv6 support. It connects via the public Internet to a podman container on the "podman" bridge running on a linode VM, which provides a /48 for all my podman containers. I use this all the time so I know it works in general.

Things breaking on the host interface shouldn't happen, CNI / Podman don't touch it. How good is IPv6 support in CentOS 7? Possibly a CentOS 7 issue.

aleks-mariusz commented 4 years ago

at some point this happens (output of ip mon):

Deleted default via fe80::21f:caff:feb2:ea40 dev eth0 proto ra metric 1024 expires 1751sec hoplimit 64 pref medium

so something related to starting the container is triggering the deletion of the default route.. i don't think this can be attributed to the OS itself.. i'd say ipv6 support is pretty solid in CentOS 7.. it's been in the linux kernel since way before version 3.10, which is what CentOS 7.x standardized on (and that kernel is heavily backported by RedHat with more modern patches)..

i'm wide open to ideas on how to diagnose what could be invoking this.. it could be a simple case of my setup being broken, plain PEBKAC/user error..

@mccv1r0 that's pretty much the same setup i would like (except ultimately i'd like to do it as rootless).. would you mind sharing how your setup/configs differ from the default podman installation and what versions you're using?

mccv1r0 commented 4 years ago

Is that output from ip mon the container's eth0 or the host's eth0? I'm guessing the host's (which isn't related to podman) for several reasons.

You would have to be running ip mon inside the container(?) Not ruling it out, but doubtful. Correct me if I'm wrong.

Your output above is from the host:

fe80::21f:caff:feb2:ea40 dev eth0 lladdr 00:1f:ca:b2:ea:40 router REACHABLE
lladdr ba:04:ac:9b:88:1d PERMANENT
43: cni-podman0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default link/ether ba:04:ac:9b:88:1d brd ff:ff:ff:ff:ff:ff
43: cni-podman0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default link/ether ba:04:ac:9b:88:1d brd ff:ff:ff:ff:ff:ff
[deleted]

Another hint is expires in the message from ip mon. Unless you are running e.g. radvd on this host to advertise IPv6 prefixes to containers on the cni-podman0 bridge, the RA is from something upstream of the host's eth0. IPv6, by default, will try to auto-configure interfaces, including the host's. This is orthogonal to podman (though you may have turned IPv6 on because of using it with podman).

I'm curious, does eth0 on the host have an IPv6 Address?

For things related to podman:

That's pretty much all podman sets up. Let's focus on that first.

aleks-mariusz commented 4 years ago

yes, the ip mon output is from the host.. (i don't see how i could have run it inside the container if i'm setting the container up in the first place).

yes, eth0 has several IPv6 addresses: link-local of course, as well as a routable GUA range i was assigned for ipv6 internet reachability..

and yes, i am giving the same range to CNI in the 87-podman-bridge.conflist.. the container comes up with a GUA in that same range

after starting the container, yes the container can ping the host's cni-podman0 ipv6 address (ending in ::1) and the host can ping the cni generated IPv6 address of the container.. in that sense i guess the bridge config/veth attachment works as expected..

thing is, i'm just kind of taking a stab at using the host's GUA address but even when i tried to give it a ULA range, host ipv6 networking still broke

what address space are you using in your setup? is the /48 also shared by the host like in my setup? is it a GUA /48 or a ULA /48? how is routing happening, is it just you enabling one of the forwarding sysctl's (and doing SNAT or NPT in the case of non-GUA addresses)?

mccv1r0 commented 4 years ago

after starting the container, yes the container can ping the host's cni-podman0 ipv6 address (ending in ::1) and the host can ping the cni generated IPv6 address of the container.. in that sense i guess the bridge config/veth attachment works as expected..

So it looks like podman is FAD (functioning as designed)

podman doesn't claim (@mheon correct me if I'm wrong) to automatically set up the host to allow external systems to reach containers. For IPv4, port forwarding is used. I don't know if/when something like port forwarding is/will be supported for IPv6. We shouldn't need it, but... well, I'll spare you my soapbox.

IMHO what you are trying to do is the "Right Thing" (tm).

what address space are you using in your setup? is the /48 also shared by the host like in my setup? is it a GUA /48 or a ULA /48? how is routing happening, is it just you enabling one of the forwarding sysctl's (and doing SNAT or NPT in the case of non-GUA addresses)?

I never NAT IPv6 (unless forced to, e.g. k8s)

I do both, obviously not at the same time. Some providers give you a routable /48 or /56 for use with e.g. podman or docker, or to delegate to your own LAN. Others just give you a /64 and you need to subnet it yourself. ULA works too, but never outside your e.g. campus or cluster.

For any relevant host interfaces i.e. eth0 (but there may be more), how are the IPv6 addresses obtained? e.g. where did 2a03:8c00:1a:8::/64 come from? You mentioned you were assigned a range, but it's not clear if that was a /64 or e.g. /48.

What is the IPv6 address/prefix for eth0 (assuming eth0 is the relevant interface)? From above, I don't see any IPv6 address on eth0.

Do any of the interfaces receive IPv6 addresses via DHCPv6? Is anything upstream sending RA's? Any of these will get the kernel doing "stuff" depending on how your host is set up (explicitly or by defaults).

mheon commented 4 years ago

If CNI doesn't claim to configure the host for external v6 reachability of containers, then Podman presently doesn't make any such claims. I'd have to verify against Docker to see if we should be aiming to do so.

aleks-mariusz commented 4 years ago

this /64 was given to me by my provider (their router is on 2a03:8c00:1a:6::1), i can ping this fine and it looks like SLAAC is used to configure the 2a03:8c00:8::/64 address range that ended up on my eth0. i statically assigned those IPv6 settings in my /etc/sysconfig/network-scripts/ifcfg-eth0 file (along with IPv4 static address).

I'm not at the level of doing port forwarding, but in the IPv4 case, i can run a container and attach to it and instantly have internet connectivity (e.g. the 10.88.0.1 automatically routes to the rest of the internet, doing NAT).. However with IPv6, i can't even ping my eth0's IPv6 (slaac assigned) address.. and i have that forwarding sysctl for ipv6 set to 1..

it's as if the cni-podman0 bridge is not actually connected to the host's IPv6 network or something, since i can't ping anything outside of it.. i am not sure if that's possible if the bridge is supposed to be purely layer-2..

as far as the default route being dropped, that's the most crucial thing i'd need to figure out what is causing that..

mccv1r0 commented 4 years ago

this /64 was given to me by my provider (their router is on 2a03:8c00:1a:6::1), i can ping this fine and it looks like SLAAC is used to configure the 2a03:8c00:8::/64 address range that ended up on my eth0. i statically assigned those IPv6 settings in my /etc/sysconfig/network-scripts/ifcfg-eth0 file (along with IPv4 static address).

Did your provider give you a static IPv6 address? If not, SLAAC will suffice, but... don't put that in your ifcfg-eth0 file. The address is SLAAutoConfigured. It can change each time. All (at least dynamic) IPv6 addresses have a lifetime.

Did the provider delegate another prefix to you for use on e.g. cni-podman0? Otherwise they will have no idea how to route to you.

Check your math. The provider-supplied /64 has prefix bits 2a03:8c00:1a:6, yet you said that for eth0 you are using

"looks like SLAAC is used to configure" 2a03:8c00:8::/64

those are different 64 bit prefixes. SLAAC should only come up with the lower 64 bits from the 2a03:8c00:1a:6 prefix for the eth0 interface. The other prefix is either delegated to you (and your provider knows how to route to it) or belongs to someone else; so your provider will not route to you any packets destined to that prefix.

However with IPv6, i can't even ping my eth0's IPv6 (slaac assigned) address.. and i have that forwarding sysctl for ipv6 set to 1..

We might not be there yet, but you'll need to let ip6tables FORWARD packets between interfaces eventually. Setting the default policy to ACCEPT, for now, should ensure that the firewall isn't dropping.

$ sudo ip6tables -nvL FORWARD
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
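If the default policy there were DROP instead, a quick (and deliberately permissive, debugging-only) way to open it up would be something like the following; tighten it back up once things work:

$ sudo ip6tables -P FORWARD ACCEPT
$ sudo ip6tables -P INPUT ACCEPT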

as far as the default route being dropped, that's the most crucial thing i'd need to figure out what is causing that..

Once we know each interface behaves, we'll worry about routes. From the above it looks like the interfaces don't have prefixes assigned properly.

aleks-mariusz commented 4 years ago

per @mheon:

If CNI doesn't claim to configure the host for external v6 reachability of containers, then Podman presently doesn't make any such claims.

this statement seems at odds with:

per @mccv1r0:

An IPv6 client (nc -6) from a quarantine location has native IPv6 support. It connects via the public Internet to a podman container on the "podman" bridge running on a linode VM, which provides a /48 for all my podman containers. I use this all the time so I know it works in general.

how can i do this too? :-) what do your config files/versions/etc look like?

mccv1r0 commented 4 years ago

how can i do this too? :-) what do your config files/versions/etc look like?

podman didn't do any of it. Nor did docker or lxc before that. Even long before containers or VM's, my home Unix box just used eth0 (to ISP) and ethX (to local networks), and now, yes, a podman network too. This is just basic networking (L3 routing, specifically). There is nothing IPv6-specific; everything can be done with IPv4 as well if you have routable IPv4 prefixes. Most don't, which is why we have that other plague, NAT.

On some systems I manually configured Linux to route IPv6 packets... on others I run routing daemons. When I added the second NIC, eth1, and plugged it into an L2 switch (which is analogous to brctl addbr XXX), the host needed to be configured to route packets.

aleks-mariusz commented 4 years ago

Did your provider give you a static IPv6 address? If not, SLAAC will suffice, but... don't put that in your ifcfg-eth0 file.

Ok good point, i cleaned up my ifcfg-eth0 file (removed the address specifics and let SLAAC do its thing).. the only thing in my ifcfg-eth0 related to IPv6 now (per this redhat blog post) is:

$ grep IPV6 ifcfg-eth0
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"

and ipv6 is verified to work again without hard-coding static IPv6 addresses/ranges.. this hasn't had any effect on my default route being dropped when i start a podman container, however :-(
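(for completeness, this is roughly how i'm checking that the SLAAC address and the default route are back after that cleanup:)

$ ip -6 addr show dev eth0 scope global
$ ip -6 route show default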

Did the provider delegate another prefix to you for use on e.g. cni-podman0? Otherwise they will have no idea how to route to you.

no i've only been given one /64 (i'm ~in the process of switching~ waiting on my provider to give me a /48 to have more flexibility, but that isn't relevant to this issue for now)

Check your math. The provider-supplied /64 has prefix bits 2a03:8c00:1a:6, yet you said that for eth0 you are using

the default router i was told was at 2a03:8c00:1a:6::1, and that my range was as below

"looks like SLAAC is used to configure" 2a03:8c00:8::/64

those are different 64 bit prefixes. SLAAC should only come up with the lower 64 bits from the 2a03:8c00:1a:6 prefix for the eth0 interface. The other prefix is either delegated to you (and your provider knows how to route to it) or belongs to someone else; so your provider will not route to you any packets destined to that prefix.

I think the router is at 6::1 and i'm supposed to be on 8::/64, but i see what you're saying and you're right, i don't really need to know the router's 6::1 addr if i'm using SLAAC, since that gave me a default route via the link-local address of the router's interface anyway

However with IPv6, i can't even ping my eth0's IPv6 (slaac assigned) address.. and i have that forwarding sysctl for ipv6 set to 1..

We might not be there yet, but you'll need to let ip6tables FORWARD packets between interfaces eventually. Setting the default policy to ACCEPT, for now, should ensure that the firewall isn't dropping.

I've now also set ip6tables policy to ACCEPT on both the INPUT and FORWARD chains, and i still can't ping my eth0's GUA from within the container.. :-(

as far as the default route being dropped, that's the most crucial thing i'd need to figure out what is causing that..

Once we know each interface behaves, we'll worry about routes. From the above it looks like the interfaces don't have prefixes assigned properly.

the interface prefix assignments are now correct (hands-off, using SLAAC vs hardcoding static entries.. it seems i have a lot of IPv4 legacy thinking to undo)..

so what's happening to my default route :-(

aleks-mariusz commented 4 years ago

how can i do this too? :-) what do your config files/versions/etc look like?

podman didn't do any of it. Nor did docker or lxc before that. Long before containers or VM's even my home Unix box just uses eth0 (to ISP) and ethX (to local networks) and yea, a podman network.

i think i'm being misunderstood.. somehow you are able to accomplish this...

An IPv6 client (nc -6) from a quarantine location has native IPv6 support. It connects via the public Internet to a podman container on the "podman" bridge running on a linode VM, which provides a /48 for all my podman containers. I use this all the time so I know it works in general.

without any configuration file changes or custom settings? :-) that's what i'm asking for please

mccv1r0 commented 4 years ago

There are topology-specific settings that need to be in place. What works with SLAAC doesn't (necessarily) work with dhcpv6 and/or static, or a combination.

I'm not convinced your current prefixes are right. Regardless, if you used ULA for all your internal traffic, you should be able to reach the IPv6 address of eth0 from inside the container (assuming the firewall permits the traffic).

Assuming that the link to your provider is eth0, try adding these:

sudo sysctl -w net.ipv6.conf.eth0.accept_ra=2
sudo sysctl -w net.ipv6.conf.all.forwarding=1
sudo sysctl -w net.ipv6.conf.eth0.accept_ra_defrtr=1
sudo sysctl -w net.ipv6.conf.eth0.router_solicitations=1

You'll need accept_ra=2 so that the kernel doesn't mess with the routing table.
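Those sysctl -w settings are runtime-only; to keep them across reboots they can also go into a sysctl drop-in (the file name below is just an example) and be reloaded:

$ cat /etc/sysctl.d/99-ipv6-ra.conf
net.ipv6.conf.eth0.accept_ra = 2
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.eth0.accept_ra_defrtr = 1
net.ipv6.conf.eth0.router_solicitations = 1
$ sudo sysctl --system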

aleks-mariusz commented 4 years ago

of those sysctl's, net.ipv6.conf.eth0.accept_ra was set to 1, i set it to 2, and net.ipv6.conf.eth0.router_solicitations was set to 3, i set it to 1.. the middle two were already those values..

ipv6 on host still stops working if i start a podman container (and i can't ping anything beyond the cni-podman0 LL/GUA addr)..

curiously, if i manually re-add the default route, the host ipv6 starts working, so it's definitely that which is causing ipv6 to (what i have been calling) "stop working" on my host.. however, when i manually add back the route that was deleted, then starting the container does not affect ipv6 on the host afterwards.. not sure what to make of that.. is there something wonky with my config in the (reset) "clean slate" that causes the default route to be dropped initially (but not re-dropped after i manually add it back in)?
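(for reference, re-adding it is roughly the following, using the router's link-local address seen in the earlier ip mon output:)

$ sudo ip -6 route add default via fe80::21f:caff:feb2:ea40 dev eth0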

aleks-mariusz commented 4 years ago

Regardless, if you used ULA for all your internal traffic, you should be able to reach the IPv6 address of eth0 from inside the container (assuming the firewall permits the traffic).

negative on the firewall being the problem :-( this is my failed attempt from pinging from container to GUA address that is on eth0 ``` $ sudo ip6tables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination CNI-FORWARD all anywhere anywhere /* CNI firewall plugin rules */ Chain OUTPUT (policy ACCEPT) target prot opt source destination Chain CNI-ADMIN (1 references) target prot opt source destination Chain CNI-FORWARD (1 references) target prot opt source destination CNI-ADMIN all anywhere anywhere /* CNI firewall plugin rules */ ACCEPT all anywhere 2a03:8c00:1a:8::116 ctstate RELATED,ESTABLISHED ACCEPT all 2a03:8c00:1a:8::116 anywhere $ ip -6 a sh dev eth0 2: eth0: mtu 1500 state UP qlen 1000 inet6 2a03:8c00:1a:8:5054:ff:fee9:6bf1/64 scope global mngtmpaddr dynamic valid_lft 2591627sec preferred_lft 604427sec inet6 fe80::5054:ff:fee9:6bf1/64 scope link valid_lft forever preferred_lft forever $ sudo podman run -it --rm docker.io/library/alpine:3.11 / # ping 2a03:8c00:1a:8:5054:ff:fee9:6bf1 PING 2a03:8c00:1a:8:5054:ff:fee9:6bf1 (2a03:8c00:1a:8:5054:ff:fee9:6bf1): 56 data bytes ^C --- 2a03:8c00:1a:8:5054:ff:fee9:6bf1 ping statistics --- 42 packets transmitted, 0 packets received, 100% packet loss ```

and i've tried different ranges inside the 87-podman-bridge.conflist, including LL as well as ULA ranges.. any mention of IPv6 inside that file, regardless of the address type being used, has caused IPv6 to stop working :-( it's only now i see that re-adding the default route manually afterwards keeps it from being deleted again..

mccv1r0 commented 4 years ago

curiously, if i manually re-add the default route,

If your provider is sending RA's, the kernel should detect the RA and add the default route for you.

$ ip -6 route show 
::1 dev lo proto kernel metric 256 pref medium
[deleted]
fe80::/64 dev xxx proto kernel metric 101 pref medium
fe80::/64 dev eth0 proto kernel metric 102 pref medium
fe80::/64 dev vnet0 proto kernel metric 256 pref medium
default via fe80::1a8b:9dff:fed4:822 dev eth0 proto ra metric 1024 expires 1797sec hoplimit 64 pref medium
$ sudo ip -6 route delete default
$ ip -6 route show 
::1 dev lo proto kernel metric 256 pref medium
[deleted]
fe80::/64 dev xxx proto kernel metric 101 pref medium
fe80::/64 dev eth0 proto kernel metric 102 pref medium
fe80::/64 dev vnet0 proto kernel metric 256 pref medium
[mcc@wan2 ~]$ ip -6 route show 
::1 dev lo proto kernel metric 256 pref medium
[deleted]
fe80::/64 dev xxx proto kernel metric 101 pref medium
fe80::/64 dev eth0 proto kernel metric 102 pref medium
fe80::/64 dev vnet0 proto kernel metric 256 pref medium
default via fe80::1a8b:9dff:fed4:822 dev eth0 proto ra metric 1024 expires 1798sec hoplimit 64 pref medium
[mcc@wan2 ~]$ 

If I didn't manually delete it, the RA received by eth0 would have refreshed the timeout of the existing entry.

AFAICT there are things not quite right about your setup on eth0 or the network it attaches to. Are you receiving Router Advertisements? Make sure your firewall isn't blocking them.

This has nothing to do with podman, docker, or libvirt. None of them should be touching the host interfaces or the routes. CNI enters the network namespace of the container and runs commands in that namespace. The IPv6 config in 87-podman-bridge.conflist should only be setting the default route inside each container started, and it should set the default route nexthop to the IPv6 address of (in your case) cni-podman0.
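One way to see that separation for yourself is to compare the host's routing table with the one inside the container's network namespace; something along these lines (the namespace name here is taken from your earlier --log-level=debug output and will be different for every container):

$ ip -6 route show
$ sudo ls /var/run/netns/
$ sudo ip netns exec cni-ca167143-3d9a-09da-3cf4-1107a0e37197 ip -6 route show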

aleks-mariusz commented 4 years ago

i've confirmed my firewall is wide-open, and i'm receiving RA's:

$ sudo tcpdump -vXni eth0 icmp6 and ip6[40] == 134
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
08:35:51.960578 IP6 (class 0xe0, hlim 255, next-header ICMPv6 (58) payload length: 64) fe80::21f:caff:feb2:ea40 > ff02::1: [icmp6 sum ok] ICMP6, router advertisement, length 64
        hop limit 64, Flags [none], pref medium, router lifetime 1800s, reachable time 0ms, retrans time 0ms
          source link-address option (1), length 8 (1): 00:1f:ca:b2:ea:40
          mtu option (5), length 8 (1):  1500
          prefix info option (3), length 32 (4): 2a03:8c00:1a:8::/64, Flags [onlink, auto], valid time 2592000s, pref. time 604800s
        0x0000:  6e00 0000 0040 3aff fe80 0000 0000 0000  n....@:.........
        0x0010:  021f caff feb2 ea40 ff02 0000 0000 0000  .......@........
        0x0020:  0000 0000 0000 0001 8600 fc59 4000 0708  ...........Y@...
        0x0030:  0000 0000 0000 0000 0101 001f cab2 ea40  ...............@
        0x0040:  0501 0000 0000 05dc 0304 40c0 0027 8d00  ..........@..'..
        0x0050:  0009 3a80 0000 0000 2a03 8c00 001a 0008  ..:.....*.......
        0x0060:  0000 0000 0000 0000                      ........
^C
1 packet captured
8 packets received by filter
0 packets dropped by kernel

i've set up a wrapper script for the host-local plugin and ran it through strace; nothing unusual (at least nothing that would lead me to see why we're dropping the default ipv6 route).. it's strange because, for some reason, with all the redirections i'm doing to capture stdout and stderr in the wrapper script, the host-local process does not end up exiting (even tho i see several exits in the strace output).. pings continue to work..
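(the wrapper itself is nothing special; roughly along these lines, with the real plugin renamed to host-local.real first, and with paths and file names approximate rather than the exact script i used:)

#!/bin/sh
# hypothetical stand-in for /opt/cni/bin/host-local: log the CNI config coming
# in on stdin, the result json going out on stdout, stderr, and an strace of
# the real plugin for every invocation
D=/var/tmp
tee "$D/.host-local-in.$$" |
  strace -f -o "$D/.host-local-strace.$$" /opt/cni/bin/host-local.real "$@" 2> "$D/.host-local-err.$$" |
  tee "$D/.host-local-out.$$"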

~strangely these assignments are not including the default IPv6 route~ my default route was not a part of this as i've temporarily removed them ``` $ cat .host-local-out.19837 { "cniVersion": "0.4.0", "ips": [ { "version": "4", "address": "10.88.0.76/16", "gateway": "10.88.0.1" }, { "version": "6", "address": "2a03:8c00:1a:8::119/64", "gateway": "2a03:8c00:1a:8::1" } ], "routes": [ { "dst": "0.0.0.0/0" } ], "dns": {} } ```
it's only after i kill -9 the strace'd host-local process that the default route drops.. so i'm leaning towards saying that whatever is triggering the route to be deleted happens AFTER host-local outputs its json with the assignments.. --- in any case, to rule out this being a host- or network-specific issue, i've also set up another host (also KVM) at home (so an entirely different network), same package versions, same OS, same config for 87-podman-bridge.conflist.. also confirmed RA's are coming in (no firewall set up there either).. and the same IPv6 default route dropping :-/
mccv1r0 commented 4 years ago

There are two default routes. At this point I don't know which one we're talking about.

ip mon earlier (and iirc our discussion re RA) was about the default route in the host network namespace that should egress out eth0 on the host to your provider. Since you're using SLAAC this route should be set by the kernel when an RA is received. I've shown above that even if something deletes this, the kernel will add it back when the next RA is received.

The output re host-local only pertains to the podman container's network namespace. The two default routes are different from each other. Whatever the host route in .host-local-out.19837 is, it's just json.

Does 87-podman-bridge.conflist set "isGateway": true ? You don't show the entire file so I can't check myself.

Your conflist doesn't set "gw":"xxx:xxx:xxx:xxx::x", just "dst":"::/0". If `isGateway` is set you don't have to, it will be done for you (at least as of):

$ podman version 
Version:            1.8.0

What does ip -6 route show say inside the container? Get the output from ip addr show inside the container too.

aleks-mariusz commented 4 years ago

There are two default routes. At this point I don't know which one we're talking abut.

ip mon earlier (and iirc our discussion re RA) were about the default route in the host network namespace that should egress out eth0 on the host to your provider.

Apologies, you're spot on tho; usually when i refer to the default route it's almost always from the host perspective: the IPv6 one that's getting deleted on the host when i launch a podman container.. The default route within the container i'd never actually checked until now (i have done so below), because the host connectivity is the main focus, as it has higher outage potential.

I've shown above that even if something deletes this, the kernel will add it back when the next RA is received.

this hasn't been my experience.. ``` 64 bytes from lhr25s10-in-x0e.1e100.net (2a00:1450:4009:809::200e): icmp_seq=489 ttl=57 time=1.19 ms 64 bytes from lhr25s10-in-x0e.1e100.net (2a00:1450:4009:809::200e): icmp_seq=490 ttl=57 time=1.10 ms 64 bytes from lhr25s10-in-x0e.1e100.net (2a00:1450:4009:809::200e): icmp_seq=491 ttl=57 time=1.12 ms 64 bytes from lhr25s10-in-x0e.1e100.net (2a00:1450:4009:809::200e): icmp_seq=492 ttl=57 time=1.24 ms 64 bytes from lhr25s10-in-x0e.1e100.net (2a00:1450:4009:809::200e): icmp_seq=493 ttl=57 time=1.18 ms ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ^C --- ipv6.google.com ping statistics --- 499 packets transmitted, 493 received, 1% packet loss, time 498677ms rtt min/avg/max/mdev = 0.900/1.157/8.054/0.324 ms $ sudo tcpdump -vXni eth0 icmp6 and ip6[40] == 134 tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes 16:52:42.357785 IP6 (class 0xe0, hlim 255, next-header ICMPv6 (58) payload length: 64) fe80::21f:caff:feb2:ea40 > ff02::1: [icmp6 sum ok] ICMP6, router advertisement, length 64 hop limit 64, Flags [none], pref medium, router lifetime 1800s, reachable time 0ms, retrans time 0ms source link-address option (1), length 8 (1): 00:1f:ca:b2:ea:40 mtu option (5), length 8 (1): 1500 prefix info option (3), length 32 (4): 2a03:8c00:1a:8::/64, Flags [onlink, auto], valid time 2592000s, pref. time 604800s 0x0000: 6e00 0000 0040 3aff fe80 0000 0000 0000 n....@:......... 0x0010: 021f caff feb2 ea40 ff02 0000 0000 0000 .......@........ 0x0020: 0000 0000 0000 0001 8600 fc59 4000 0708 ...........Y@... 0x0030: 0000 0000 0000 0000 0101 001f cab2 ea40 ...............@ 0x0040: 0501 0000 0000 05dc 0304 40c0 0027 8d00 ..........@..'.. 0x0050: 0009 3a80 0000 0000 2a03 8c00 001a 0008 ..:.....*....... 0x0060: 0000 0000 0000 0000 ........ ^C 1 packet captured 1 packet received by filter 0 packets dropped by kernel $ ping6 ipv6.google.com connect: Network is unreachable ```

as you can see above, after my default route somehow gets deleted, it is not added back automatically, even though my firewall is wide open AND i confirmed RA's are coming in via tcpdump, and i've ensured the sysctl's are set as mentioned..

The output re host-local only pertains to the podman container's network namespace. The two default routes are different from each other. Whatever the host-route .host-local-out.19837 is, it's just json.

i see, this is all overall a big educational opportunity to learn more about the way CNI functions and its components..

Does 87-podman-bridge.conflist set "isGateway": true ?

it does have this..

here it is in full

{
  "cniVersion": "0.4.0",
  "name": "podman",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni-podman0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "routes": [
          { "dst": "0.0.0.0/0" },
          { "dst": "::/0" }
        ],
        "ranges": [
          [
            {
              "subnet": "10.88.0.0/16",
              "gateway": "10.88.0.1"
            }
          ],
          [
            {
              "subnet": "fd03:8c00:1a:8::/64",
              "rangeStart": "fd03:8c00:1a:8::100",
              "rangeEnd": "fd03:8c00:1a:8::200"
            }
          ]
        ]
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    },
    {
      "type": "firewall"
    },
    {
      "type": "tuning"
    }
  ]
}

(also, here i've tried to set the network range to a ULA, simply with s/2a03/fd03/g)

You don't show the entire file so I can't check myself.

sorry i haven't included it in full earlier (i included a diff from what is distributed with the podman package), but this is also why i was hoping to see what the file looks like for someone who has this working, in case there was something glaringly wrong with mine.. i haven't found any official example of what this file should look like other than the host-local plugin github page, but that one does not talk about how the address space being used relates to how the host is set up (in case it makes a difference?)

and as such, since i'm really not super familiar with the intricacies of all that CNI does, what i've been forced to do without a canonical reference is essentially throw a bunch of poo at the walls and see what sticks as i try different things and test different theories.. I would have just blamed the host and the network too, except that, as mentioned in my last update, i've gotten the same behaviour reproduced on a new blank VM with the same versions, same 87-podman-bridge.conflist, everything the same EXCEPT the network (at home this time instead of my colo box)..

Your conflist doesn't set "gw":"xxx:xxx:xxx:xxx::x", just "dst":"::/0". If `isGateway` is set you don't have to, it will be done for you (at least as of):

thanks for that clarification

$ podman version 
Version:            1.8.0
$ podman version
Version:            1.9.0

What does ip -6 route show say inside the container?

/ # ip -6 route show
fd03:8c00:1a:8::/64 dev eth0  metric 256 
fe80::/64 dev eth0  metric 256 
default via fd03:8c00:1a:8::1 dev eth0  metric 1024 
unreachable default dev lo  metric -1  error -101
ff00::/8 dev eth0  metric 256 
unreachable default dev lo  metric -1  error -101

Get the output from ip addr show inside the container too.

/ # ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if90: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 0e:7d:9e:cd:1d:f1 brd ff:ff:ff:ff:ff:ff
    inet 10.88.0.88/16 brd 10.88.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fd03:8c00:1a:8::101/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::c7d:9eff:fecd:1df1/64 scope link 
       valid_lft forever preferred_lft forever

as you can imagine, i'm pretty much at the end of my rope and out of ideas..

dsbaha commented 4 years ago

Hi, I just wanted to quickly chime in and say a few things. First, IPv6 into my containers works without a problem! Please see my current cni config:

{
  "cniVersion": "0.4.0",
  "name": "podman",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni-podman0",
      "isGateway": true,
      "ipMasq": false,
      "ipam": {
        "type": "host-local",
        "routes": [{ "dst": "0.0.0.0/0" }, {"dst": "2000::/3" }],
        "ranges": [
          [
            {
              "subnet": "10.88.0.0/16",
              "gateway": "10.88.0.1"
            }
          ],
          [
            {
              "subnet": "2601:601:9f80:3c4f::/64",
              "gateway": "2601:601:9f80:3c4f::1"
            }
          ]
        ]
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    },
    {
      "type": "firewall"
    },
    {
      "type": "tuning"
    }
  ]
}

Please be aware you have to configure ip6tables (or whatever your OS firewall is) for forwarding.
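For example, if your FORWARD policy is DROP, rules along these lines (bridge name adjusted to your setup) are the kind of thing that has to be in place:

$ sudo ip6tables -A FORWARD -i cni-podman0 -j ACCEPT
$ sudo ip6tables -A FORWARD -o cni-podman0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT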

The second issue is that I can't statically set an IPv6 address with the --ip option. If I remove the following check and re-compile, statically setting the IPv6 address works great! That check is:

https://github.com/containers/libpod/blob/v1.9/pkg/spec/namespaces.go#L85-L87

else if ip.To4() == nil {
            return nil, errors.Wrapf(define.ErrInvalidArg, "%s is not an IPv4 address", c.IPAddress)
        }

Then I can run the following command and get this output:

./podman run -ti --rm --ip 2601:601:9f80:3c4f::2 alpine /bin/sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:93:42:B6:89:30
          inet addr:10.88.0.12  Bcast:10.88.255.255  Mask:255.255.0.0
          inet6 addr: 2601:601:9f80:3c4f::2/64 Scope:Global
          inet6 addr: fe80::93:42ff:feb6:8930/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:816 (816.0 B)  TX bytes:814 (814.0 B)

I'm currently using Fedora CoreOS 31

$ rpm -qa podman
podman-1.9.2-1.fc31.x86_64
$ rpm -qa containernetworking-plugins
containernetworking-plugins-0.8.6-1.fc31.x86_64
mheon commented 4 years ago

There should be a dedicated flag for static IPv6 addresses (--ip6) but we haven't wired it in yet. It's actually a simple change - I'll see about getting it landed in master tomorrow.

aleks-mariusz commented 4 years ago

Tried @dsbaha 's cni config on my test centos 7 host (config with my ipv6 address here), still dropping the default route on the host as soon as the container is started :-(

hasn't ANYONE got a centos 7 host with ipv6 connectivity who could try to get this working??

github-actions[bot] commented 4 years ago

A friendly reminder that this issue had no activity for 30 days.

rhatdan commented 4 years ago

@mheon What is the scoop on this one?

sumit0900 commented 4 years ago

configure ip6tables

Are there any specific settings that need to be configured in ip6tables? Because when i tried to remove a container having an ipv6 ip, it gives the below error -

ERRO[0000] Error deleting network: running [/sbin/ip6tables -t nat -D POSTROUTING -s fd00::1:8:a/112 -j CNI-355124625f5423fd129aa828 -m comment --comment name: "demo" id: "23f508256866835fcedffb2fdd0f1436f3a47e5e8c99115004a8034b68fa62df" --wait]: exit status 1: iptables: Bad rule (does a matching rule exist in that chain?).
ERRO[0000] Error while removing pod from CNI network "demo": running [/sbin/ip6tables -t nat -D POSTROUTING -s fd00::1:8:a/112 -j CNI-355124625f5423fd129aa828 -m comment --comment name: "demo" id: "23f508256866835fcedffb2fdd0f1436f3a47e5e8c99115004a8034b68fa62df" --wait]: exit status 1: iptables: Bad rule (does a matching rule exist in that chain?).
ERRO[0000] unable to cleanup network for container 23f508256866835fcedffb2fdd0f1436f3a47e5e8c99115004a8034b68fa62df: "error tearing down CNI namespace configuration for container 23f508256866835fcedffb2fdd0f1436f3a47e5e8c99115004a8034b68fa62df: running [/sbin/ip6tables -t nat -D POSTROUTING -s fd00::1:8:a/112 -j CNI-355124625f5423fd129aa828 -m comment --comment name: \"demo\" id: \"23f508256866835fcedffb2fdd0f1436f3a47e5e8c99115004a8034b68fa62df\" --wait]: exit status 1: iptables: Bad rule (does a matching rule exist in that chain?).\n" 23f508256866835fcedffb2fdd0f1436f3a47e5e8c99115004a8034b68fa62df.

Please suggest

rhatdan commented 4 years ago

Looks like @apoos-maximus is going to work on part of this

rhatdan commented 3 years ago

@mheon any progress on our IPV6 support?

mheon commented 3 years ago

--ipv6 has not landed yet

github-actions[bot] commented 3 years ago

A friendly reminder that this issue had no activity for 30 days.

github-actions[bot] commented 3 years ago

A friendly reminder that this issue had no activity for 30 days.

github-actions[bot] commented 3 years ago

A friendly reminder that this issue had no activity for 30 days.

ricardo-rod commented 3 years ago

For humans and systems testing IPv6-only networks in podman: on a current fedora33 box, the solution from @dsbaha works for IPv6 and portMappings with root containers, and the host IPv6 network does not get black-holed.

@mheon has the --ipv6 flag maybe been forgotten? I will learn the Go language in order to contribute to this project.

But ipv6 for rootless is another problem; IPv6 is not working in rootless, maybe due to NAT issues or the tun/tap adapter.

mheon commented 3 years ago

@ricardo-rod I'm working on --ipv6 now, actually. Unfortunately, it's turning out to be a much larger task than I was hoping for - to support proper dual-stack solutions, we need to rewrite some parts of the library we use for calling the CNI network stack so we can support static v4 and v6 addresses simultaneously.

github-actions[bot] commented 3 years ago

A friendly reminder that this issue had no activity for 30 days.

rhatdan commented 3 years ago

@Luap99 @mheon Does this require the network redesign?

mheon commented 3 years ago

Yes

github-actions[bot] commented 3 years ago

A friendly reminder that this issue had no activity for 30 days.

github-actions[bot] commented 3 years ago

A friendly reminder that this issue had no activity for 30 days.

MartinX3 commented 3 years ago

@rhatdan Could you remove the stale label?

rhatdan commented 3 years ago

Done.

superjeng1 commented 3 years ago

Hi, I had a lot of headaches making IPv6 happen with Podman. I started with rootless and gave up, thinking rootful would be quick and simple. But I was wrong. I ran into this problem: the IPv6 connectivity of the host machine just broke as soon as the container started. I can publish the port on IPv6 alright, and connect to the published port from the host alright, but without network access, clients just can't connect. Are there any workarounds right now? Or do I have to ditch it and search for other solutions? Thanks!

github-actions[bot] commented 2 years ago

A friendly reminder that this issue had no activity for 30 days.

MartinX3 commented 2 years ago

@rhatdan Could you remove the stale label?

github-actions[bot] commented 2 years ago

A friendly reminder that this issue had no activity for 30 days.

MartinX3 commented 2 years ago

@rhatdan Could you remove the stale label?

rhatdan commented 2 years ago

@MartinX3 you just commenting on it seems to have removed the stale label...:^)

MartinX3 commented 2 years ago

@rhatdan thank you for enhancing the bot :D

rhatdan commented 2 years ago

I would love to take credit for that, but someone else did it, we just take advantage of it.

m3nu commented 2 years ago

How is the state of things here? I'm using CentOS Stream 9 with rootless containers. With IPv6 enabled on the host, containers can't use it. So not "out of the box" yet 😄