polycube-network / polycube

eBPF/XDP-based software framework for fast network services running in the Linux kernel.
Apache License 2.0

[BUG] Interfaces show DOWN #202

Closed havok4u closed 5 years ago

havok4u commented 5 years ago

Describe the bug

When adding interfaces to a Polycube bridge, the ports show DOWN and connectivity doesn't work.

To Reproduce

sudo ip netns add myapp0
sudo ip netns add myapp1

Create interfaces and up the links

sudo ip link add veth1 type veth peer name veth2
sudo ip link add veth3 type veth peer name veth4
for i in 1 2 3 4
do
  sudo ip link set veth$i up
done
sudo ip link set veth2 netns myapp0
sudo ip link set veth4 netns myapp1

Add to Polycube

polycubectl br0 ports add veth1
polycubectl br0 ports add veth3

Set IP address to namespaces

sudo ip netns exec myapp0 ip addr add 10.1.1.2/24 dev veth2
sudo ip netns exec myapp1 ip addr add 10.1.1.3/24 dev veth4

Up interfaces in namespaces if required

sudo ip netns exec myapp0 sudo ip link set veth2 up
sudo ip netns exec myapp0 ip addr show
1: lo: mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
14: veth2@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 2e:a9:da:40:ec:af brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.1.1.2/24 scope global veth2
       valid_lft forever preferred_lft forever
    inet6 fe80::2ca9:daff:fe40:ecaf/64 scope link
       valid_lft forever preferred_lft forever

sudo ip netns exec myapp1 sudo ip link set veth4 up
sudo ip netns exec myapp1 ip addr show
1: lo: mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
16: veth4@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3a:49:95:d0:5a:bf brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.1.1.3/24 scope global veth4
       valid_lft forever preferred_lft forever
    inet6 fe80::3849:95ff:fed0:5abf/64 scope link
       valid_lft forever preferred_lft forever

Verify veth1 and veth3

ip link show veth1
15: veth1@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 86:83:02:57:b4:1b brd ff:ff:ff:ff:ff:ff link-netnsid 0

ip link show veth3
17: veth3@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether e2:75:0c:f1:47:1c brd ff:ff:ff:ff:ff:ff link-netnsid 1

Ping

sudo ip netns exec myapp1 ping 10.1.1.2 -c 5
PING 10.1.1.2 (10.1.1.2) 56(84) bytes of data.

--- 10.1.1.2 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4100ms

Show Bridge

polycubectl br0 show
name: br0
uuid: 91081d7c-b164-4487-a161-3876467d338d
service-name: bridge
type: TC
loglevel: INFO
shadow: false
span: false
stp-enabled: false
mac: f2:f6:ae:20:3a:dc

fdb:
  aging-time: 300

ports:
 name   uuid                                  status  peer  mac                mode
 veth1  14720cdf-8ae2-45aa-8dad-8420e1432943  DOWN          ae:8f:a6:15:f3:a3  access
 veth3  ef0e0094-12aa-4fdd-96be-09326317f2d0  DOWN          36:18:34:87:f3:97  access

mauriciovasquezbernal commented 5 years ago

I think you forgot to set the peer.

polycubectl br0 ports add veth1 peer=veth1
polycubectl br0 ports add veth3 peer=veth3
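
If the ports were already created without a peer, it may be necessary to remove and re-add them, or to set the peer on the existing ports. A minimal sketch; the ports del and set peer= forms below are assumptions based on the usual polycubectl verbs, not something verified in this thread:

# remove the peer-less ports, then add them back pointing at the host interfaces
polycubectl br0 ports del veth1
polycubectl br0 ports del veth3
polycubectl br0 ports add veth1 peer=veth1
polycubectl br0 ports add veth3 peer=veth3

# alternatively, set the peer on the existing ports in place
polycubectl br0 ports veth1 set peer=veth1
polycubectl br0 ports veth3 set peer=veth3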
havok4u commented 5 years ago

I guess I would be curious to know why you would do that when the port is already added. Regardless, it did not work.

$ polycubectl br0 ports add veth1 peer=veth1
Port veth1 already exists
$ polycubectl br0 ports add veth3 peer=veth3
Port veth3 already exists

$ polycubectl br0 show
name: br0
uuid: 91081d7c-b164-4487-a161-3876467d338d
service-name: bridge
type: TC
loglevel: INFO
shadow: false
span: false
stp-enabled: false
mac: f2:f6:ae:20:3a:dc

fdb:
  aging-time: 300

ports:
 name   uuid                                  status  peer  mac                mode
 veth1  14720cdf-8ae2-45aa-8dad-8420e1432943  DOWN          ae:8f:a6:15:f3:a3  access
 veth3  ef0e0094-12aa-4fdd-96be-09326317f2d0  DOWN          36:18:34:87:f3:97  access

havok4u commented 5 years ago

Some missing information here: this is running in a KVM guest on a machine. The KVM guest OS info is as follows:

$ uname -a
Linux xdptest1 4.15.0-55-generic #60-Ubuntu SMP Tue Jul 2 18:22:20 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

The host machine OS is as follows:

$ uname -a
Linux node41 4.4.0-154-generic #181-Ubuntu SMP Tue Jun 25 05:29:03 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

$ cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.6 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.6 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial

havok4u commented 5 years ago

BTW, just to reiterate, I tried what @mauriciovasquezbernal suggested and it did not work. I also wanted to show that the two interfaces are up at the OS level, but obviously down from the Polycube standpoint:

$ ip link show veth1
15: veth1@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 86:83:02:57:b4:1b brd ff:ff:ff:ff:ff:ff link-netnsid 0
$ ip link show veth1
15: veth1@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 86:83:02:57:b4:1b brd ff:ff:ff:ff:ff:ff link-netnsid 0

havok4u commented 5 years ago

@mauriciovasquezbernal Sorry, I didn't remove the interfaces before I performed your commands. I deleted the interfaces and added them back in the way you instructed and they are up.
$ polycubectl br0 ports show
 mac                mode    name   peer   status  uuid
 aa:f0:84:c6:db:42  access  veth1  veth1  UP      6f1f1a31-3bbe-46ed-ad2c-183d0feae485
 8a:72:c0:80:75:a8  access  veth3  veth3  UP      222a1540-7c84-4b54-b552-dd429ad80b40

I think we are OK to close this case, but I would like to understand why we would need to add the peer for just an interface. Also, if that is the case, can we get a failure and an error message if we try to add the interface without defining a peer? Thanks

goldenrye commented 5 years ago

$ polycubectl br0 ports add veth1 peer=veth1

The first "veth1" is the name of the port (an attribute of bridge br0), and the second "veth1" refers to the veth1 interface in the host. To avoid the confusion, I think it is better to configure it with a command like the following:

$ polycubectl br0 ports add p1 peer=veth1

We cannot fail the configuration when the peer parameter is missing, because a port does not necessarily have an interface as its peer. For example, if a bridge connects to a router, either the bridge's or the router's port has to be configured without a peer at first, because the other side may not be configured yet.
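
A sketch of that naming scheme, and of a port whose peer is configured only later. The router service, the connect command, and all names here are illustrative assumptions, not taken from this thread:

# bridge port p1 attached to the host interface veth1
polycubectl br0 ports add p1 peer=veth1

# a port facing another cube can be created without a peer...
polycubectl router add r0
polycubectl br0 ports add to_r0
polycubectl r0 ports add to_br0 ip=10.1.1.1/24

# ...and the two sides connected once both exist
polycubectl connect br0:to_r0 r0:to_br0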

mauriciovasquezbernal commented 5 years ago

@havok4u I think this https://polycube.readthedocs.io/en/latest/cubes.html can clarify your doubts about the peer parameter in the ports.

I am closing this as this is really not an issue.

btw, @acloudiator I think the good first issue label should only be added when the issue is confirmed, otherwise a newcomer could get confused trying to solve something that is already working.

havok4u commented 5 years ago

Awesome, the readthedocs link works. I'll read over it. Currently I'm having issues making the bridge work in xdp_drv mode; when I use it, nothing passes. Also, xdp_skb has been sketchy about whether it works every time as well. I'll look to see if you address that in the readthedocs. Thanks


frisso commented 5 years ago

@sebymiano I remember that you tested the bridge in XDP mode. Did you note any issues? Suggestions?

havok4u commented 5 years ago

So when in Polycube TC bridge mode, all works fine. When I go into xdp_skb mode, the namespaces do not communicate other than pings (I also cannot tcpdump the other veth pair). Is XDP_SKB mode OK to use for veths?

I can see that XDP_DRV mode doesn't do veths (which makes sense), so I put the physical 10G interfaces in the bridge; I'll have to set up an external traffic generator to test there. The problem is that I can see one 10G going on that bridge with veths or taps (the guest apps), but when I put a veth in an xdp_drv bridge it fails to add them. Below are my test results:

LINUX BRIDGE -------------------------------------------------------------------------------------------------

ip netns exec myapp1 ip -4 -o addr show

10: veth2 inet 10.1.1.2/24 scope global veth2\ valid_lft forever preferred_lft forever

ip netns exec myapp2 ip -4 -o addr show

12: veth4 inet 10.1.1.3/24 scope global veth4\ valid_lft forever preferred_lft forever

root@node42:/home/tepkes# ip netns exec myapp1 ping 10.1.1.3
PING 10.1.1.3 (10.1.1.3) 56(84) bytes of data.
64 bytes from 10.1.1.3: icmp_seq=1 ttl=64 time=0.064 ms
64 bytes from 10.1.1.3: icmp_seq=2 ttl=64 time=0.044 ms
^C
--- 10.1.1.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1013ms

brctl show

bridge name  bridge id          STP enabled  interfaces
bridge0      8000.4a06cb3e461d  yes          bridge0-nic
                                             veth1
                                             veth3

POLYCUBE TC ---------------------------------------------------------------------------------------------

polycubectl br0 ports add veth1 peer=veth1

root@node42:/home/tepkes# polycubectl br0 ports add veth3 peer=veth3
root@node42:/home/tepkes# polycubectl br0 show
name: br0
uuid: 369064f8-724b-49ab-b9ac-d0aa2963d6fb
service-name: bridge
type: TC
loglevel: INFO
shadow: false
span: false
stp-enabled: false
mac: 7e:92:80:10:02:c0

fdb:
  aging-time: 300

ports:
 name   uuid                                  status  peer   mac                mode
 veth1  5a58e1fa-1034-47c0-97d4-d852000efceb  UP      veth1  86:dd:0d:c5:39:1c  access
 veth3  704e5da9-c3f5-4e3a-b9d8-96958893100e  UP      veth3  86:a9:65:e8:b5:34  access

ip netns exec myapp1 ping 10.1.1.3

PING 10.1.1.3 (10.1.1.3) 56(84) bytes of data.
64 bytes from 10.1.1.3: icmp_seq=1 ttl=64 time=0.423 ms
64 bytes from 10.1.1.3: icmp_seq=2 ttl=64 time=0.055 ms
64 bytes from 10.1.1.3: icmp_seq=3 ttl=64 time=0.050 ms
^C
--- 10.1.1.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2025ms
rtt min/avg/max/mdev = 0.050/0.176/0.423/0.174 ms

NOTES:

tcpdump -ni veth1

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on veth1, link-type EN10MB (Ethernet), capture size 262144 bytes
15:50:40.600598 IP 10.1.1.2 > 10.1.1.3: ICMP echo request, id 2947, seq 9, length 64
15:50:40.600643 IP 10.1.1.3 > 10.1.1.2: ICMP echo reply, id 2947, seq 9, length 64
15:50:41.624614 IP 10.1.1.2 > 10.1.1.3: ICMP echo request, id 2947, seq 10, length 64
15:50:41.624651 IP 10.1.1.3 > 10.1.1.2: ICMP echo reply, id 2947, seq 10, length 64
15:50:42.648608 IP 10.1.1.2 > 10.1.1.3: ICMP echo request, id 2947, seq 11, length 64
15:50:42.648644 IP 10.1.1.3 > 10.1.1.2: ICMP echo reply, id 2947, seq 11, length 64
^C
6 packets captured
6 packets received by filter
0 packets dropped by kernel

tcpdump -ni veth3

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on veth3, link-type EN10MB (Ethernet), capture size 262144 bytes
15:50:48.792624 IP 10.1.1.2 > 10.1.1.3: ICMP echo request, id 2947, seq 17, length 64
15:50:48.792651 IP 10.1.1.3 > 10.1.1.2: ICMP echo reply, id 2947, seq 17, length 64
15:50:49.816613 IP 10.1.1.2 > 10.1.1.3: ICMP echo request, id 2947, seq 18, length 64
15:50:49.816634 IP 10.1.1.3 > 10.1.1.2: ICMP echo reply, id 2947, seq 18, length 64

ip netns exec myapp1 iperf3 -c 10.1.1.3

Connecting to host 10.1.1.3, port 5201
[  4] local 10.1.1.2 port 33824 connected to 10.1.1.3 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  4.02 GBytes  34.5 Gbits/sec    0    720 KBytes

[ 4] 1.00-2.00 sec 4.14 GBytes 35.6 Gbits/sec 0 720 KBytes

[ 4] 2.00-3.00 sec 4.16 GBytes 35.8 Gbits/sec 0 720 KBytes

[ 4] 3.00-4.00 sec 4.19 GBytes 36.0 Gbits/sec 0 720 KBytes

[ 4] 4.00-5.00 sec 4.23 GBytes 36.4 Gbits/sec 0 720 KBytes

[ 4] 5.00-6.00 sec 4.17 GBytes 35.8 Gbits/sec 0 720 KBytes

[ 4] 6.00-7.00 sec 4.16 GBytes 35.8 Gbits/sec 0 720 KBytes

[ 4] 7.00-8.00 sec 4.16 GBytes 35.7 Gbits/sec 0 720 KBytes

[ 4] 8.00-9.00 sec 4.16 GBytes 35.7 Gbits/sec 0 720 KBytes

[ 4] 9.00-10.00 sec 4.17 GBytes 35.8 Gbits/sec 0 720 KBytes


[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  41.6 GBytes  35.7 Gbits/sec    0    sender
[  4]   0.00-10.00  sec  41.6 GBytes  35.7 Gbits/sec         receiver

iperf Done.

ip netns exec myapp2 iperf3 -s -B 10.1.1.3


Server listening on 5201

Accepted connection from 10.1.1.2, port 33822
[  5] local 10.1.1.3 port 5201 connected to 10.1.1.2 port 33824
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec  3.85 GBytes  33.1 Gbits/sec
[  5]   1.00-2.00   sec  4.14 GBytes  35.5 Gbits/sec
[  5]   2.00-3.00   sec  4.19 GBytes  36.0 Gbits/sec
[  5]   3.00-4.00   sec  4.16 GBytes  35.7 Gbits/sec
[  5]   4.00-5.00   sec  4.23 GBytes  36.4 Gbits/sec
[  5]   5.00-6.00   sec  4.17 GBytes  35.8 Gbits/sec
[  5]   6.00-7.00   sec  4.16 GBytes  35.7 Gbits/sec
[  5]   7.00-8.00   sec  4.16 GBytes  35.8 Gbits/sec
[  5]   8.00-9.00   sec  4.16 GBytes  35.8 Gbits/sec
[  5]   9.00-10.00  sec  4.16 GBytes  35.8 Gbits/sec
[  5]  10.00-10.04  sec   178 MBytes  37.3 Gbits/sec


[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.04  sec  0.00 Bytes   0.00 bits/sec   sender
[  5]   0.00-10.04  sec  41.6 GBytes  35.6 Gbits/sec  receiver

Server listening on 5201

POLYCUBE XDP_SKB ----------------------------------------------------------------------------------------------

polycubectl br1 ports add veth1 peer=veth1

polycubectl br1 ports add veth3 peer=veth3

polycubectl br1 show

name: br1
uuid: ec949119-7e9c-42a3-b8e7-158f21e65757
service-name: bridge
type: XDP_SKB
loglevel: INFO
shadow: false
span: false
stp-enabled: false
mac: 92:a0:ca:24:95:8c

fdb:
  aging-time: 300

ports:
 name   uuid                                  status  peer   mac                mode
 veth1  f662f905-4f52-4b2a-96ff-afd178685854  UP      veth1  92:a3:bf:af:86:97  access
 veth3  58e3fd10-6a2b-49d6-ad8b-56797849df2d  UP      veth3  36:c1:70:ca:11:d1  access

ip netns exec myapp1 ping 10.1.1.3

PING 10.1.1.3 (10.1.1.3) 56(84) bytes of data.
64 bytes from 10.1.1.3: icmp_seq=1 ttl=64 time=0.326 ms
64 bytes from 10.1.1.3: icmp_seq=2 ttl=64 time=0.050 ms
64 bytes from 10.1.1.3: icmp_seq=3 ttl=64 time=0.048 ms
64 bytes from 10.1.1.3: icmp_seq=4 ttl=64 time=0.048 ms
^C
--- 10.1.1.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3073ms
rtt min/avg/max/mdev = 0.048/0.118/0.326/0.120 ms

NOTE: tcpdump on interfaces veth1 and veth3 shows nothing even though pings are making it through, and iperf3 does not work at all.


sebymiano commented 5 years ago

@frisso Unfortunately, I have always tested the simplebridge service in either TC or XDP_DRV mode, but not in XDP_SKB mode. @havok4u Could you please try the same test with the simplebridge? Just to understand if the problem is within the service or something else.
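
A minimal sketch of that counter-test, assuming the usual polycubectl syntax for creating a cube with an explicit type; the cube name mirrors the one used later in this thread:

# simplebridge cube in XDP_SKB mode with the same veth ports
polycubectl simplebridge add sbr1 type=XDP_SKB
polycubectl sbr1 ports add veth1 peer=veth1
polycubectl sbr1 ports add veth3 peer=veth3

# repeat the ping and iperf3 tests through it
sudo ip netns exec myapp1 ping 10.1.1.3 -c 3
sudo ip netns exec myapp1 iperf3 -c 10.1.1.3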

havok4u commented 5 years ago

I will do that. Give me a few and I'll get you some results. Thanks


havok4u commented 5 years ago

OK, interesting results, but very similar. Obviously I could not test XDP_DRV, as I have no traffic generator external to the system to drive two physical interfaces placed in the bridge.

GIVEN FOR ALL TESTS

NAMESPACE: myapp1
10: veth2@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9a:4b:77:26:7a:18 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.1.1.2/24 scope global veth2
       valid_lft forever preferred_lft forever
    inet6 fe80::984b:77ff:fe26:7a18/64 scope link
       valid_lft forever preferred_lft forever

NAMESPACE: myapp2
12: veth4@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 76:43:7d:40:96:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.1.1.3/24 scope global veth4
       valid_lft forever preferred_lft forever
    inet6 fe80::7443:7dff:fe40:9650/64 scope link
       valid_lft forever preferred_lft forever

TC TEST

SETUP

name: sbr0
uuid: 3b18b950-4f45-4dde-88ab-0659eb5d2a99
service-name: simplebridge
type: TC
loglevel: INFO
shadow: false
span: false

fdb:
  aging-time: 300

ports:
 name   uuid                                  status  peer
 veth1  ccec530e-1e4a-4b6b-a166-e766b8d34100  UP      veth1
 veth3  c172b7ab-14b7-4a7f-babb-51b4efd85f7e  UP      veth3

RESULTS:

$ sudo ip netns exec myapp1 ping 10.1.1.3
PING 10.1.1.3 (10.1.1.3) 56(84) bytes of data.
64 bytes from 10.1.1.3: icmp_seq=1 ttl=64 time=0.259 ms
64 bytes from 10.1.1.3: icmp_seq=2 ttl=64 time=0.055 ms
64 bytes from 10.1.1.3: icmp_seq=3 ttl=64 time=0.046 ms
^C
--- 10.1.1.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2041ms
rtt min/avg/max/mdev = 0.046/0.120/0.259/0.098 ms

$ sudo ip netns exec myapp1 iperf3 -c 10.1.1.3
Connecting to host 10.1.1.3, port 5201
[  4] local 10.1.1.2 port 34128 connected to 10.1.1.3 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  4.36 GBytes  37.4 Gbits/sec    0    634 KBytes

[ 4] 1.00-2.00 sec 4.42 GBytes 38.0 Gbits/sec 0 634 KBytes

[ 4] 2.00-3.00 sec 4.35 GBytes 37.3 Gbits/sec 0 707 KBytes

[ 4] 3.00-4.00 sec 4.33 GBytes 37.2 Gbits/sec 0 707 KBytes

[ 4] 4.00-5.00 sec 4.45 GBytes 38.2 Gbits/sec 0 748 KBytes

[ 4] 5.00-6.00 sec 4.43 GBytes 38.1 Gbits/sec 0 748 KBytes

[ 4] 6.00-7.00 sec 4.46 GBytes 38.3 Gbits/sec 0 748 KBytes

[ 4] 7.00-8.00 sec 4.53 GBytes 38.9 Gbits/sec 0 748 KBytes

[ 4] 8.00-9.00 sec 4.37 GBytes 37.6 Gbits/sec 0 748 KBytes

[ 4] 9.00-10.00 sec 4.44 GBytes 38.2 Gbits/sec 0 748 KBytes


[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  44.1 GBytes  37.9 Gbits/sec    0    sender
[  4]   0.00-10.00  sec  44.1 GBytes  37.9 Gbits/sec         receiver

iperf Done.

XDP_SKB TEST

SETUP

name: sbr1
uuid: e4ae590e-4155-4aed-a901-63277d8a2760
service-name: simplebridge
type: XDP_SKB
loglevel: INFO
shadow: false
span: false

fdb:
  aging-time: 300

ports:
 name   uuid                                  status  peer
 veth1  8b603e53-4e7e-4c78-baac-0f59564d852a  UP      veth1
 veth3  b5db817b-5167-4942-a807-bba0a4a56298  UP      veth3

RESULTS:

$ sudo ip netns exec myapp1 ping 10.1.1.3
PING 10.1.1.3 (10.1.1.3) 56(84) bytes of data.
64 bytes from 10.1.1.3: icmp_seq=1 ttl=64 time=0.353 ms
64 bytes from 10.1.1.3: icmp_seq=2 ttl=64 time=0.044 ms
64 bytes from 10.1.1.3: icmp_seq=3 ttl=64 time=0.043 ms
^C
--- 10.1.1.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2049ms
rtt min/avg/max/mdev = 0.043/0.146/0.353/0.146 ms

IPERF TEST FAILS, TCPDUMP REVEALS THE FOLLOWING

tcpdump -ni veth1

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on veth1, link-type EN10MB (Ethernet), capture size 262144 bytes
15:20:08.658205 IP 10.1.1.2.34224 > 10.1.1.3.5201: Flags [S], seq 2676414207, win 29200, options [mss 1460,sackOK,TS val 1061499746 ecr 0,nop,wscale 7], length 0
15:20:09.688573 IP 10.1.1.2.34224 > 10.1.1.3.5201: Flags [S], seq 2676414207, win 29200, options [mss 1460,sackOK,TS val 1061500776 ecr 0,nop,wscale 7], length 0
15:20:11.704565 IP 10.1.1.2.34224 > 10.1.1.3.5201: Flags [S], seq 2676414207, win 29200, options [mss 1460,sackOK,TS val 1061502792 ecr 0,nop,wscale 7], length 0
15:20:15.736556 IP 10.1.1.2.34224 > 10.1.1.3.5201: Flags [S], seq 2676414207, win 29200, options [mss 1460,sackOK,TS val 1061506824 ecr 0,nop,wscale 7], length 0
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel

OR

tcpdump -ni veth1 -vvv

tcpdump: listening on veth1, link-type EN10MB (Ethernet), capture size 262144 bytes
15:21:10.330160 IP (tos 0x0, ttl 64, id 59881, offset 0, flags [DF], proto TCP (6), length 60)
    10.1.1.2.34226 > 10.1.1.3.5201: Flags [S], cksum 0x1635 (incorrect -> 0xf7be), seq 1558080953, win 29200, options [mss 1460,sackOK,TS val 1061561417 ecr 0,nop,wscale 7], length 0
15:21:11.352560 IP (tos 0x0, ttl 64, id 59882, offset 0, flags [DF], proto TCP (6), length 60)
    10.1.1.2.34226 > 10.1.1.3.5201: Flags [S], cksum 0x1635 (incorrect -> 0xf3c0), seq 1558080953, win 29200, options [mss 1460,sackOK,TS val 1061562439 ecr 0,nop,wscale 7], length 0
15:21:13.368561 IP (tos 0x0, ttl 64, id 59883, offset 0, flags [DF], proto TCP (6), length 60)
    10.1.1.2.34226 > 10.1.1.3.5201: Flags [S], cksum 0x1635 (incorrect -> 0xebe0), seq 1558080953, win 29200, options [mss 1460,sackOK,TS val 1061564455 ecr 0,nop,wscale 7], length 0
15:21:17.432569 IP (tos 0x0, ttl 64, id 59884, offset 0, flags [DF], proto TCP (6), length 60)
    10.1.1.2.34226 > 10.1.1.3.5201: Flags [S], cksum 0x1635 (incorrect -> 0xdc00), seq 1558080953, win 29200, options [mss 1460,sackOK,TS val 1061568519 ecr 0,nop,wscale 7], length 0

NOTE: A TCPDUMP OF VETH3 SHOWS NOTHING EVER TRAVERSING THE INTERFACE


mauriciovasquezbernal commented 5 years ago

I am not able to go through the full logs you provided, but these are some clarifications:

  • There is an issue in XDP_SKB mode (also called Generic XDP) that causes XDP programs to drop TCP packets. It has already been discussed by @mbertrone in [1].
  • XDP_DRV (also called Native XDP) does not work with veth interfaces in the way we use it (the packet originates in a namespace and should be forwarded to another namespace). The native support for XDP in veth was created with other use cases in mind: in a nutshell, when a packet comes from the external world (i.e., received through a physical interface), it can be delivered very fast to a namespace (container) that has an XDP program on its network interface. Many more details are available at [2]. To clarify, we have used the different services with XDP_DRV and physical interfaces and they work.
  • XDP programs are executed quite early (even before the SKB is allocated); for this reason tcpdump is not able to capture packets on interfaces that have XDP programs attached and that perform a redirect action.

Just as a summary:

  • TC mode works fine with physical and virtual interfaces.
  • XDP_SKB mode is mostly intended for debugging and development purposes.
  • XDP_DRV mode is the fastest one and should only be used with physical interfaces that support it.

[1] https://www.spinics.net/lists/xdp-newbies/msg00440.html
[2] https://www.netdevconf.org/0x13/session.html?talk-veth-xdp
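
As a side note on the tcpdump point, whether an XDP program is attached to an interface (and in which mode) can be checked from the host. A small sketch, assuming iproute2 and bpftool are available:

# an "xdp"/"xdpgeneric" entry appears in the link output when a program is attached
ip link show dev veth1

# bpftool lists XDP and TC programs attached to network devices
sudo bpftool net show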

havok4u commented 5 years ago

That makes a lot of sense. I figured that XDP_DRV would not work on veths and taps (it is restricted to supported cards). It does raise a question though: if I make a bridge that has a supported NIC interface and then distributes out to veth and tap interfaces, how do I make that happen? Do I create an XDP_DRV bridge (cube) for the NIC interface and connect that bridge/cube to another bridge/cube in XDP_SKB mode to ensure outside requests can reach a VM or container based infrastructure?

Thanks


mauriciovasquezbernal commented 5 years ago

Using an XDP_DRV bridge would only make sense if there are XDP programs attached to the veth interfaces inside the containers. This would allow passing the frame from the physical NIC to the veth in the container without performing an SKB allocation.

If you have standard applications in the container (by standard I mean any non-XDP program), the XDP_DRV mode would not make sense because the SKB would have to be allocated anyway, so there is no performance advantage in this case; hence a TC bridge could be used.
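
For that standard-application case, a minimal sketch of the TC alternative, joining a physical NIC and the host ends of the veth pairs on one bridge; eth1 and the cube name are illustrative, and the creation syntax is assumed from the usual polycubectl form:

# TC-mode bridge carrying both the physical uplink and the container-side veths
polycubectl bridge add br0 type=TC
polycubectl br0 ports add eth1 peer=eth1
polycubectl br0 ports add veth1 peer=veth1
polycubectl br0 ports add veth3 peer=veth3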

As a last comment, it is not possible to connect ports of cubes of different types, so unfortunately your idea will not work in this case.

havok4u commented 5 years ago

If I hear you right, there is no difference in performance between TC mode and XDP_SKB mode? If that is the case, why would I need XDP_SKB mode at all?


mauriciovasquezbernal commented 5 years ago

I didn't say this explicitly, but you're right: the performance of XDP_SKB and TC should be similar. XDP_SKB is mainly intended for testing; that's the reason for keeping it.

In production, only TC or XDP_DRV modes should be used.