ansyun / dpdk-ans

ANS(Accelerated Network Stack) on DPDK, DPDK native TCP/IP stack.
https://ansyun.com
BSD 3-Clause "New" or "Revised" License
1.15k stars 322 forks

dpdk-iperf3 client not working on Azure #91

Closed ader1990 closed 5 years ago

ader1990 commented 5 years ago

I am using DPDK 18.11 with Linux kernel 4.20.7 on an Azure VM, where I have Mellanox SRIOV nics (accelerated network).

I start the ANS env (looks fine):

./build/ans -c 0x2 -n 2 --pci-whitelist "a666:00:02.0" -- -p 0x1 --config="(0,0,1)"
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: 8 hugepages of size 1073741824 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: PCI device a666:00:02.0 on NUMA socket 0
EAL:   probe driver: 15b3:1004 net_mlx4
PMD: net_mlx4: PCI information matches, using device "mlx4_0" (VF: true)
PMD: net_mlx4: 1 port(s) detected
PMD: net_mlx4: port 1 MAC address is 00:0d:3a:fd:a8:5e

Start to Init port
         port 0:
         port name net_mlx4:
         max_rx_queues 12288: max_tx_queues:12288
         rx_offload_capa 0x12a0e: tx_offload_capa:0x800e
         Creating queues: rx queue number=1 tx queue number=1...
         MAC Address:00:0D:3A:FD:A8:5E
         Deault-- tx pthresh:0, tx hthresh:0, tx wthresh:0, tx offloads:0x0
         lcore id:1, tx queue id:0, socket id:0
         Conf-- tx pthresh:0, tx hthresh:0, tx wthresh:0, tx offloads:0xe

Allocated mbuf pool on socket 0, mbuf number: 16384

Initializing rx queues on lcore 1 ...
Default-- rx pthresh:0, rx hthresh:0, rx wthresh:0, rx offloads:0x0
Conf-- rx pthresh:0, rx hthresh:0, rx wthresh:0, rx offloads:0xe
port id:0, rx queue id: 0, socket id:0

core mask: 2, sockets number:1, lcore number:1
start to init ans
USER8: LCORE[1] lcore mask 0x2
USER8: LCORE[1] lcore id 1 is enable
USER8: LCORE[1] lcore number 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
add veth0 device, kni id 0
USER8: LCORE[1] Interface veth0 if_capabilities: 0x800e
add IP a000002 on device veth0
show all IPs:

 veth0: mtu 1500
        link/ether 00:0d:3a:fd:a8:5e
        inet addr: 10.0.0.2/24

add static route

ANS IP routing table
 10.0.0.0/24 via dev veth0 src 10.0.0.2
 10.10.0.0/24 via 10.0.0.5 dev veth0

Checking link status done
Port 0 Link Up - speed 40000 Mbps - full-duplex
USER8: main loop on lcore 1
USER8:  -- lcoreid=1 portid=0 rxqueueid=0
hz: 2294700178
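One warning in the startup log above is worth noting: "8 hugepages of size 1073741824 reserved, but no mounted hugetlbfs found for that size" means the reserved 1 GiB hugepages cannot actually be used until a hugetlbfs is mounted for that page size (ANS still starts here, presumably using the 2 MB pages instead). A sketch of the usual fix, assuming root access; the mount point name is arbitrary:

```shell
# Mount a hugetlbfs instance for 1 GiB pages so DPDK/EAL can back
# memory with them (requires root; mount point name is arbitrary).
mkdir -p /dev/hugepages1G
mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G

# Verify how many 1 GiB pages the kernel has reserved:
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
```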

Then I start DPDK Iperf3 server (looks fine):

./dpdk_iperf3  -s --bind 10.0.0.2
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_2574_1030ccd9e74
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
USER8: LCORE[-1] anssock any lcore id 0xffffffff
USER8: LCORE[0] anssock app id: 2574
USER8: LCORE[0] anssock app name: dpdk_iperf3
USER8: LCORE[0] anssock app lcoreId: 0
USER8: LCORE[0] mp ops number 4, mp ops index: 0
USER8: LCORE[0] setsockopt: not support optname 2
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

Then I start the DPDK Iperf3 client, which fails:

./iperf3 -c 10.0.0.2
iperf3: error - unable to receive control message: Transport endpoint is not connected
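The error string is the OS errno ENOTCONN. iperf3 reports it when it tries to read its control connection from a socket that never actually connected; here that is likely because the stock iperf3 client goes through the kernel stack, which cannot reach a server listening only inside the ANS user-space stack. A minimal Python reproduction of the errno itself (not of the ANS setup):

```python
import errno
import socket

# Reading from a TCP socket that was never connected yields ENOTCONN,
# which strerror() renders as "Transport endpoint is not connected" on Linux.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.recv(16)
except OSError as e:
    print(e.errno == errno.ENOTCONN)  # True
finally:
    s.close()
```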

Does anyone know what I am doing wrong? Thanks.

bluenet13 commented 5 years ago

You should run dpdk_iperf3 -c 10.0.0.2 on another VM with its own ANS instance. By the way, the IP 10.0.0.2 needs to be changed on the client side.
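In other words, the client must also be an ANS-linked binary on a second VM running its own ANS instance; the plain kernel-stack iperf3 client will not work. A sketch of the intended layout, reusing the commands and addresses from this thread (the client VM's IP is a placeholder):

```shell
# Server VM (ANS interface 10.0.0.2): start ANS, then the ANS-linked server
./build/ans -c 0x2 -n 2 --pci-whitelist "a666:00:02.0" -- -p 0x1 --config="(0,0,1)" &
./dpdk_iperf3 -s --bind 10.0.0.2

# Client VM: its own ANS instance configured with a different IP on the
# same subnet (e.g. 10.0.0.3), then the ANS-linked client pointed at the server
./dpdk_iperf3 -c 10.0.0.2
```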

bluenet13 commented 5 years ago

no feadback. close it.

ader1990 commented 5 years ago

You should run dpdk_iperf3 -c 10.0.0.2 on another VM with its own ANS instance. By the way, the IP 10.0.0.2 needs to be changed on the client side.

I have deployed two VMs on Azure. Each VM has one management NIC and one data NIC, and the data NICs of the two VMs are in the same Azure subnet (both connected to the same switch). I started an ANS environment on each VM, then started the dpdk_iperf3 server on one VM and the dpdk_iperf3 client on the other. The client failed with the error message below:

root@sender:~/dpdk-iperf# ./dpdk_iperf3 -c 10.1.0.2
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_28520_ad41f49827a
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
USER8: LCORE[-1] anssock any lcore id 0xffffffff
USER8: LCORE[2] anssock app id: 28520
USER8: LCORE[2] anssock app name: dpdk_iperf3
USER8: LCORE[2] anssock app lcoreId: 2
USER8: LCORE[2] mp ops number 4, mp ops index: 0
fcntl(F_GETFL): Bad file descriptor

ader1990 commented 5 years ago

Please note that in the same setup I was able to make testpmd and Open vSwitch with DPDK work.

bluenet13 commented 5 years ago

Please ignore the error message "fcntl(F_GETFL): Bad file descriptor". Please share the ANS startup logs from both the client and server sides, as well as the dpdk-iperf3 startup logs from both sides.