pktgen / Pktgen-DPDK

DPDK based packet generator

Pktgen doesn't start the traffic #247

Closed Venkateshv377 closed 6 months ago

Venkateshv377 commented 7 months ago

Hi @pktgen,

I'm using Pktgen (23.10.2) + DPDK (24.03.0-rc4) in a QEMU environment where I'm trying to leverage the SR-IOV feature. I have created 4 VFs, bound them to vfio-pci, and attached them to QEMU (Ubuntu 22.04 guest), while the PF is bound to the ixgbe driver on the host. In the VM, I have loaded the vfio-pci driver in 'no-iommu' mode and used the dpdk-devbind.py script to bind these 4 VFs to the vfio-pci driver. Then I'm trying to generate traffic on 2 of these ports using the pktgen application; however, I'm unable to see any packets being generated. The setup steps are roughly as sketched below, and the exact command I used follows after that.
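A minimal sketch of those host/guest steps, assuming the PF netdev is called enp1s0f0 (a placeholder) and that DPDK's usertools directory is at hand; the guest PCI addresses match the ones that appear in the log below:

    # Host: create 4 VFs on the ixgbe PF (two of them are passed through to QEMU)
    echo 4 | sudo tee /sys/class/net/enp1s0f0/device/sriov_numvfs

    # Guest (Ubuntu 22.04): load vfio-pci in no-IOMMU mode and bind the VFs to it
    sudo modprobe vfio-pci
    echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
    sudo ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:00:04.0 0000:00:05.0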

user@ubuntu-vm:~/custom/pding/Pktgen-DPDK$ ./tools/run.py default

sdk '/usr/local/lib/x86_64-linux-gnu', target 'None'
<module 'cfg' from 'cfg/default.cfg'>
Trying ./usr/local/bin/pktgen
Trying /usr/local/bin/pktgen
Trying /home/user/custom/pding/Pktgen-DPDK/build/app/pktgen
sudo -E /home/user/custom/pding/Pktgen-DPDK/build/app/pktgen -l 2,3-7 -n 4 --proc-type auto --log-level 7 --file-prefix pg -a 00:04.0 -a 00:05.0 -- -v -T -P -m [3:5].0 -m [6:7].1 -f themes/black-yellow.theme
[sudo] password for user:

Copyright(c) <2010-2023>, Intel Corporation. All rights reserved.
Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<

EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Auto-detected process type: PRIMARY
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/pg/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
../drivers/bus/pci/pci_common.c +424: pci_probe: dev 0x559d1c5d4010
../drivers/bus/pci/pci_common.c +240: rte_pci_probe_one_driver, dev->device.numa_node: -1
EAL: Using IOMMU type 8 (No-IOMMU)
EAL: Probe PCI driver: net_ixgbe_vf (8086:15c5) device: 0000:00:04.0 (socket -1)
../drivers/bus/pci/pci_common.c +424: pci_probe: dev 0x559d1c5cfa60
../drivers/bus/pci/pci_common.c +240: rte_pci_probe_one_driver, dev->device.numa_node: -1
EAL: Probe PCI driver: net_ixgbe_vf (8086:15c5) device: 0000:00:05.0 (socket -1)
TELEMETRY: No legacy callbacks, legacy socket not created

Packet Max Burst 128/128, RX Desc 1024, TX Desc 2048, mbufs/port 24576, mbuf cache 128
  0: net_ixgbe_vf   0  -1  8086:15c5/00:04.0
  1: net_ixgbe_vf   0  -1  8086:15c5/00:05.0

=== port to lcore mapping table (# lcores 6) ===
   lcore:     2       3       4       5       6       7      Total
port   0: ( D: T) ( 1: 0) ( 0: 0) ( 0: 1) ( 0: 0) ( 0: 0) = ( 1: 1)
port   1: ( D: T) ( 0: 0) ( 0: 0) ( 0: 0) ( 1: 0) ( 0: 1) = ( 1: 1)
Total   : ( 0: 0) ( 1: 0) ( 0: 0) ( 0: 1) ( 1: 0) ( 0: 1)
  Display and Timer on lcore 2, rx:tx counts per port/lcore

Configuring 2 ports, MBUF Size 10240, MBUF Cache Size 128
Lcore:
    3, RX-Only  RX_cnt( 1): (pid= 0:qid= 0)
    5, TX-Only  TX_cnt( 1): (pid= 0:qid= 0)
    6, RX-Only  RX_cnt( 1): (pid= 1:qid= 0)
    7, TX-Only  TX_cnt( 1): (pid= 1:qid= 0)

Port :
    0, nb_lcores  2, private 0x559d13724f80, lcores:  3  5
    1, nb_lcores  2, private 0x559d13afa3c0, lcores:  6  7

Initialize Port 0 -- RxQ 1, TxQ 1
Create: 'Default RX  0:0 ' - Memory used (MBUFs  24576 x size  10240) = 245761 KB
Create: 'Special TX  0:0 ' - Memory used (MBUFs   1024 x size  10240) =  10241 KB

Create: 'Default TX  0:0 ' - Memory used (MBUFs  24576 x size  10240) = 245761 KB
Create: 'Range TX    0:0 ' - Memory used (MBUFs  24576 x size  10240) = 245761 KB
Create: 'Rate TX     0:0 ' - Memory used (MBUFs  24576 x size  10240) = 245761 KB
Create: 'Sequence TX 0:0 ' - Memory used (MBUFs  24576 x size  10240) = 245761 KB
                                                       Port memory used = 1239046 KB

Src MAC 02:09:c0:f6:6a:66
Device Info (00:04.0, if_index:0, flags 00000064)
   min_rx_bufsize : 1024   max_rx_pktlen     : 9728   hash_key_size : 40
   max_rx_queues  : 4      max_tx_queues     : 4      max_vfs       : 0
   max_mac_addrs  : 128    max_hash_mac_addrs: 4096   max_vmdq_pools: 64
   vmdq_queue_base: 0      vmdq_queue_num    : 0      vmdq_pool_base: 0
   nb_rx_queues   : 1      nb_tx_queues      : 1      speed_capa    : 00000000

   flow_type_rss_offloads: 0000000000038d34   reta_size : 64
   rx_offload_capa      : VLAN_STRIP IPV4_CKSUM UDP_CKSUM TCP_CKSUM VLAN_FILTER SCATTER KEEP_CRC RSS_HASH
   tx_offload_capa      : VLAN_INSERT IPV4_CKSUM UDP_CKSUM TCP_CKSUM SCTP_CKSUM TCP_TSO MULTI_SEGS
   rx_queue_offload_capa: 0000000000000001
   tx_queue_offload_capa: 0000000000000000
   dev_capa             : 0000000000000000

   RX Conf: pthresh :  8   hthresh : 8   wthresh : 0
            Free Thresh : 32   Drop Enable : 0   Deferred Start : 0   offloads : 0000000000000000
   TX Conf: pthresh : 32   hthresh : 0   wthresh : 0
            Free Thresh : 32   RS Thresh : 32   Deferred Start : 0   offloads : 0000000000000000
   Rx: descriptor Limits   nb_max : 4096   nb_min : 32   nb_align : 8   nb_seg_max :  0   nb_mtu_seg_max :  0
   Tx: descriptor Limits   nb_max : 4096   nb_min : 32   nb_align : 8   nb_seg_max : 40   nb_mtu_seg_max : 40
   Rx: Port Config   burst_size : 0   ring_size : 0   nb_queues : 0
   Tx: Port Config   burst_size : 0   ring_size : 0   nb_queues : 0
   Switch Info: (null)   domain_id : 65535   port_id : 0

Initialize Port 1 -- RxQ 1, TxQ 1
Create: 'Default RX  1:0 ' - Memory used (MBUFs  24576 x size  10240) = 245761 KB
Create: 'Special TX  1:0 ' - Memory used (MBUFs   1024 x size  10240) =  10241 KB

Create: 'Default TX  1:0 ' - Memory used (MBUFs  24576 x size  10240) = 245761 KB
Create: 'Range TX    1:0 ' - Memory used (MBUFs  24576 x size  10240) = 245761 KB
Create: 'Rate TX     1:0 ' - Memory used (MBUFs  24576 x size  10240) = 245761 KB
Create: 'Sequence TX 1:0 ' - Memory used (MBUFs  24576 x size  10240) = 245761 KB

                                                     Port memory used = 1239046 KB

Src MAC 02:09:c0:28:a4:f0
Device Info (00:05.0, if_index:0, flags 00000064)
   min_rx_bufsize : 1024   max_rx_pktlen     : 9728   hash_key_size : 40
   max_rx_queues  : 4      max_tx_queues     : 4      max_vfs       : 0
   max_mac_addrs  : 128    max_hash_mac_addrs: 4096   max_vmdq_pools: 64
   vmdq_queue_base: 0      vmdq_queue_num    : 0      vmdq_pool_base: 0
   nb_rx_queues   : 1      nb_tx_queues      : 1      speed_capa    : 00000000

   flow_type_rss_offloads: 0000000000038d34   reta_size : 64
   rx_offload_capa      : VLAN_STRIP IPV4_CKSUM UDP_CKSUM TCP_CKSUM VLAN_FILTER SCATTER KEEP_CRC RSS_HASH
   tx_offload_capa      : VLAN_INSERT IPV4_CKSUM UDP_CKSUM TCP_CKSUM SCTP_CKSUM TCP_TSO MULTI_SEGS
   rx_queue_offload_capa: 0000000000000001
   tx_queue_offload_capa: 0000000000000000
   dev_capa             : 0000000000000000

   RX Conf: pthresh :  8   hthresh : 8   wthresh : 0
            Free Thresh : 32   Drop Enable : 0   Deferred Start : 0   offloads : 0000000000000000
   TX Conf: pthresh : 32   hthresh : 0   wthresh : 0
            Free Thresh : 32   RS Thresh : 32   Deferred Start : 0   offloads : 0000000000000000
   Rx: descriptor Limits   nb_max : 4096   nb_min : 32   nb_align : 8   nb_seg_max :  0   nb_mtu_seg_max :  0
   Tx: descriptor Limits   nb_max : 4096   nb_min : 32   nb_align : 8   nb_seg_max : 40   nb_mtu_seg_max : 40
   Rx: Port Config   burst_size : 0   ring_size : 0   nb_queues : 0
   Tx: Port Config   burst_size : 0   ring_size : 0   nb_queues : 0
   Switch Info: (null)   domain_id : 65535   port_id : 0

                                                                  Total memory used = 2478092 KB

=== Display processing on lcore 2
WARNING: Nothing to do on lcore 4: exiting
RX processing lcore 3: rx: 1 tx: 0
Using port/qid 0/0 for Rx on lcore id 3

TX processing lcore 5: rx: 0 tx: 1
Using port/qid 0/0 for Tx on lcore id 5

RX processing lcore 6: rx: 1 tx: 0
Using port/qid 1/0 for Rx on lcore id 6

TX processing lcore 7: rx: 0 tx: 1
Using port/qid 1/0 for Tx on lcore id 7

-- Pktgen 23.10.2 (DPDK 24.03.0-rc4)  Powered by DPDK  (pid:8332) -------------
Pktgen:/> theme stats.dyn.label blue none bold
Pktgen:/> theme stats.dyn.values green none off
Pktgen:/> theme stats.stat.label magenta none off
Pktgen:/> theme stats.stat.values white none off
Pktgen:/> theme stats.total.label red none bold
Pktgen:/> theme stats.total.data blue none bold
Pktgen:/> theme stats.colon blue none bold
Pktgen:/> theme stats.rate.count blue none bold
Pktgen:/> theme stats.bdf blue none off
Pktgen:/> theme stats.mac green none off
Pktgen:/> theme stats.ip cyan none off
Pktgen:/> theme pktgen.prompt green none off
Pktgen:/> cls

[screenshot]

Irrespective of the number of VFs attached to DPDK, the issue remains the same. Since the peer end is a single physical port, I have configured the same destination MAC for both ports. Please let me know what I'm missing here. I want to generate traffic with L2/L3 properties.

Thanks, venkatesh

KeithWiles commented 7 months ago

Thanks for the note; it appears everything is set up correctly. The only thing I noticed was that port 0 uses lcores 3 and 5, so lcore 4 appears to be skipped in your configuration. This should not be an issue, only something I found odd.

Pktgen uses DPDK for TX of packets, and the counters above come from the hardware registers on the NIC port. This suggests that DPDK is not able to send packets via the NIC, even though the NIC appears to be up and running and the configuration seems fine as well.

You can add some code to the TX-only routine in the pktgen.c file to see if Pktgen is attempting to send packets. If it is attempting to send packets, then something is preventing the packets from being sent.

Venkateshv377 commented 7 months ago

I was not sure where to make the changes; however, it looks like the issue persists only with VFs, whereas PFs don't have this issue. I tried launching the VM by attaching the PF of the NIC to QEMU and then binding the interface to DPDK in the VM. With that, I was able to launch Pktgen and generate traffic. So I'm wondering: does Pktgen support generating traffic on VFs?

KeithWiles commented 7 months ago

I use VFs all of the time outside of VMs.

Venkateshv377 commented 7 months ago

In my case, I have the PF attached to the host using the ixgbe driver; the interface is up and able to ping the destination IP from the PF interface. The VFs are attached to the VM: when the VF interfaces are bound to ixgbe, they are able to ping the destination IP, but when they are bound to DPDK and used with Pktgen, the link seems to be up while traffic doesn't start. Is there a ping command in Pktgen to verify whether ping works or not?

KeithWiles commented 7 months ago

If you do not see the counters moving, then it is not sending traffic. Pktgen, if enabled, will respond to a ping request; make sure you run 'enable <portlist> process'.

Venkateshv377 commented 7 months ago

I have tried sending ping packets using ping4 0; however, that didn't increment the Tx count. A ping from the peer-end interface increments the Rx count in Pktgen, but since Pktgen generates no Tx packets, the ping from the peer end fails with "Destination host unreachable".

I tried the following commands in Pktgen:

    enable 0 process
    enable 0 icmp
    ping4 0

and on the peer end used: ping -I ens4 1.1.1.20

[screenshot]

KeithWiles commented 7 months ago

Try the 'page stats' and/or 'page xstats' pages to see if they show more information.

Pktgen uses DPDK to send and receive packets; if DPDK is not sending the packets, this normally means the driver in DPDK or the NIC is dropping the TX packets. Some NICs have safeguards to prevent corrupt or spoofed packets from being sent. Some NICs record these as error packets, some do not.

One possibility is that the packet is too short to be sent (a runt); try increasing the packet size to 68 bytes with 'set 0 size 68' (make sure you stop the traffic or restart Pktgen, change the size, then start sending again). The other possible reason the NIC can drop packets is that the source MAC address does not match the port's MAC address. I thought we verified that point, but check again.
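As a quick way to check the source-MAC point from the host side, iproute2 can show the MAC the PF has recorded for each VF and, if needed, relax the anti-spoof check. A sketch, where enp1s0f0 stands in for the PF netdev name and the MAC is the port 0 source MAC printed by Pktgen above:

    # List the VFs behind the PF and the MAC / spoof-checking state for each one
    ip link show enp1s0f0

    # If Pktgen's source MAC differs, either program that MAC on the VF from the host ...
    sudo ip link set enp1s0f0 vf 0 mac 02:09:c0:f6:6a:66

    # ... or turn off anti-spoof checking for that VF (where the PF driver supports it)
    sudo ip link set enp1s0f0 vf 0 spoofchk off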

Venkateshv377 commented 7 months ago

I relaunched the VM so that I could bring the interfaces up with a fresh configuration. I took the MAC address that was assigned when the interface was brought up with the ixgbevf (default Linux) driver. After launching Pktgen, I used that MAC address as the source MAC, with the destination MAC pointing to the interface on the other side. I then ran the following commands before starting the traffic; however, I'm still unable to generate any Tx packets.

user@ubuntu-vm:~$ ifconfig -a
ens4: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether fe:fa:3c:c3:07:b0  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens5: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 16:75:43:20:9a:3f  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

Both ens4 and ens5 are VFs sitting on a single PF, whereas d8:84:66:f6:b8:08 is the MAC address of the peer-end interface of the PF.

set 0 src mac fe:fa:3c:c3:07:b0
set 0 dst mac d8:84:66:f6:b8:08
set 0 src ip 1.1.1.20/24
set 0 dst ip 1.1.1.10/24
set 0 process
set 0 size 68
set 0 rate 100
set 0 rxburst 64
set 0 txburst 64
set 0 sport 1234
set 0 dport 5678
set 0 prime 1
set 0 type ipv4
set 0 proto udp
start 0

Is there a way to debug the Tx packet generation process on its own, to know whether Tx packets are being generated at all, and if they are, whether they are being dropped somewhere in between and at what level they are being discarded?

Why is this issue not seen on the PF of the interface and seen only on the VF?

KeithWiles commented 7 months ago

Have you tried one of the DPDK example applications or the testpmd application to see if they work in this environment? The testpmd application can send traffic, but I do not remember how you use it for that.
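For reference, a tx-only check with testpmd could look roughly like the following sketch (the PCI address and lcore list mirror the Pktgen command earlier in the thread, the file prefix is arbitrary, and txonly forwarding with the port stats command are standard testpmd features):

    sudo ./build/app/dpdk-testpmd -l 2-7 -n 4 -a 00:05.0 --file-prefix tp -- --nb-cores=4 -i
    testpmd> set fwd txonly
    testpmd> start
    testpmd> show port stats 0
    testpmd> stop

If TX-packets climbs there while Pktgen's counters stay at zero, the problem would be on the Pktgen side; if testpmd's TX-errors or dropped counters climb instead, the driver or NIC path is discarding the frames.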

If testpmd works, then I have a problem in Pktgen, but I can't debug your environment in my test setup if that is the case. I have used VFs in a non-VM environment before, so this is very difficult to debug.

Venkateshv377 commented 7 months ago

I have tried launching the dpdk-testpmd application using the following command:

    sudo ./build/app/dpdk-testpmd -l 2-7 -n 4 -a 00:05.0 --file-prefix testpmd18000 --proc-type=auto -- --nb-cores=4 --rxq=4 --txq=4 -i

testpmd> set eth-peer 0 02:09:c0:f6:8e:c3
testpmd> set promisc all on
testpmd> set fwd io
testpmd> start tx_first 100
testpmd> stop

testpmd> show fwd stats all

------- Forward Stats for RX Port= 0/Queue= 0 -> TX Port= 0/Queue= 0 -------
  RX-packets: 0    TX-packets: 447    TX-dropped: 2753

------- Forward Stats for RX Port= 0/Queue= 1 -> TX Port= 0/Queue= 1 -------
  RX-packets: 0    TX-packets: 447    TX-dropped: 2753

------- Forward Stats for RX Port= 0/Queue= 2 -> TX Port= 0/Queue= 2 -------
  RX-packets: 0    TX-packets: 447    TX-dropped: 2753

------- Forward Stats for RX Port= 0/Queue= 3 -> TX Port= 0/Queue= 3 -------
  RX-packets: 0    TX-packets: 447    TX-dropped: 2753

---------------------- Forward statistics for port 0 ----------------------
  RX-packets: 0    RX-dropped: 0        RX-total: 0
  TX-packets: 0    TX-dropped: 11012    TX-total: 11012

+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0    RX-dropped: 0        RX-total: 0
  TX-packets: 0    TX-dropped: 11012    TX-total: 11012
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Even dpdk-testpmd shows packets being dropped, irrespective of whether the forwarding mode is io, mac, or txonly.
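(A note for readers: the per-port extended counters sometimes name the reason for such drops. In the same interactive session they can be dumped as below, assuming port 0; any error or discard xstat that climbs along with the drops is the one to chase.)

    testpmd> show port xstats 0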

Venkateshv377 commented 6 months ago

There was a problem with the ixgbe PF driver, which had the Malicious Driver Detection (MDD) feature enabled. Because of that, all the packets generated on the VFs were treated as malicious packets, hence the TX packet drops.
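For anyone else landing here, a way to confirm this on the host side is sketched below. The MDD module parameter only exists in some ixgbe driver builds (notably Intel's out-of-tree driver), so treat the reload step as an assumption to be verified with modinfo first, and shut the VM down before reloading the PF driver:

    # On the host: does the loaded ixgbe PF driver expose an MDD knob?
    modinfo ixgbe | grep -i mdd

    # Watch for malicious-driver-detection events while the VF transmits
    # (the exact message text varies with the driver version)
    dmesg | grep -i malicious

    # Only if the parameter exists: reload the PF driver with MDD disabled
    sudo rmmod ixgbe && sudo modprobe ixgbe MDD=0,0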