ntop / PF_RING

High-speed packet processing framework
http://www.ntop.org
GNU Lesser General Public License v2.1

Performance Issue with ZC ICE Interface and RSS Configuration #968

Open jolysoul127 opened 5 days ago

jolysoul127 commented 5 days ago

Dear all (@cardigliano), according to https://www.ntop.org/pf_ring/introducing-pf_ring-zc-support-for-intel-e810-based-100g-adapters/, PF_RING can transmit almost 90 Mpps using a single core. In practice, the 100 Gbps Intel E810-C ZC ICE interface does not reach that performance on my system, even with RSS properly configured. Adding RSS instances significantly reduces the packet sending rate when using pfsend. For example:
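(Illustrative invocations of the kind used for this test, not the exact commands or rates from my run; the zc:<device>@<queue> notation opens one ZC TX queue per pfsend instance and -g pins each instance to a core. Interface name and core ids below are placeholders.)

# single queue: transmit 60-byte packets at full speed from core 1
pfsend -i zc:enp1s0f0np0@0 -g 1 -l 60 -n 0

# two RSS queues: one pfsend per queue, each pinned to its own core
pfsend -i zc:enp1s0f0np0@0 -g 1 -l 60 -n 0 &
pfsend -i zc:enp1s0f0np0@1 -g 2 -l 60 -n 0 &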

My system:
OS: Ubuntu 22.04
CPU: AMD 7950X
RAM: 64 GB
SSD: Samsung 990 NVMe
NIC: Intel E810-C 100 Gbps
uname -a
Linux test-System-Product-Name 6.8.0-48-generic #48~22.04.1-Ubuntu
cat /proc/net/pf_ring/dev/enp1s0f0np0/info 
Name:         enp1s0f0np0
Index:        22
Polling Mode: NAPI/ZC
Promisc:      Disabled
Type:         Ethernet
Family:       Intel ice
TX Queues:    2
RX Queues:    2
Num RX Slots: 4096
Num TX Slots: 4096
RX Slot Size: 1536
TX Slot Size: 1536
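(For reference, this is how the 2 RX/TX queues above can be inspected and changed; a sketch, queue counts are examples. With the in-kernel driver ethtool adjusts the combined channels, while ntop's ZC drivers typically take an RSS module parameter at load time, one value per port; please check the PF_RING documentation for the exact parameter of your driver build.)

# inspect current vs. maximum combined channels (RSS queues)
ethtool -l enp1s0f0np0

# in-kernel driver: set 2 combined channels
ethtool -L enp1s0f0np0 combined 2

# ZC driver: pass the RSS queue count at load time (one value per port)
insmod ice.ko RSS=2,2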
dmesg | grep -i pcie
[  146.780823] ice 0000:01:00.0: 126.016 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x16 link at 0000:00:01.1 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
[  147.021547] ice 0000:01:00.1: 126.016 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x16 link at 0000:00:01.1 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
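(The kernel log above already points at the bottleneck: the x16 link negotiated 8.0 GT/s, i.e. PCIe Gen3, instead of the 16.0 GT/s Gen4 the card is capable of. As a sketch, using the PCI address from the log, the negotiated link can be cross-checked with lspci:)

# LnkCap = what the device supports, LnkSta = what was actually negotiated
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'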

(lstopo topology output attached as an image)

cardigliano commented 4 days ago

@jolysoul127 the fact that you get about the same total throughput with 1 and 2 RSS queues usually indicates a bandwidth bottleneck, which can be in memory (please check the channel configuration) or on the PCIe bus. Please also note that in our tests we usually use Intel CPUs, as performance on AMD CPUs is unpredictable and depends heavily on the configuration (those CPUs are NUMA systems, not fully symmetric).
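(A minimal sketch of how to check both suggestions with standard Linux tools; exact dmidecode field names vary by version:)

# populated DIMMs and their speed: fewer populated channels than the
# CPU supports means reduced memory bandwidth
sudo dmidecode -t 17 | grep -E 'Locator|Size|Speed'

# NUMA layout: keep the pfsend cores on the same node as the NIC
numactl --hardware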

jolysoul127 commented 4 days ago

Dear @cardigliano, you are totally right about the bottleneck.

Thank you