pktgen / Pktgen-DPDK

DPDK based packet generator

Does pktgen-dpdk have feature to control PPS rate? #286

Open winnie022 opened 5 days ago

winnie022 commented 5 days ago

Hello,

I have two servers running pktgen-dpdk. One server sends packets and the other server receives them.

Does pktgen-dpdk have a feature to control the PPS rate? For example, I want to send 10 MPPS for 10 seconds and then 20 MPPS for the next 10 seconds. There is a rate option, but I am not sure how to achieve this.

Thanks,

KeithWiles commented 5 days ago

You can set the rate, but you can't set a duration. If you write a Lua script, you can control the duration by starting/stopping the traffic via the commands. If you do not want to write a script, it means typing the commands by hand.

winnie022 commented 5 days ago

Hello, @KeithWiles

Thank you for the response. If the rate option can achieve a controlled PPS, the duration is not a problem, but how do I guarantee a certain PPS (e.g., 10 MPPS)? Is there any reference script for controlling duration by starting/stopping the traffic?

KeithWiles commented 5 days ago

The rate is a percentage of the port's max wire rate for the given packet size, so you have to play with the rate value to get the MPPS you want. If the NIC's max rate is 100 Gbit/s with 64-byte frames, the theoretical maximum is about 148.8 MPPS, so for 10 MPPS the rate would be less than 10% if I did the math right.
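To spell out that arithmetic (a sketch only, not part of Pktgen itself; the 20 extra bytes per frame are the Ethernet preamble, SFD, and inter-frame gap on the wire):

```lua
-- Sketch: convert a target PPS into a percent-of-line-rate value.
-- Each 64-byte frame occupies 64 + 20 = 84 bytes on the wire
-- (7B preamble + 1B SFD + 12B inter-frame gap).
local function rate_percent(target_mpps, link_gbps, frame_size)
    local max_pps = (link_gbps * 1e9) / ((frame_size + 20) * 8)
    return 100.0 * (target_mpps * 1e6) / max_pps
end

-- 10 MPPS on a 100 Gbit/s link with 64-byte frames:
print(string.format("%.2f%%", rate_percent(10, 100, 64)))  -- about 6.72%
```

So for 10 MPPS at 100G/64B the rate setting works out to roughly 6.7%, consistent with the "less than 10%" estimate above.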

If you look in the scripts directory you can find a Lua script implementing RFC 2544 measurements.
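A minimal sketch of the start/stop approach, assuming the pktgen Lua bindings (pktgen.set/start/stop/delay) used by the scripts in that directory; the rate values below are illustrative, not exact:

```lua
-- Sketch: send at one rate for 10 s, then a higher rate for 10 s, then stop.
-- Assumes pktgen's embedded Lua API, as used by the shipped scripts;
-- run it via pktgen's -f option or the Lua socket interface.
local port = "0"

pktgen.set(port, "rate", 7)    -- illustrative: roughly 10 MPPS at 100G/64B
pktgen.start(port)
pktgen.delay(10 * 1000)        -- delay is in milliseconds

pktgen.set(port, "rate", 14)   -- illustrative: roughly 20 MPPS
pktgen.delay(10 * 1000)

pktgen.stop(port)
```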

winnie022 commented 5 days ago

Hello, @KeithWiles

Thank you for your prompt response. It is really helpful. I will take a look at the script.

I have another observation.

When I use two TX queues (./pktgen -l 0-33 -- -m [1:2-3].[0] --txd=1024 --rxd=1024 -T) on the TX node, it sends twice as many packets to the RX node as with one TX queue. However, on the RX node, even increasing the number of RX queues (./pktgen -l 0-33 -- -m [2-3:1].[0] --txd=1024 --rxd=1024 -T -v) does not receive all the sent packets; it receives the same number of packets as with one TX queue and one RX queue.

When I replace the RX node with dpdk-testpmd, it receives a much higher packet count, though with some drops (~7M packets dropped). I am not sure why they are not the same.

Are there any flags or options I am missing? How do I scale up the RX side?

I just generate two flows:

    range 0 src port start 0
    range 0 src port min 0
    range 0 src port max 1
    range 0 src port inc 1
    range 0 dst port start 0
    range 0 dst port min 0
    range 0 dst port max 1
    range 0 dst port inc 1

Do you have any ideas?

KeithWiles commented 5 days ago

Please tell me the DPDK and Pktgen versions you are using. I only test the latest DPDK and Pktgen versions, from DPDK.org and from github.com/pktgen/Pktgen-DPDK. Also, what NIC is being used?

When sending on two threads, the packets are pulled from the same packet pool, so twice the count is normal for two TX queues (three times for three queues). I would not mess with the txd/rxd values; the NIC driver normally has good defaults already. Also, -l 0-33 reserves 34 cores, but your command line only needs 4 cores (0,1,2,3).

To get the RX side to work with more than one queue, the NIC uses RSS to distribute the packets across all queues. This means the 5-tuples of the received packets need to differ so the NIC hardware distributes them across all of the RX queues.
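For example, building on the range commands shown earlier (values here are illustrative), widening the port range gives RSS many more distinct 5-tuples to hash on, which helps spread packets evenly across the RX queues:

    range 0 src port start 0
    range 0 src port min 0
    range 0 src port max 1023
    range 0 src port inc 1
    range 0 dst port start 0
    range 0 dst port min 0
    range 0 dst port max 1023
    range 0 dst port inc 1

Note that whether RSS actually hashes on the L4 ports depends on the NIC and driver; some only hash on the IP addresses, in which case the source/destination IP ranges need to vary instead.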

winnie022 commented 5 days ago

@KeithWiles I am sorry. Here are the details:

Environment:

Pktgen version: 23.10.0
DPDK version: 23.11.0
OS distribution: Ubuntu 24.04.1 LTS
Arch: x86-64
Kernel version: 6.8.0-1014-gcp
NIC: gve

Right. I can change -l 0-4.

On the RX node, the packets are evenly distributed, which I can verify with page stats, but half of the packets are dropped. On the TX node:

 Rate/s        ipackets        opackets         ibytes MB         obytes MB          errors
  Q  0:               0        36169728                 0              2170               0
  Q  1:               0        36293888                 0              2177               0

On the RX node:

 Rate/s        ipackets        opackets         ibytes MB         obytes MB          errors
  Q  0:        15866816               0               952                 0               0
  Q  1:        15445632               0               926                 0               0

If I use dpdk-testpmd instead of pktgen-dpdk on the RX side, it shows:

Port statistics ====================================
  ######################## NIC statistics for port 0  ########################
  RX-packets: 994240252  RX-missed: 0          RX-bytes:  59654415100
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:     66308573          Rx-bps:  31828115080
  Tx-pps:            0          Tx-bps:            0

I guess I missed some configuration or setup on the RX node.

KeithWiles commented 5 days ago

Can you please update to the latest Pktgen (24.07.1) and the latest DPDK? I made some performance updates in the latest version.

I would have used `-l 0-3` instead.

winnie022 commented 5 days ago

Sure, I will try it now. The most recent tag is v24.07.
Is that the version you suggested?

winnie022 commented 4 days ago

@KeithWiles

I updated Pktgen to 24.07.1 and DPDK to 24.07.0. It worked well and seems to have improved performance.

However, the RX side still does not catch up with the TX PPS. On the TX node:

 Rate/s     ipackets     opackets    ibytes MB    obytes MB       errors       bursts
  Q  0:            0     47864928            0         2871            0      1492451
  Q  1:            0     48079008            0         2884            0      1500634

On the RX node:

 Rate/s     ipackets     opackets    ibytes MB    obytes MB       errors       bursts
  Q  0:     31701816            0         1902            0            0            0
  Q  1:     31646984            0         1898            0            0            0

Do I need to use a cfg file to initialize something? I just started pktgen-dpdk with ./pktgen -l 0-33 -- -m [1:2-3].[0] --txd=1024 --rxd=1024 -T

KeithWiles commented 4 days ago

I worry about setting the TX/RX descriptor sizes; did you try this test without those options?

The above command line means the RX side is using one core and TX is using two cores. A single core has a limit to the number of packets it can process for RX and TX, and the RX side has to do a bit more work than the TX side. You do seem to be distributing the RX packets across two queues, but I need to understand the full Pktgen configuration.

Change the command line to ./pktgen -l 0-2 -- -m [1:2].0 -T, then try ./pktgen -l 0-4 -- -m [1-2:3-4].0 -T

I use the ./tools/run.py with a cfg file to make it easier to use, but it is up to you.

In which mode are you sending packets: single, sequence, range, or pcap?