pktgen / Pktgen-DPDK

DPDK based packet generator

How to accurately measure latency of DPDK using Pktgen and testpmd #153

Closed vorbrodt closed 1 year ago

vorbrodt commented 1 year ago

I'm trying to do a benchmark of latency using DPDK testpmd and pktgen. I noticed there is a feature in Pktgen called page latency for showing the latency metrics. However, I read in a StackOverflow post that this might not always give the best results. Therefore, I'm wondering how I can measure the latency (RTT) for a setup like the one below.

Here is the setup I'm using. The packet flow is as follows: Pktgen is generating and transmitting packets to a virtual function (using SR-IOV), which transmits the packet to the physical function on the NIC and then over to another virtual function which is connected to TestPMD. Then the packet travels the same path in reverse back to Pktgen where it is recorded.

[Image: diagram of the setup - Pktgen → VF → PF → VF → testpmd and back]

What I'm now wondering is how I can get the latency (RTT) for this setup. Is it possible to use page latency, or should I be doing something else? So what I'm basically asking is:

Question 1 - A good enough approach to measure latency reasonably accurately.
Question 2 - The best way to get the most accurate results.

KeithWiles commented 1 year ago

1 - Pktgen uses a CPU instruction (rdtsc) to read the number of CPU cycles, so you can get a reasonable latency figure, but remember this is software doing the work. If you want the best latency values, use a hardware-based traffic generator.
2 - The setup above is the correct method.

Make sure you use the enable <portlist> latency command to enable measuring latency on the port(s) sending and receiving the packets. Also make sure to reduce set <portlist> rate <percent> to a low value so packets are not dropped; on my 40G NIC I use set 0 rate 0.01.

The page latency command is used to see the latency values.
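For reference, a minimal session along those lines might look like the sketch below, assuming port 0, the 0.01% example rate, and the usual start/stop <portlist> run commands:

Pktgen:/> set 0 rate 0.01
Pktgen:/> enable 0 latency
Pktgen:/> start 0
Pktgen:/> page latency
Pktgen:/> stop 0

Here set 0 rate 0.01 keeps the rate low enough that packets are not dropped, enable 0 latency turns on the latency measurement, and page latency switches the display to the latency page while port 0 is running.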

Please use the updated code for latency in branch latency-changes https://github.com/pktgen/Pktgen-DPDK/tree/latency-changes

vorbrodt commented 1 year ago

Thank you for the feedback! Before I close this issue - what are the main differences in the latency-changes branch?

KeithWiles commented 1 year ago

The changes allow latency packets to be sent in single, pcap, range, sequence, ... modes. These changes are to allow someone to use pcap mode and be able to send latency packets at the same time. At some point they may be merged into main.

vorbrodt commented 1 year ago

Okay, seems interesting. I was thinking of using a pcap file to record the network traffic, so I guess it is worth checking out then.

vorbrodt commented 1 year ago

I realized I am not fully clear on how to actually record the packets into a pcap file. I would like to record things such as latency, throughput, and packets per second. After having started Pktgen and testpmd, how do you record the received packets in Pktgen?

I haven't found any way of doing this so far. What I'm currently looking at is whether set <portlist> dump <value> can be used for this in Pktgen; I see that the packets are recorded in page log. However, the documentation says Dump the next 1-32 received packets to the screen, so does that mean it can only dump up to 32 packets?

Is there a way of recording the received packets in Pktgen to a file, so for example latency can be calculated?

KeithWiles commented 1 year ago

You can use enable|disable <portlist> capture - Enable/disable packet capturing on a portlist. Disable capture on a port to save the captured data into the current working directory. Pktgen can only capture a small number of packets in its buffer.
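As a rough sketch of that workflow on port 0 (enable/disable capture are the commands above; start/stop are the usual Pktgen run commands):

Pktgen:/> enable 0 capture
Pktgen:/> start 0
Pktgen:/> stop 0
Pktgen:/> disable 0 capture

Disabling capture is the step that writes the buffered packets out to a file in the current working directory; since the capture buffer is small, keep the rate and duration low.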

Aniurm commented 7 months ago

I have a question: Does burst or batch processing affect latency?

For example, if the burst size is large in testpmd, when testpmd receives packets, it does not forward them back to pktgen immediately. Instead, testpmd waits until the number of received packets equals the burst size. This introduces unnecessary latency due to the burst.

To avoid this latency when measuring, I think we can run the following commands:

testpmd> set burst 1
Pktgen:/> set 0 burst 1

Now, the packets will be processed individually.

What do you think?

KeithWiles commented 7 months ago

Yes, you are correct: the burst can add latency due to the time spent processing the burst of packets. Setting the burst to 1 is the correct way. In Pktgen I suggest setting the burst to 1 as well for latency.

Aniurm commented 7 months ago

If the burst (batch) size set for testpmd is too small (e.g., 1), testpmd will drop all received packets. The same is true for pktgen; if the burst size is too small, pktgen will not receive packets echoed back by testpmd.

Now I have set the burst size for both to 4 and only send 30 packets per second, ensuring that neither will drop packets. The latency measured is indeed very different from before: previously I got 17 us, and now it is only 8.5 us.

KeithWiles commented 7 months ago

The burst size for Pktgen is for sending packets and does not affect the receive side. This means the burst will not affect packets being received by Pktgen. For testpmd I do not know the code and can't comment on it.

Some NIC drivers in DPDK can have a threshold on receiving packets. The vector-based (SIMD) drivers process 4 or 8 packets at a time to maintain high performance, and it can appear as if a few packets are not received when they fall below that threshold.

Aniurm commented 7 months ago

[Image: screenshot of the Pktgen/testpmd test results]

Three things were discovered:

This is strange because Pktgen's Tx burst size affects the receive side.


My config

sudo ./pktgen -l 0,2 -n 4 --proc-type auto -a 03:00.0 -- -P -m "[2].0" -T 

I use a pretty low send rate to send 4~5 packets per second.

03:00.0 Ethernet controller: Mellanox Technologies MT416842 BlueField integrated ConnectX-5 network controller

sudo ./dpdk-testpmd -l 1,3,5 -n 4 --file-prefix pg-receive --proc-type auto -a 82:00.0 -- -i --portlist=0 --nb-cores=2 --port-topology=loop

testpmd is set in macswap mode so it can echo packets back to Pktgen

82:00.0 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]

DPDK version: 23.11.0
Pktgen version: 23.10.2

$ sudo ./dpdk-devbind.py -s
0000:03:00.0 'MT416842 BlueField integrated ConnectX-5 network controller a2d2' if=enp3s0f0np0 drv=mlx5_core unused=vfio-pci *Active*
0000:82:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp130s0np0 drv=mlx5_core unused=vfio-pci *Active*

KeithWiles commented 7 months ago

The Rx/Tx burst sizes set the maximum number of packets received or sent in one call. When Pktgen asks the DPDK driver to receive N packets, that is the maximum number of packets the driver may return for one RX request; the driver should return anywhere from zero up to the RX burst size.

If Pktgen asks for 1 packet, then it is the DPDK driver's responsibility to return one packet if it is present on the NIC descriptor ring. As I stated before, sometimes the DPDK drivers (not Pktgen) will not return a packet if it falls below some driver threshold.

On the transmit side, the burst size is used to give the DPDK driver a number of packets to send. The driver must always send the packets unless the TX descriptor ring is full. When the TX descriptor ring is full, the driver returns the number of packets it did send, and it then becomes Pktgen's responsibility to retry the unsent packets until they are sent.
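As a rough sketch in C of the general DPDK rx/tx burst pattern described above (illustrative only, not Pktgen's actual forwarding code):

/* Illustrative sketch of the general DPDK rx/tx burst pattern; not Pktgen's code. */
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void
forward_one_burst(uint16_t port, uint16_t queue)
{
    struct rte_mbuf *pkts[BURST_SIZE];

    /* The driver may return anywhere from 0 up to BURST_SIZE packets. */
    uint16_t nb_rx = rte_eth_rx_burst(port, queue, pkts, BURST_SIZE);
    if (nb_rx == 0)
        return;

    /* The driver may accept fewer than nb_rx packets when the TX descriptor
     * ring is full; the application must keep retrying the unsent ones. */
    uint16_t nb_tx = 0;
    while (nb_tx < nb_rx)
        nb_tx += rte_eth_tx_burst(port, queue, pkts + nb_tx, nb_rx - nb_tx);
}

The retry loop is the key point: if the application frees the packets the driver did not accept instead of retrying them, they are silently lost.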

I seem to remember at one time in Pktgen I was freeing the unsent packets and not retrying, but I thought this was fixed a long time ago, maybe it has reappeared.

As the packet size increases, it becomes possible for the core to overrun the TX descriptor ring, since larger packets take longer on the wire and the packet rate needed to fill the wire drops.

Does this appear to be the problem here?

Aniurm commented 7 months ago

Thank you for your help; the problem is now solved.

My Solution

I reviewed the documentation for rte_eth_rx_burst, which is the function called to receive packets.

https://doc.dpdk.org/api/rte__ethdev_8h.html#a3e7d76a451b46348686ea97d6367f102

Some drivers using vector instructions require that nb_pkts be divisible by 4 or 8, depending on the driver implementation.

Thus, the issue was related to "vector instructions."

I then consulted the DPDK documentation for my NIC driver:

https://doc.dpdk.org/guides/nics/mlx5.html#rx-burst-functions

[Image: mlx5 Rx burst functions table from the DPDK documentation]

To address this, I added a device argument to my EAL parameters to disable vectorized Rx:

sudo ./pktgen -l 0,2 -n 4 --proc-type auto -a 03:00.0,class=eth,rx_vec_en=0 -- -P -m "[2].0" -T -j

This adjustment allowed me to receive packets when setting the burst size to 1.

Summary

To obtain accurate latency measurements, it is recommended to set the burst size to 1.

If you are unable to receive any packets after setting the burst size to 1, try disabling vector instructions in your NIC driver.
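Putting the commands from this thread together (the PCI address and devargs below are just the ones from my setup):

testpmd> set burst 1
Pktgen:/> set 0 burst 1
sudo ./pktgen -l 0,2 -n 4 --proc-type auto -a 03:00.0,class=eth,rx_vec_en=0 -- -P -m "[2].0" -T -j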

KeithWiles commented 6 months ago

I am glad you found some documentation related to the comment I made a few days ago on vector based (SIMD) drivers and a possible solution https://github.com/pktgen/Pktgen-DPDK/issues/153#issuecomment-2056899798, which should help a lot of other folks with this type of issue.

Thanks