Open lyuxiaosu opened 1 month ago
I forgot to mention that the DPDK version I used is 19.11.5
Not sure if it is related to the Mellanox CX5 NIC. That thread says the Mellanox CX5 will drop packets when the number of RX queues is greater than 32, and in my case eRPC creates 64 RX queues. Setting kMaxQueuesPerPort=32 also works very well.
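For context, kMaxQueuesPerPort is a compile-time constant in eRPC, so the workaround above amounts to rebuilding with a lower cap. A minimal sketch (the header that defines the constant is not quoted in this thread, only the name appears above):

```cpp
// Sketch of the workaround discussed above: cap RX queues per port at 32
// so the CX5 NIC does not drop packets. Requires rebuilding eRPC, since
// this is a compile-time constant.
static constexpr size_t kMaxQueuesPerPort = 32;
```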
Thanks for bringing it up. Do eRPC's examples and benchmarks (e.g., hello_world, small_rpc_tput) work in your cluster?
Yes, I tested hello_world, latency, and server_rate; these work very well. I didn't try small_rpc_tput.
Hi Anuj,
I tried to create 64 sessions from the client side to the server side. num_server_threads is 1. I tuned some parameters on eRPC, but it still doesn't work. On the server side, I tuned the following parameters:
On the client side, I tuned the following parameters:
On both sides, I set the number of hugepages to 4096 with:
sudo bash -c "echo 4096 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages"
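As a sanity check, the command above reserves 4096 × 2 MB = 8 GB of hugepage memory on node0 (that node0 is the NUMA node local to the CX5 NIC is an assumption; DPDK should allocate from the NIC-local node):

```shell
# The hugepage reservation above: 4096 pages x 2048 kB per page.
# Compute the total memory set aside on node0.
pages=4096
page_kb=2048
echo "$(( pages * page_kb / 1024 )) MB reserved"   # 8192 MB = 8 GB
```

You can confirm the reservation took effect with `grep HugePages_Total /proc/meminfo`.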
When I started the test, the client printed a log showing it sent the packets correctly:
but on the server side, it seems all received packets are invalid and dropped:
I spent a lot of time trying to figure this out, but failed. Is there some parameter I changed incorrectly, or failed to change, that causes this issue? The NIC I used is a Mellanox CX5. Thanks for your help.