What steps will reproduce the problem?
1. Configure a single queue, preferably with a large ring size:
ethtool -L eth? combined 1; ethtool -G eth? rx 4096
2. Run pkt-gen on both ends, with a high send rate at the sender.
At the receiver: ./pkt-gen -i eth?
At the sender: ./pkt-gen -f tx -i eth? -p 1 -c 1 -w 5 -n 500000000 -S ? -D ? -d 10.0.4.0:1024-10.0.11.255:1024 -s 10.0.4.4 -b 2048 -R 14000000
I have attached a version of pkt-gen.c in which the sender puts a sequence
number in the ip_id field of the IP header and the receiver expects the
packets to arrive in order, printing a message whenever a loss occurs.
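For reference (the attachment itself is not reproduced here), the idea is roughly the sketch below. It assumes a plain Ethernet + IPv4 frame with no VLAN tag, and the helper names stamp_seq/check_seq are only illustrative, not the actual code in the attached file:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>

#define ETH_HDR_LEN 14
#define IP_ID_OFF   (ETH_HDR_LEN + 4)   /* ip_id is bytes 4-5 of the IPv4 header */

/* Sender side: stamp the current sequence number into ip_id. */
static void
stamp_seq(unsigned char *frame, uint16_t seq)
{
	uint16_t id = htons(seq);

	memcpy(frame + IP_ID_OFF, &id, sizeof(id));
	/* The IP checksum must be recomputed, or simply ignored by the receiver. */
}

/* Receiver side: report how many packets are missing before this one. */
static uint16_t
check_seq(const unsigned char *frame, uint16_t *expected)
{
	uint16_t id, gap;

	memcpy(&id, frame + IP_ID_OFF, sizeof(id));
	id = ntohs(id);
	gap = (uint16_t)(id - *expected);	/* modular, so it survives wraparound at 65535 */
	if (gap != 0)
		printf("lost %u packet(s) before seq %u\n", (unsigned)gap, (unsigned)id);
	*expected = (uint16_t)(id + 1);
	return gap;
}

stamp_seq() is called from the sender's packet-construction loop and check_seq() on every slot drained from the receive ring.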
What is the expected output? What do you see instead?
I see many single-packet losses where I expect no loss at all. The losses do
not happen when the queue is full, and not even at the beginning of
processing a queue of packets, but in the middle of a batch! They still
happen if I lower the send rate to 10 Mpps.
What version of the product are you using? On what operating system?
I use the next branch of netmap, with netmap_params in netmap_mem2.c changed
to accommodate large queues of 4096 slots. The system is Ubuntu 14.04.1 with
kernel 3.13.0-45-generic, and the NICs are Intel X520-T2 10G cards.
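Concretely, the change is along these lines in netmap_mem2.c (the numbers are only illustrative and the stock defaults differ between netmap versions; the point is that the ring pool objects must be big enough for 4096-slot rings and the buffer pool must hold the extra buffers those rings consume):

struct netmap_obj_params netmap_params[NETMAP_POOLS_NR] = {
	[NETMAP_IF_POOL] = {
		.size = 1024,
		.num  = 100,
	},
	[NETMAP_RING_POOL] = {
		.size = 36*PAGE_SIZE,	/* room for 4096 slots (16 bytes each) plus the ring header */
		.num  = 200,
	},
	[NETMAP_BUF_POOL] = {
		.size = 2048,
		.num  = 4096*64,	/* enough 2 KB buffers for the larger rings */
	},
};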
Please provide any additional information below.
Looking at the distribution of consecutive losses, 99.9% of the loss trains
have a size of 1 (a single packet missing).
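By a "loss train" I mean a run of consecutive missing sequence numbers. Counting them amounts to something like the sketch below (the histogram helpers are illustrative and reuse the gap computed by the receiver check above):

#include <stdint.h>
#include <stdio.h>

#define MAX_TRAIN 64

static uint64_t train_hist[MAX_TRAIN + 1];	/* train_hist[n] = trains of n consecutive losses */

/* Called once per received packet with the gap returned by check_seq(). */
static void
record_gap(uint16_t gap)
{
	if (gap != 0)
		train_hist[gap < MAX_TRAIN ? gap : MAX_TRAIN]++;
}

static void
print_hist(void)
{
	uint64_t total = 0;

	for (int i = 1; i <= MAX_TRAIN; i++)
		total += train_hist[i];
	for (int i = 1; i <= MAX_TRAIN; i++)
		if (train_hist[i])
			printf("train size %d: %llu (%.3f%%)\n", i,
			    (unsigned long long)train_hist[i],
			    100.0 * train_hist[i] / total);
}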
Original issue reported on code.google.com by masood.m...@gmail.com on 7 Jul 2015 at 12:10