kevinGC opened 2 days ago
Earlier this year I ran experiments across lots of different frame/block sizes with TPACKET_V3. The conclusion I came to was that PACKET_MMAP trades raw packet throughput for CPU efficiency. This is in line with its stated goal of being a "more efficient way to capture packets", not necessarily to tx/rx packets the way we want it to. Waiting for a block to fill up was always slower than getting packets from recvmmsg. Feel free to take a crack at it; it's totally possible I missed something. But just a warning: HOURS_WASTED_HERE=~100
IIUC that's specific to V3. I am thinking that PACKET_MMAP improvements would be for the existing V2 interface we use.
Yeah, could be. I definitely did less testing on V2 since I thought V3 would be the answer. FWIW, when I tried to increase the number of frames (in V2), I didn't see any effect.
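One possible reason bumping the frame count alone shows no effect: with PACKET_RX_RING the kernel requires tp_block_size to be a multiple of the page size, tp_frame_size to be a multiple of TPACKET_ALIGNMENT, and tp_frame_nr to equal (tp_block_size / tp_frame_size) * tp_block_nr, so the frame count has to stay consistent with the block geometry rather than being raised independently. A rough sketch of computing a consistent request (the struct and helper here are illustrative stand-ins, not gVisor code; x/sys/unix has the real unix.TpacketReq):

```go
package main

import "fmt"

// tpacketReq mirrors struct tpacket_req passed to setsockopt(PACKET_RX_RING).
// Hypothetical local copy for illustration.
type tpacketReq struct {
	blockSize uint // must be a multiple of the page size
	blockNr   uint
	frameSize uint // must be a multiple of TPACKET_ALIGNMENT (16)
	frameNr   uint // must equal (blockSize / frameSize) * blockNr
}

// makeReq derives a valid ring request for at least wantFrames frames.
// Assumed helper, not part of the fdbased endpoint.
func makeReq(pageSize, frameSize, wantFrames uint) tpacketReq {
	framesPerBlock := pageSize / frameSize // one page per block, for simplicity
	blockNr := (wantFrames + framesPerBlock - 1) / framesPerBlock
	return tpacketReq{
		blockSize: pageSize,
		blockNr:   blockNr,
		frameSize: frameSize,
		frameNr:   framesPerBlock * blockNr, // derived, not chosen freely
	}
}

func main() {
	// e.g. growing from 32 frames to 256 with 2048-byte frames on 4 KiB pages
	req := makeReq(4096, 2048, 256)
	fmt.Println(req.blockNr, req.frameNr) // prints: 128 256
}
```

The point is that tp_frame_nr falls out of the block geometry, so raising it without also raising tp_block_nr (or shrinking frames) gets rejected by the kernel.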
Description
The fdbased LinkEndpoint supports PACKET_MMAP, but is missing a few useful improvements:
- TX currently uses sendmmsg, but should use PACKET_MMAP buffers.

Is this feature related to a specific bug?
No
Do you have a specific solution in mind?
I believe we only have 32 slots (tpFrameNR) in the dispatcher. Could that be limiting the number of returned packets?
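To illustrate how a fixed slot count could cap the batch: each dispatch pass can hand back at most one packet per ring frame before returning the slots to the kernel, so 32 frames means at most 32 packets per pass regardless of how many have arrived. A toy dispatch loop over a simulated ring (the constant and helper are hypothetical stand-ins, not the actual fdbased dispatcher):

```go
package main

import "fmt"

const tpFrameNR = 32 // ring slots in the dispatcher (value from this issue)

// Frame status values, matching TP_STATUS_* in <linux/if_packet.h>.
const (
	tpStatusKernel uint32 = 0 // slot owned by the kernel
	tpStatusUser   uint32 = 1 // slot holds a packet for userspace
)

// dispatch drains ready frames from a simulated ring and returns how many
// packets were delivered in this pass. Illustrative sketch only.
func dispatch(ring []uint32) int {
	n := 0
	for i := range ring {
		if ring[i]&tpStatusUser == 0 {
			break // next slot still owned by the kernel; batch ends here
		}
		// ...process the packet in slot i...
		ring[i] = tpStatusKernel // hand the slot back to the kernel
		n++
	}
	return n
}

func main() {
	ring := make([]uint32, tpFrameNR)
	for i := range ring {
		ring[i] = tpStatusUser // pretend every slot is full
	}
	// Even if more packets are queued, one pass returns at most tpFrameNR.
	fmt.Println(dispatch(ring)) // prints: 32
}
```

If the batch sizes observed top out at 32, growing the ring (subject to the tpacket_req geometry constraints) would be the natural experiment.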