Closed cairijun closed 1 year ago
Thanks for the report! I agree that's unreasonably large. Using a constant `BATCH_SIZE` regardless of GRO does not make sense, and fixing that should address the bulk of the problem. We should probably not allocate 64KiB of buffer per packet either, though it's not immediately obvious how to reduce that in the common case without imposing an otherwise unnecessary ceiling on PMTUD.
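To make the sizing point concrete, here is a hypothetical sketch (not quinn's actual code) of scaling the number of batched receive slots down as the GRO segment count goes up, so the total buffer stays bounded. The function name `recv_buf_len` and the scaling rule are assumptions for illustration:

```rust
// Hypothetical sketch: with GRO, each recvmmsg slot can carry up to
// `gro_segments` coalesced packets, so fewer slots are needed per batch.
const MAX_UDP_PAYLOAD: usize = 64 * 1024;
const BATCH_SIZE: usize = 32;

fn recv_buf_len(gro_segments: usize) -> usize {
    // Use proportionally fewer slots when GRO coalesces more packets.
    let slots = (BATCH_SIZE / gro_segments).max(1);
    MAX_UDP_PAYLOAD * gro_segments * slots
}

fn main() {
    // Without GRO: 64 KiB * 1 * 32 slots = 2 MiB
    println!("{}", recv_buf_len(1));
    // With 64-segment GRO: 64 KiB * 64 * 1 slot = 4 MiB
    println!("{}", recv_buf_len(64));
}
```

Under this rule the buffer stays in the low-MiB range either way, instead of multiplying the GRO factor by a fixed batch count.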
> Memory usage (RSS) of our app surges to hundreds of MB in a few minutes after startup when building with quinn 0.10.1, compared to ~50MB with quinn 0.9.x.
Do you have overcommit disabled? `vec![0; LARGE]` will not actually put anything into RSS if overcommit is enabled. Or do you actually receive 64KB datagrams? If your MTU is 1500 or even 9000, most of the pages should never be touched and therefore never be put into RSS.
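To illustrate the overcommit point, here is a minimal sketch, assuming Linux with default overcommit settings: a large zeroed allocation reserves virtual address space, but the kernel only backs pages with physical memory (RSS) once they are written.

```rust
// A 64 MiB zeroed allocation typically maps to anonymous zero pages;
// with overcommit enabled, untouched pages do not count toward RSS.
fn main() {
    let mut buf = vec![0u8; 64 * 1024 * 1024];
    // Touch only the first page: RSS grows by roughly one page,
    // not by the full 64 MiB.
    buf[0] = 1;
    println!("len = {}", buf.len());
}
```

This is why a large `recv_buf` is mostly harmless when datagrams are small: only the pages actually written by received packets become resident.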
For visibility: a fix for this has been published in 0.10.2.
Memory usage (RSS) of our app surges to hundreds of MB in a few minutes after startup when building with quinn 0.10.1, compared to ~50MB with quinn 0.9.x. With dhat we noticed large memory allocations during the creation of `Endpoint`s.

We suspect this commit, which increased the max UDP payload size to 64KB, combined with the `recv_buf` size calculation,

https://github.com/quinn-rs/quinn/blob/0ae7c60b15637d7343410ba1e5cc3151e3814557/quinn/src/endpoint.rs#L673-L678

might be the root cause. On a GRO-enabled kernel we have `udp_state.gro_segments() == 64` and `BATCH_SIZE == 32`, resulting in a 128MB `recv_buf` for each `Endpoint`.
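The reported figure checks out arithmetically. Using the constants named in the report (64KB max payload, 64 GRO segments, batch size 32):

```rust
// The numbers from the report: max UDP payload raised to 64 KiB,
// udp_state.gro_segments() == 64 on a GRO-enabled kernel, BATCH_SIZE == 32.
const MAX_UDP_PAYLOAD: usize = 64 * 1024;
const GRO_SEGMENTS: usize = 64;
const BATCH_SIZE: usize = 32;

fn main() {
    let recv_buf_len = MAX_UDP_PAYLOAD * GRO_SEGMENTS * BATCH_SIZE;
    // 64 KiB * 64 * 32 = 128 MiB per Endpoint
    println!("{} MiB", recv_buf_len / (1024 * 1024));
}
```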