While trying out the code in `examples/static_file.rs`, I observed that
the response was being sent with `transfer-encoding: chunked` and that
the body was encoded in 31-byte chunks. A chunk is encoded as
`1F\r\n{31 data bytes}\r\n`, where `1F` is the hex representation of 31
(the data length). This encoding adds 6 bytes of overhead for each
chunk, or nearly 20% overhead relative to the data payload.
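
To make that overhead concrete, here is a small sketch (the helper name is hypothetical, not taken from the example) that frames one payload as an HTTP/1.1 chunk and checks the 6-byte framing cost:

```rust
// Frame a single HTTP/1.1 chunk: hex length line, payload, trailing CRLF.
fn frame_chunk(data: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    out.extend_from_slice(format!("{:X}\r\n", data.len()).as_bytes()); // e.g. "1F\r\n"
    out.extend_from_slice(data);
    out.extend_from_slice(b"\r\n");
    out
}

fn main() {
    let framed = frame_chunk(&[0u8; 31]);
    // 2 hex digits + CRLF + 31 payload bytes + CRLF = 37 bytes,
    // i.e. 6 bytes (~19%) of framing overhead per chunk.
    assert_eq!(framed.len(), 37);
}
```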
The previous implementation uses `BytesMut::new` to create a value with
"length 0 and unspecified capacity". The documentation for
`BytesMut::with_capacity` states that "If `capacity` is under
`4 * size_of::<usize>() - 1`, then `BytesMut` will not allocate."
It appears the existing logic was obtaining a buffer size of 31 bytes,
which is the most `BytesMut` can hold internally without allocating on
my 64-bit system. On a 32-bit system, I expect chunks would be sent as
15 bytes, with 5 bytes (33%) of overhead.
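
A quick sketch of that threshold, assuming the inline-storage behavior of the `bytes` version whose docs are quoted above:

```rust
// Inline capacity threshold from the BytesMut::with_capacity docs quoted above.
fn main() {
    let inline_cap = 4 * std::mem::size_of::<usize>() - 1;
    // usize is 8 bytes on 64-bit targets (31-byte chunks) and 4 bytes on
    // 32-bit targets (15-byte chunks, 5/15 = 33% overhead).
    println!("inline capacity: {} bytes", inline_cap);
}
```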
With the allocation added here, each chunk that is processed gets a
freshly allocated buffer. The buffer size of 8 KiB was chosen somewhat
arbitrarily, but it attempts to weigh per-request memory overhead
against the number of allocations needed to serve a file.
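
A rough sketch of the per-chunk allocation pattern described above (the loop shape, file path, and constant name are illustrative, not the exact code in the example):

```rust
use bytes::BytesMut;
use std::fs::File;
use std::io::Read;

// 8 KiB buffer per chunk, as described above; the constant name is made up here.
const CHUNK_BUF_SIZE: usize = 8 * 1024;

fn main() -> std::io::Result<()> {
    let mut file = File::open("examples/static_file.rs")?;
    let mut scratch = vec![0u8; CHUNK_BUF_SIZE];
    let mut chunks = 0usize;

    loop {
        let n = file.read(&mut scratch)?;
        if n == 0 {
            break;
        }
        // Each pass allocates a fresh BytesMut with 8 KiB of capacity and
        // copies in the bytes read, so each HTTP chunk can carry up to 8 KiB
        // of payload for roughly 8 bytes of framing (~0.1% overhead instead
        // of the ~20% seen with 31-byte chunks).
        let mut buf = BytesMut::with_capacity(CHUNK_BUF_SIZE);
        buf.extend_from_slice(&scratch[..n]);
        chunks += 1;
    }

    println!("would send {} chunk(s)", chunks);
    Ok(())
}
```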