Some background: while switching to `write_buf` in hyper, I noticed the 16-pipelined-requests benchmark suffered because the responses were split into 32 buffers (1 headers buffer + 1 body buffer, times 16 responses), so each syscall wrote only half as many responses as when hyper flattens everything into a single buffer first.
The easiest knee-jerk reaction is to increase the number of `IoVec`s used in `TcpStream::write_buf`. I arbitrarily picked 64. The man page for `writev(2)` mentions that the default max on Linux is 1024...
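For context, here is a rough sketch of why the `IoVec` cap matters, written against std types rather than the actual tokio/bytes internals (the helper name, the segment slice, and the cap of 64 are illustrative assumptions, not hyper's code): the writer gathers up to N segments into an `IoSlice` array and hands them to one `writev`-backed call, so the cap directly bounds how many response buffers a single syscall can cover.

```rust
use std::io::{self, IoSlice, Write};

// Illustrative sketch only: gather up to MAX_IOVECS segments and issue one
// vectored write. With 16 responses split into 32 buffers, a cap of 16
// forces a second syscall, while a cap of 64 lets all 32 go out at once.
fn write_segments<W: Write>(dst: &mut W, segments: &[&[u8]]) -> io::Result<usize> {
    const MAX_IOVECS: usize = 64; // was effectively 16; writev(2)'s limit on Linux is 1024

    let iovs: Vec<IoSlice<'_>> = segments
        .iter()
        .take(MAX_IOVECS)
        .map(|&s| IoSlice::new(s))
        .collect();

    // One syscall covers at most MAX_IOVECS segments.
    dst.write_vectored(&iovs)
}
```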
A longer-term fix could be to add an associated constant to `Buf`, allowing an implementer to state how many `IoVec`s they expect to need.
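A minimal sketch of what that could look like, with a made-up trait and constant name (this is not the actual `bytes` API): the buffer advertises how many `IoVec`s it expects, and the writer sizes its array from that hint, capped at the `writev(2)` limit.

```rust
// Hypothetical sketch of the associated-constant idea; the names here are
// invented for illustration and this is not the real `Buf` trait from `bytes`.
trait IoVecHint {
    /// How many IoVec slots a writer should reserve before calling
    /// writev(2) for this buffer.
    const EXPECTED_IOVECS: usize;
}

/// 16 pipelined responses, each split into a headers buffer and a body
/// buffer, want 32 slots to go out in a single syscall.
struct PipelinedResponses;

impl IoVecHint for PipelinedResponses {
    const EXPECTED_IOVECS: usize = 32;
}

/// A writer could size its IoVec array from the hint instead of a
/// hard-coded 16 or 64, capped at Linux's default limit of 1024.
fn iovec_slots<B: IoVecHint>(_buf: &B) -> usize {
    B::EXPECTED_IOVECS.min(1024)
}
```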