I noticed some massive performance reductions while running TCP throughput and transaction latency tests for a client.
It turns out that the server-side replies were being read (mostly OK) in 1400-byte chunks, but the client side was sending in 64-byte chunks.
I figured out that the socket buffer was set to 64 bytes rather than something sensible. This was due to client_socksize being parsed as an int, so "64 KB" turned into "64".
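For illustration, here is a minimal sketch of a unit-aware parser, assuming the option arrives as a plain string; parse_socksize is a hypothetical name, not the project's actual function. A bare atoi()/strtol() stops at the first non-digit, which is exactly how "64 KB" became 64:

#include <stdlib.h>
#include <ctype.h>

/* Parse sizes like "64", "64KB", "64 KB" into a byte count; -1 on error. */
static long
parse_socksize(const char *s)
{
    char *end;
    long v = strtol(s, &end, 10);

    if (end == s || v < 0)
        return -1;                       /* no leading digits */

    while (isspace((unsigned char)*end))
        end++;                           /* allow "64 KB" as well as "64KB" */

    switch (toupper((unsigned char)*end)) {
    case '\0': return v;                 /* plain byte count */
    case 'K':  return v * 1024;
    case 'M':  return v * 1024 * 1024;
    case 'G':  return v * 1024 * 1024 * 1024; /* beware 32-bit overflow */
    default:   return -1;                /* unknown suffix */
    }
}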
To reproduce:
* Configure client_socksize with anything besides the default.
To fix:
* Change the code to correctly parse the option!
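Once parsed correctly, the value would then feed the socket buffer setup. A sketch, assuming the hypothetical parse_socksize() helper above and an already-open socket fd:

#include <sys/socket.h>

int sz = (int) parse_socksize("64 KB");  /* 65536, not 64 */
if (sz > 0)
    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sz, sizeof(sz));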
Original issue reported on code.google.com by adrian.c...@gmail.com on 1 Oct 2009 at 3:39