lukepalmer opened this issue 4 years ago
This is partly a note to self and partly a reference, should anyone else be interested.

Operations per second for an Orewa client is somewhat low compared to what Redis can do. Client performance appears to be dominated by caml_modify calls, which I believe is related to the parser making many trips into the Async scheduler as it reads each bit of a protocol response from a pipe.

A way to improve this would be to use the lower-level API for TCP connections that exposes a Bigstring.t via an Async-driven callback. This should allow a response to be parsed within a single Async cycle instead of many.

I may look at this someday if I need it.
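For context, a minimal sketch of what that lower-level read path could look like, assuming Async's Reader.read_one_chunk_at_a_time, which hands the buffered input to a callback as a Bigstring.t. The parse_resp function and its Complete/Incomplete result are hypothetical placeholders for an incremental RESP decoder, not anything in Orewa today:

```ocaml
(* Sketch only: [parse_resp], [Complete] and [Incomplete] are hypothetical
   stand-ins for an incremental RESP decoder; they are not Orewa's API. *)
open Core
open Async

type parse_result =
  | Complete of string * int  (* one full reply and how many bytes it used *)
  | Incomplete                (* need more input before a reply is available *)

(* Placeholder decoder: a real one would walk the RESP frame in [buf]. *)
let parse_resp (_buf : Bigstring.t) ~pos:_ ~len:_ : parse_result = Incomplete

(* Drain as many complete replies as the buffered input allows in one pass,
   so parsing does not re-enter the Async scheduler for every few bytes. *)
let read_replies reader ~on_reply =
  Reader.read_one_chunk_at_a_time reader ~handle_chunk:(fun buf ~pos ~len ->
    let rec loop pos len consumed =
      match parse_resp buf ~pos ~len with
      | Complete (reply, used) ->
        on_reply reply;
        loop (pos + used) (len - used) (consumed + used)
      | Incomplete ->
        (* Report what was consumed; leftover bytes should stay buffered in
           the reader and reappear with the next chunk. *)
        return (`Consumed (consumed, `Need_unknown))
    in
    loop pos len 0)
```

The callback sees everything currently buffered, so several pipelined replies can be decoded per Async cycle, and a partial reply should simply remain in the reader's buffer rather than needing hand-rolled carry-over state.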
> Client performance appears to be dominated by caml_modify calls, which I believe is related to the parser making many trips into the Async scheduler as it reads each bit of a protocol response from a pipe.

You might want to check out this commit from the time I tried to read the input in bigger chunks using the Async-provided Iobuf (which should theoretically read as much as is in the input buffer). The downside there was that the length of a response is not known up front in the RESP protocol, so I had to keep a lot of state in case a response was longer than one Iobuf or an Iobuf contained more than one response.

If you feel like reviving this approach in a hopefully less roundabout way, that would be fantastic. Actually, having a performance benchmark might also be quite neat, to see where the time is going.
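And a rough sketch of the kind of benchmark mentioned above: issue a fixed number of requests and report operations per second. The ping argument is a placeholder for any cheap client call (e.g. a PING or ECHO against a local Redis), not Orewa's actual API:

```ocaml
(* Rough ops/sec harness; [ping] is a placeholder for any cheap client call. *)
open Core
open Async

let bench ?(ops = 100_000) ~(ping : unit -> unit Deferred.t) () =
  let start = Time_ns.now () in
  Deferred.repeat_until_finished 0 (fun i ->
    if i >= ops
    then return (`Finished ())
    else ping () >>| fun () -> `Repeat (i + 1))
  >>| fun () ->
  let seconds = Time_ns.Span.to_sec (Time_ns.diff (Time_ns.now ()) start) in
  printf "%d ops in %.2fs (%.0f ops/s)\n" ops seconds (Float.of_int ops /. seconds)
```

Since each request is awaited before the next is sent, this measures round-trip-bound throughput; running it against both the current pipe-based reader and a chunked reader would show whether the scheduler trips are really the bottleneck.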