nemanja-boric-sociomantic closed this 6 years ago
Adapted to @david-eckardt-sociomantic's proposal with fields.
Updated with a private init, and with disallowing the client from stopping without going through the back buffer.
A higher level idea: would it be possible to completely encapsulate the buffer handling, so that the reader and record stream fibers don't need to know anything about it? They would just push and pop records to/from the buffer.
Note that the Reader already doesn't know anything about the buffer. I'll see if I can encapsulate it so that the RecordStream doesn't know anything about it either.
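The kind of encapsulation being discussed, where the reader and record stream fibers only push and pop records and the buffer swap happens internally, could be sketched roughly like this. This is an illustrative Python sketch, not the project's actual code (which is D); the class and method names are hypothetical.

```python
from collections import deque

class DoubleBuffer:
    """Hypothetical double buffer: the reader fiber pushes received
    records into the back buffer, the record-stream fiber pops from
    the front buffer, and the swap happens internally when the front
    buffer runs dry -- neither fiber sees two buffers."""

    def __init__(self):
        self.front = deque()  # records currently being consumed
        self.back = deque()   # records currently being received

    def push(self, record):
        # Called by the reader fiber on each received record.
        self.back.append(record)

    def pop(self):
        # Called by the record-stream fiber; swaps buffers
        # transparently when the front buffer is exhausted.
        if not self.front:
            self.front, self.back = self.back, self.front
        return self.front.popleft() if self.front else None
```

With this shape, neither fiber needs to know that two buffers exist at all; the swap is an internal detail of `pop`.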
Updated with encapsulation of the empty() method. I've tried making it higher-level, but I'm not happy with the end result. The problem is that once the RecordStream doesn't know anything about buffers, it loses more than just the swap point: it also no longer knows when it should send the Continue signal to the node. That requires delegating the sending of the Continue signal from the RecordStream to the buffer class, but then this Buffer no longer just maintains two buffers and swaps them; it also has to know that the data needs to be requested and doesn't arrive automatically.
I think the complexity is quite low right now, so I wouldn't go further than this, but I'm open to comments.
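The objection above can be made concrete with a sketch: if the Continue signal is delegated to the buffer, the buffer must carry request logic, not just storage. This is a hedged Python illustration (the real code is D, and `SwappingBuffer`/`send_continue` are invented names for the pattern being described).

```python
class SwappingBuffer:
    """Hypothetical buffer that, besides swapping its two internal
    buffers, also has to know that data must be requested: on swap it
    invokes a delegate that sends the Continue signal to the node."""

    def __init__(self, send_continue):
        self.front = []
        self.back = []
        # Delegate handed over from the record stream -- this is the
        # coupling the comment above objects to: the buffer now owns
        # request logic instead of being pure storage.
        self.send_continue = send_continue

    def swap(self):
        self.front, self.back = self.back, []
        self.send_continue()  # ask the node for the next batch
```

The design tension is visible here: `SwappingBuffer` is no longer a passive container, because swapping implies a network-facing action.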
Fair enough. I think it probably could be done in a nice way, but it's not incredibly important right now.
Updated to address the comments from @gavin-norman-sociomantic.
The previous approach, where the client would ask the node for a batch, was prone to latency problems: if the time to prepare and send a batch is non-negligible, the performance of the client application suffers. This patch improves the situation and decreases the latency in the following way:
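The general pattern this describes, requesting the next batch while the current one is still being processed so that network latency overlaps with processing, can be sketched as follows. This is an illustrative Python sketch under assumed names (the real client is D and fiber-based; a worker thread stands in for the reader fiber here).

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_batch(n):
    """Stand-in for the node preparing and sending batch n."""
    return [f"record-{n}-{i}" for i in range(3)]

def consume(num_batches):
    """Process batches while the next one is fetched in the
    background, instead of idling for the full round trip."""
    out = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(fetch_batch, 0)   # prefetch first batch
        for n in range(num_batches):
            current = pending.result()          # waits only if fetch lags
            if n + 1 < num_batches:
                # Request the next batch *before* processing this one,
                # so the node's prepare-and-send time overlaps with
                # record processing on the client.
                pending = pool.submit(fetch_batch, n + 1)
            out.extend(current)                 # process the front buffer
    return out
```

In the old scheme the client would only issue the next request after fully consuming a batch, paying the full fetch latency between batches; here that latency is hidden behind processing.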