BatchAppend is a new RPC in the Streams API that was added in v21.6.0. It's a bi-directional append: the client emits chunks to the server to commit and the server sends back acknowledgements when it's done.
The real-world application I see for this is for the replicator ("shovel" as we call our internal version): it should work nicely to really push the append throughput, especially since you can commit batches of events belonging to different streams.
This ended up being much more work than I expected because of how intricate the behavior is: it supports fragmentation by calling Spear.append_batch/5 with the option done?: false and passing a previously returned :batch_id. See the tests for some examples.
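To make the fragmentation flow concrete, here's a rough sketch of what a fragmented batch append might look like. This is illustrative only: it assumes a running EventStoreDB, a started Spear.Connection named :conn, and that acks arrive as Spear.BatchAppendResult messages — check the test suite for the authoritative usage.

```elixir
# Sketch: fragmented batch append (assumes EventStoreDB is up and a
# Spear.Connection is running as :conn).

events = Enum.map(1..100, &Spear.Event.new("numbered", %{n: &1}))

# Open a batch without finalizing it (done?: false); :new asks the
# server for a fresh batch_id.
{:ok, batch_id} =
  Spear.append_batch(events, :conn, :new, "my_stream", done?: false)

# Append the remaining fragment by re-using the returned batch_id and
# close out the batch with done?: true.
more_events = Enum.map(101..200, &Spear.Event.new("numbered", %{n: &1}))

{:ok, ^batch_id} =
  Spear.append_batch(more_events, :conn, batch_id, "my_stream", done?: true)

# Acknowledgements arrive asynchronously as messages to the caller
# (struct name assumed here).
receive do
  %Spear.BatchAppendResult{batch_id: ^batch_id, result: :ok} -> :acked
end
```

Since the batch isn't committed until the final done?: true fragment, a replicator can stream fragments as they arrive and amortize the commit cost across the whole batch.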
For the life of me I can't figure out what the :deadline option actually does, even after reading through the EventStoreDB implementation of the BatchAppend handler. I may have to poke around to see if it's just my understanding or a bug.
Closes #53, closes #45