Open · hsanjuan opened this issue 4 years ago
- [ ] TODO: figure out how the API changes: should the user submit pins into a batch one by one, or send many pins at once and build the batch from them?
I think this boils down to the error handling in case the batch is larger than the allowed batch size.
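To make the two options concrete (none of these names exist in ipfs-cluster, they are only placeholders), a rough sketch of the two shapes, and of where an oversized batch would surface as an error:

```go
// Illustrative only: two possible shapes for the batching API discussed in
// the TODO above. All names are placeholders, meant to show where a batch
// larger than the allowed size would be rejected.
package batchapi

import (
	"context"
	"errors"
)

// ErrBatchFull could be returned when adding to a batch would exceed the
// allowed batch size, letting the caller commit early or abort.
var ErrBatchFull = errors.New("batch exceeds the allowed batch size")

// Pin is a stand-in for the cluster pin type.
type Pin struct {
	Cid string
}

// Option A: pins are submitted one by one into an open batch.
type Batch interface {
	Add(ctx context.Context, p Pin) error // may return ErrBatchFull
	Commit(ctx context.Context) error
	Abort(ctx context.Context) error
}

// Option B: many pins are sent at once and the batch is built from them.
// The whole call can fail with ErrBatchFull, or the implementation could
// split the input into several batches behind the scenes.
type BulkPinner interface {
	PinMany(ctx context.Context, pins []Pin) error
}
```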
An application that can produce its output lazily, like a Python generator, is usually much more lightweight and can start processing and pushing changes while it is still reading its input.

Whether everything has to be aborted when the pushed changes cannot be committed as a whole may only be an issue for certain types of input.

This would allow an early commit while processing: open a new changeset with the critical changes that need to be committed together, and keep writing non-critical changes until the batch is full or another critical change comes along.

It also makes it possible to implement QoS at the application level: commit as soon as a time-critical write comes along, while background writes accumulate earlier in the batch.
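A rough sketch of that idea, with entirely made-up types (`Change`, `Batcher`): non-critical changes fill the batch, and a critical or time-critical change forces a commit boundary.

```go
// Illustrative application-level loop: non-critical changes accumulate in a
// batch; a critical change closes the current changeset and is committed
// together with whatever must go with it. All types here are hypothetical.
package batchqos

import "context"

type Change struct {
	Cid      string
	Critical bool // time-critical writes force an early commit
}

type Batcher interface {
	Add(ctx context.Context, c Change) error
	Commit(ctx context.Context) error
	Full() bool
}

func process(ctx context.Context, in <-chan Change, b Batcher) error {
	for c := range in {
		if c.Critical {
			// Flush pending background changes so the critical change
			// starts a fresh changeset.
			if err := b.Commit(ctx); err != nil {
				return err
			}
		}
		if err := b.Add(ctx, c); err != nil {
			return err
		}
		// Commit immediately for critical changes, or when the batch is full.
		if c.Critical || b.Full() {
			if err := b.Commit(ctx); err != nil {
				return err
			}
		}
	}
	return b.Commit(ctx) // flush anything left at the end of the input
}
```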
On a related note: #1346.
It would be useful for cluster users and for scalability to be able to open batches/transactions, add (or remove) a number of pins, and then close the batch, at which point the update is sent to the network.
Batches are just a way to group several updates together before sending them out to other peers (different from local-datastore-batching-transactions).
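In other words (types made up purely for illustration), a batch here is just one payload carrying several pin/unpin operations that other peers receive as a single update:

```go
// Illustrative only: a batch as a single grouped update sent to other peers,
// as opposed to batching writes in the local datastore.
package batchwire

// PinOp is a single pin or unpin inside a batch.
type PinOp struct {
	Cid   string
	Unpin bool
}

// BatchUpdate groups several operations so peers receive one update instead
// of one broadcast per pin.
type BatchUpdate struct {
	Ops []PinOp
}
```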
Once #1008 is done, this could be approached, but it will need a bunch of things:
- `LogPins` vs `BatchPin` + `BatchCommit`
- ...
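For illustration of what that first item might involve (assuming `LogPin`/`LogUnpin` are the existing per-pin consensus operations; everything else below is hypothetical):

```go
// Sketch of where batching could hook in. LogPin/LogUnpin stand for the
// existing per-pin consensus operations; LogPins, BatchPin and BatchCommit
// are hypothetical additions named after the list above. Pin is a stand-in
// for the cluster pin type.
package batchconsensus

import "context"

type Pin struct {
	Cid string
}

type Consensus interface {
	// Existing-style single-pin operations.
	LogPin(ctx context.Context, p *Pin) error
	LogUnpin(ctx context.Context, p *Pin) error

	// LogPins would commit several pins as one consensus/network update.
	LogPins(ctx context.Context, pins []*Pin) error
}

// At the API layer, BatchPin would stage a pin into the open batch and
// BatchCommit would flush everything staged via a single LogPins call.
type BatchAPI interface {
	BatchPin(ctx context.Context, p *Pin) error
	BatchCommit(ctx context.Context) error
}
```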