The main point of this issue is to propose the addition of a Kafka queue (or similar technology, we still have to discuss the details) where the operations to execute in Wikibase will be pushed and then handled by the Wikibase adapter. Although this will add a new layer of complexity (bigger or smaller depending on the final solution details), we will also get several benefits (see the sketch after this list). Some of them are:
Better decoupling between the generation of operations and their consumption.
Ability to let operations run in the background once they have been created.
If there are multiple requests made in a short span of time, we would avoid potential problems where operations could be executed in an invalid order and leave the triple store in an inconsistent state.
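To make the idea concrete, here is a minimal sketch of what the producer and consumer sides could look like. It assumes Python with the kafka-python client and a broker at localhost:9092; the topic name, the operation format, and the `wikibase_adapter.execute` call are hypothetical placeholders, not the actual adapter API.

```python
import json
from kafka import KafkaProducer, KafkaConsumer  # kafka-python client

TOPIC = "wikibase-operations"  # hypothetical topic name

# Producer side: requests push operations onto the queue instead of
# executing them against Wikibase directly.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda op: json.dumps(op).encode("utf-8"),
)
producer.send(TOPIC, {"action": "create_item", "labels": {"en": "Example"}})
producer.flush()

# Consumer side (Wikibase adapter): executes operations in arrival order.
# With a single partition and one consumer group, ordering is preserved.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="wikibase-adapter",
)
for message in consumer:
    operation = message.value
    # wikibase_adapter.execute(operation)  # hypothetical adapter call
    print("executing", operation)
```

Running the producer returns as soon as the operation is queued, while the consumer processes the backlog in order in the background, which covers the decoupling and ordering points above.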
In general, I really like the idea and I think it could benefit the system considerably.
It doesn't need to be a Kafka queue or any specific technology, just a buffer that, as you perfectly explained, decouples the generator and the consumer of operations, not only in code but also in execution. 👍
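As an illustration of that point, the same decoupling can be achieved with nothing more than an in-process buffer. This is a minimal sketch using Python's standard library; the operation dict and the commented-out `wikibase_adapter.execute` call are again hypothetical.

```python
import queue
import threading

operations = queue.Queue()  # any buffer works; Kafka is just one option

def consumer_loop():
    # Background worker: executes operations in the order they were queued.
    while True:
        op = operations.get()
        if op is None:  # sentinel to stop the worker
            break
        # wikibase_adapter.execute(op)  # hypothetical adapter call
        print("executing", op)
        operations.task_done()

worker = threading.Thread(target=consumer_loop, daemon=True)
worker.start()

# Producer side: requests just enqueue operations and return immediately.
operations.put({"action": "create_item", "labels": {"en": "Example"}})
operations.join()  # optional: wait until the buffer is drained
operations.put(None)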
Thank you to @thewilly for the original idea.