Enelar opened this issue 2 years ago
Hi Enelar. To run updates on a DB, simply change the values of (non-meta) properties of a data instance (either created anew, or retrieved from the DB via one of the get functions); the engine will know to send update operations to the DB in an optimized way.
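For illustration, a minimal sketch of that flow might look like this (the `Person` model, the query, and the exact signature of the `get` call are placeholders for this sketch, not necessarily Derive's exact API):

```js
// Sketch only: the model name and the retrieval call are assumed for illustration.
// Retrieve an existing document via one of the get functions...
const person = await Person.get({ name: "Alice" });

// ...then simply assign to (non-meta) properties; the engine queues and sends
// the corresponding update operations to the DB in an optimized way.
person.age = 31;
person.city = "Berlin";
```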
If there are concurrent changes/updates to the same instance, the last one entered into the queue will be the one reflected in the document on the DB.
Regarding several services running updates on the same collections/documents: to prevent race conditions, you should add another messaging layer (such as a message server) connected to all the services. The messaging service will use Derive and will be connected to the DB, and all the services will be connected to the messaging service. Instead of sending direct updates or other DB operations, the services will send "messages" (what to update, what to insert, etc.), and the messaging server will then trigger the actual DB operations according to those messages.
You can implement the messaging server as a simple HTTP server with queues, or use a more robust ready-made solution like RabbitMQ.
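As a rough illustration of that layout, here is a sketch using RabbitMQ via the `amqplib` package (the queue name, the message shape, and how the messaging server applies the operation are all assumptions for this sketch, not part of Derive):

```js
const amqp = require("amqplib");

// Producer side (any service): describe the desired DB operation as a message
// instead of touching the DB directly.
async function requestUpdate() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue("db-ops");
  ch.sendToQueue("db-ops", Buffer.from(JSON.stringify({
    op: "update",                 // what kind of operation
    match: { name: "Alice" },     // which document(s)
    set: { status: "done" }       // what to change
  })));
  await ch.close();
  await conn.close();
}

// Consumer side (the single messaging server that uses Derive and is connected
// to the DB): apply operations one at a time, in queue order.
async function runMessagingServer() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue("db-ops");
  ch.consume("db-ops", async (msg) => {
    const { op, match, set } = JSON.parse(msg.content.toString());
    if (op === "update") {
      // Here the messaging server would use Derive (or the plain driver)
      // to perform the actual DB operation -- omitted in this sketch.
    }
    ch.ack(msg);
  });
}
```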
I have plans to perhaps write a dedicated messaging service for this purpose.
Some further notes on this issue, a year later, now that there are new features:
If several DB connections (e.g. two different services) update the same document in parallel or at the same time, MongoDB itself also has its own internal mechanisms: Mongo's docs state that every operation at the single-document level is atomic, see https://www.mongodb.com/docs/manual/core/write-operations-atomicity/.
If one service updates a document, and another service needs to perform an update only after the first update is done, you can use Derive's built-in support for ChangeStream (https://www.mongodb.com/docs/manual/changeStreams/).
Each of the data retrieval functions has a boolean `collectionWatch` flag argument. If you set it to `true`, ChangeStream support will be enabled for the retrieved document. With this feature on, you could:

- On the second service (the one that needs the first one to finish), retrieve the relevant document with `collectionWatch` set to `true`.
- Add a listener to its `$_dbEvents.updated` event. Since `collectionWatch` is on, it will also trigger upon updates to the document from any other external source. The listener function receives an `updatedFields` object as the second argument, with just the updated properties and their values; you can check those and issue the second update accordingly. You can also just check the data object itself (either via the `this` value inside the listener function, which will hold the most recent updated data, or via the third argument of the listener function, which also holds the most recent data object). A sketch follows below.
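Put together, the waiting service might look roughly like this (the `Order` model, the query, the argument position of `collectionWatch`, and treating `$_dbEvents` as a standard EventEmitter are all assumptions based on the description above, not confirmed API details):

```js
// Sketch only: model name, query, and exact call signatures are assumptions.
// Retrieve the document with collectionWatch set to true, so updates coming
// from any other external source (e.g. the first service) also fire the event.
const order = await Order.get({ orderId: 1234 }, /* collectionWatch */ true);

// Listen for updates on this document (assuming $_dbEvents is an EventEmitter).
order.$_dbEvents.on("updated", (first, updatedFields, data) => {
  // The first listener argument is not used here.
  // updatedFields holds just the changed properties and their new values;
  // data (the third argument) holds the most recent state of the document.
  if (updatedFields && "firstStepDone" in updatedFields) {
    // The first service is done -- issue the follow-up update by assigning
    // to a property of the up-to-date data object.
    data.secondStepDone = true;
  }
});
```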
In some corner cases I have to be sure that only one service is working on a document at a time. Usually I did a temporary lock on a document with the `updateOne` method, using a match pattern like `{locked: {$lt: now()}}` and an update like `{$set: {locked: now() + 30 sec}}`. Every successful update returned me a document for exclusive use (for the duration of the lock). How can I achieve the same results with your library? And how do I run updates on the database at all?
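For reference, the lock pattern described above looks roughly like this with the plain MongoDB Node.js driver (the `jobs` collection, `jobId`, and the 30-second duration are placeholders; `db` is assumed to be a connected `Db` instance):

```js
// Try to acquire a 30-second exclusive lock on a document: the filter only
// matches while the previous lock has expired, and updateOne is atomic at
// the single-document level, so at most one caller wins.
const now = Date.now();
const result = await db.collection("jobs").updateOne(
  { _id: jobId, locked: { $lt: now } },   // only if not currently locked
  { $set: { locked: now + 30 * 1000 } }   // hold the lock for 30 seconds
);

if (result.modifiedCount === 1) {
  // Lock acquired: this process has exclusive use of the document
  // until the lock expires.
}
```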