thenativeweb / node-cqrs-domain

Node-cqrs-domain is a node.js module based on nodeEventStore. It can be very useful as a domain component if you work with (d)ddd, cqrs, eventdenormalizer, host, etc.
http://cqrs.js.org/pages/domain.html
MIT License

Let MongoDB generate the aggregate ID? #132

Closed blissi closed 6 years ago

blissi commented 6 years ago

Hi,

I'm using MongoDB as event store. Is there a way to let MongoDB generate the IDs of the aggregates? It seems that your library generates the IDs on its own, since the data type in the Mongo collection is "string" instead of ObjectID.

I need this so I have reliable, always increasing IDs.

Thanks, Steven

nanov commented 6 years ago

The eventIds (which are also strings - but that's another topic) have nothing to do with the aggregateIds.

In order to assure multi-db support, the chosen type for IDs is a string.

This does not mean that you cannot generate ObjectIds, utilising the idGenerator option on the domain, which would look something like this:

const { ObjectID } = require('mongodb');

// ... domain setup ...

domain.idGenerator(() => new ObjectID().toString());

This will produce ObjectID-compatible aggregate IDs (i.e. stream IDs).

Later, during denormalisation, you may choose to convert those back to ObjectIDs and store them accordingly.

blissi commented 6 years ago

Thanks for the quick answer. I don't want to generate the IDs in my application; the MongoDB server should generate them. My CQRS domain application runs as multiple instances, so it isn't guaranteed that new aggregates always get a higher ID when multiple instances of the domain add them to MongoDB at the same time.

nanov commented 6 years ago

This is something that is entirely up to the read-model (denormalization) side, not the write side (domain).

The concept of aggregates means that there are no "aggregate" records/documents in your db; the state is rebuilt each time a command arrives. This means that you cannot (and frankly I don't see a reason to) let a particular db engine generate your IDs.

If you explain your problem in more detail, and the reason you want your aggregateIds this way, I could assist you more.

Fancy solutions such as a distributed id generator might be an option, but your problem seems far more trivial.

blissi commented 6 years ago

Ok, I'll try to explain:

I have an aggregate for a shipping -> when it is inserted by cqrs-domain, it gets a generated ID. The denormalizer also inserts a ViewModel for each shipping -> the ViewModel gets the ID of the aggregate, too.

Now there is another application that runs a few times per day: it queries the shipping ViewModels, then queries the carriers for current tracking data of these shippings. To make this efficient, I don't want to query all shippings in the database at once -> I want to query only 1000 at a time, get the tracking data, and then proceed with the next 1000. For this query to work, I need a sort criterion so I can use MongoDB's limit and skip functions.
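The batching described above can be sketched as follows. A real implementation would use the mongodb driver's find().sort({ _id: 1 }).skip(n).limit(batchSize); here an in-memory array stands in for the collection so the paging logic itself is runnable:

```javascript
// Page through documents in batches, assuming a sortable id field.
// Hypothetical in-memory stand-in for a MongoDB collection.
function* pageThrough(docs, batchSize) {
  const sorted = [...docs].sort((x, y) => (x._id < y._id ? -1 : 1));
  for (let skip = 0; skip < sorted.length; skip += batchSize) {
    yield sorted.slice(skip, skip + batchSize); // skip + limit
  }
}

const shippings = [{ _id: 'c' }, { _id: 'a' }, { _id: 'b' }];
for (const batch of pageThrough(shippings, 2)) {
  console.log(batch.map(d => d._id)); // ['a','b'] then ['c']
}
```

Note that this only works if the sort key is stable between queries, which is the crux of the question.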

nanov commented 6 years ago

Well, the way I would solve it is by making use of a MongoDB cursor (stream) on the tracking-query app side. Then you don't have to worry about skip and limit (which are not recommended for big lists anyway), and you can sort by timestamp (denormalizing the commit stamp of the create event is one way to go, or of each event if you want to track the last-updated time). This way you benefit from back-pressured streams and ensure each shipping model is queried at least once.

Another way to go would be to create an id-generation service (a dedicated process with a mini REST API, for example, or direct db access) that generates sequential ids for your aggregates (I would persist the counter with redis), and use this service to generate distributed, consistent ids.
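The core of such a service is just an atomic counter. In production it would live in redis (a single INCR is atomic across all domain instances); the in-memory counter below is a stand-in so the sketch runs, and the wiring into the domain is the same idGenerator hook shown earlier:

```javascript
// Sketch of a sequential id generator. The redis equivalent of the
// increment would be INCR on a shared key, which is atomic even
// with many domain instances calling it concurrently.
function makeIdGenerator(start = 0) {
  let counter = start; // in redis: a persisted counter key
  return function nextId() {
    counter += 1;
    return String(counter); // node-cqrs-domain expects string ids
  };
}

const nextId = makeIdGenerator();
// Wiring it in would look like: domain.idGenerator(nextId);
console.log(nextId(), nextId(), nextId()); // 1 2 3
```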

These are just off-the-top-of-my-head solutions that should work; there are of course other ways to solve this cleanly, utilising sagas and so on.

Bear in mind that theoretically the first solution I've suggested should also work (maybe inside a dedicated generation service).

EDIT:

As said, there is no way to let any specific db generate aggregate IDs, as aggregates themselves are not persisted into any db.

blissi commented 6 years ago

@nanov Thanks for your suggestion! I also thought of cursors, but then read that they can time out. Now I've found that you can instruct cursors not to time out via noCursorTimeout() -> perfect!