Open neverfox opened 9 years ago
Primarily because they are two very different things. You kinda nailed it with your last sentence - events are simply things that you care to log - cheap and possibly even considered useless at the time that you capture them. Page views, banner clicks, search result stats, and that kind of thing come to mind. Down the line, you generally discover that you are generating enough data that it is actually statistically relevant to your business and you can then figure out how to build an aggregate from the event stream in order to make some kind of business decision.
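To make that concrete, here's a rough sketch in plain Clojure (not this project's API, just illustrating the idea): the raw events are cheap bits of data you capture as they happen, and an aggregate is just a fold over the stream that you can write after the fact, once you know which question you want answered.

```clojure
;; Raw events: cheap to capture, individually not very interesting.
(def events
  [{:type :page-view    :page   "/home"    :at #inst "2015-01-01T10:00:00"}
   {:type :banner-click :banner "sale"     :at #inst "2015-01-01T10:01:00"}
   {:type :page-view    :page   "/pricing" :at #inst "2015-01-01T10:02:00"}
   {:type :page-view    :page   "/home"    :at #inst "2015-01-01T10:05:00"}])

;; An aggregate is just a fold over the stream, written down the line
;; when the data turns out to matter.
(defn views-by-page [events]
  (->> events
       (filter #(= :page-view (:type %)))
       (map :page)
       frequencies))

(views-by-page events)
;; => {"/home" 2, "/pricing" 1}
```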
Another key benefit of separating them is that you have the flexibility of choosing a technology that fits the bill. You can use Datomic as your aggregate, but you can also compose other aggregate databases that might do better at very specific problems - I'm thinking of things like Solr and RapidMiner. If you treat the aggregate as a second-class citizen instead of as your true source of truth, it gives you an immense amount of flexibility down the line.
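Building on the sketch above (again plain Clojure, not this project's code): because every read model is just a function of the stream, you can stand up a second, differently-shaped projection next to the first, or throw one away and rebuild it, without ever touching the events.

```clojure
;; A second projection over the same stream, shaped for a different tool
;; (say, documents you'd feed to a search index rather than counters).
(defn search-docs [events]
  (for [{:keys [type page at]} events
        :when (= :page-view type)]
    {:id (str page "-" (.getTime at)) :page page :viewed-at at}))

;; Rebuilding an aggregate is just re-running the fold from the start.
(def rebuilt-view (views-by-page events))
```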
The above point goes for the EventStore itself - it needs to be able to scale well and support stacks of writes, but reads can be fairly expensive and slow, because you really don't have to look at it much.
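In sketch form (a toy file-backed log, assumed purely for illustration; the real store would be whatever write-optimized log you pick): writes are dumb appends, and the only time you read the whole thing back is when you're rebuilding an aggregate.

```clojure
(require '[clojure.java.io :as io]
         '[clojure.edn :as edn])

;; Writes are cheap, dumb appends to the end of the log...
(defn append-event! [log-file event]
  (with-open [w (io/writer log-file :append true)]
    (.write w (prn-str event))))

;; ...and a full read only happens when rebuilding an aggregate.
(defn replay-events [log-file]
  (with-open [r (io/reader log-file)]
    (mapv edn/read-string (line-seq r))))
```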
I hope that kinda answers your question? Feel free to poke at me if you want more, or would like further depth somewhere :)
Those are good points, but I can't help thinking that Datomic then seems like caviar when fish sticks would do. Do you still find its time capabilities useful in this setup, or do they become mostly a novelty once you've delegated that role to the Event Store?
My other question concerns the fact that you store events in parallel with aggregation, whereas I'm used to hearing about an ES architecture where events are generated from successful aggregation (which goes along with the fact that I'm also not used to the notion of collecting events that haven't been incorporated into the domain model). In particular, what do you do if a transaction against the aggregate repository fails? Doesn't that create an inconsistency with the event store?
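To make the ordering I'm asking about concrete, here's roughly how I picture the two flows (hypothetical stub functions, nothing to do with this project's actual code):

```clojure
;; Hypothetical stubs, purely to make the question concrete.
(defn append-to-event-store! [event]   (println "event appended:" event))
(defn apply-to-aggregate!    [command] {:event (select-keys command [:type :data])})

;; The flow I'm used to: the event is only emitted after the aggregate
;; transaction succeeds, so the two stores can't drift apart.
(defn handle-command-aggregate-first [command]
  (let [{:keys [event]} (apply-to-aggregate! command)] ; may fail/throw
    (append-to-event-store! event)))                   ; only reached on success

;; The flow I read from the diagram: the event is captured regardless, and the
;; aggregate is updated in parallel -- hence my question about what happens
;; when this second step fails.
(defn handle-command-event-first [command]
  (append-to-event-store! (select-keys command [:type :data]))
  (apply-to-aggregate! command))
```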
This is a really neat project, but I had a question about your example architecture diagram. Why do you have both Datomic and an ES? I'm just getting familiar with Datomic, but one of the things that first attracted me to it was that -- through its ability to time-travel -- you get a sort of ES for free (i.e. ES while still getting to think in aggregates). Why did you decide to break the two apart and use Datomic as just an aggregate database? I'm guessing it has to do with granularity: the transactions you want to put in Datomic sit at a higher level than individual events?
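For reference, this is the kind of "time-travel" I mean (a hypothetical :order/status attribute and entity id, and it assumes a Datomic peer library and connection; shown as a rich-comment sketch, not anything from this project):

```clojure
(require '[datomic.api :as d])

(defn order-status
  "Status of an order entity in a given database value."
  [db order-eid]
  (d/q '[:find ?status .
         :in $ ?order
         :where [?order :order/status ?status]]
       db order-eid))

(comment
  ;; Same query, two points in time: the current database value, and the
  ;; database as it stood at an earlier instant.
  (def conn (d/connect "datomic:..."))  ; connection URI elided
  (order-status (d/db conn) order-eid)
  (order-status (d/as-of (d/db conn) #inst "2015-06-01") order-eid))
```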