Closed niclash closed 3 years ago
GutFeeling™ analysis: I already have an approach that is close to CQRS. Messages are commands arriving on Kafka, and different view models update their respective tables, each optimized for different queries. That suggests the read models should not contain the event-source stream, which currently sort of lives in Kafka.
Conclusion: drop the timestamps from the clustering keys in these read-view models and revisit the history aspect later. "Copy to another table" might also not be the right answer; it may rather be a matter of replaying selections of events from the event store.
Fixed, without dropping the `created` timestamps, but by adding a `deleted` column and managing it explicitly.
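A minimal CQL sketch of that soft-delete approach (table and column names here are hypothetical, not taken from the actual schema):

```sql
-- Hypothetical read-model table: `created` kept as a regular column,
-- `deleted` managed explicitly instead of removing rows.
CREATE TABLE orders_by_customer (
    customer_id uuid,
    order_id    uuid,
    created     timestamp,
    deleted     boolean,
    total       decimal,
    PRIMARY KEY ((customer_id), order_id)
);

-- Soft delete: flip the flag rather than issuing a DELETE.
UPDATE orders_by_customer
   SET deleted = true
 WHERE customer_id = ? AND order_id = ?;
```

Since `deleted` is not part of the key, readers would skip flagged rows client-side (or via a secondary index), rather than filtering on it in the partition query.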
Initially I wanted the full history of all changes right in the Cassandra row, but along the way I changed my mind without updating the clustering key. So queries now return multiple results for each update.
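To illustrate the problem (the schema below is a hypothetical sketch, not the real one): with a timestamp in the clustering key, every update writes a new row; dropping it from the key makes updates plain upserts that overwrite in place.

```sql
-- Before: timestamp as clustering column => one row per update,
-- so a query by account_id returns the whole change history.
CREATE TABLE account_view (
    account_id uuid,
    updated_at timestamp,
    balance    decimal,
    PRIMARY KEY ((account_id), updated_at)
);

-- After: timestamp still stored, just no longer part of the key,
-- so a query by account_id returns exactly one row.
CREATE TABLE account_view_v2 (
    account_id uuid,
    updated_at timestamp,
    balance    decimal,
    PRIMARY KEY (account_id)
);
```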
Choice 1: go down the route of an event-sourcing approach to record keeping.
Choice 2: just update records "normally" and add a "copy to another table" mechanism later, if full history is to be kept.
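Choice 2 could look like the following sketch (hypothetical table names): keep a current-state table with one row per record, and copy the previous version into a history table that does carry the timestamp in its clustering key.

```sql
-- Current state: plain upserts, one row per record.
CREATE TABLE record_current (
    id         uuid PRIMARY KEY,
    updated_at timestamp,
    payload    text
);

-- Full history: timestamp in the clustering key, one row per version.
CREATE TABLE record_history (
    id         uuid,
    updated_at timestamp,
    payload    text,
    PRIMARY KEY ((id), updated_at)
);
```

The copy itself would have to happen in the command handler (or by replaying events from the store), since CQL has no `INSERT ... SELECT`.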