holinov / zio-event-sourcing

Purely functional, concurrent and scalable persistence layer implementing CQRS

Wrong order of operations in append. #9

Open heksesang opened 4 years ago

heksesang commented 4 years ago

In `def append(evt: E): Task[Aggregate[E, S]]`, the code seems to first persist events and only then reduce the aggregate: https://github.com/holinov/zio-event-sourcing/blob/master/core/src/main/scala/zio/es/EventJournal.scala#L21

This means you could end up persisting invalid events to storage before confirming that they can actually be reduced.
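
Roughly, the current ordering looks like this (a simplified sketch, not the library's actual code; `Journal`, `reduce`, and `state` are illustrative stand-ins):

```scala
import zio._

// Illustrative stand-ins for the journal and the in-memory aggregate state.
trait Journal[E] {
  def persist(evt: E): Task[Unit]
}

final class Aggregate[E, S](
    journal: Journal[E],
    reduce: (S, E) => Task[S],
    state: Ref[S]
) {
  // Current ordering: persist -> reduce -> update ref.
  def append(evt: E): Task[Aggregate[E, S]] =
    for {
      _    <- journal.persist(evt) // 1. the event is written first
      s    <- state.get
      next <- reduce(s, evt)       // 2. the reduction may still fail here...
      _    <- state.set(next)      // 3. ...after the write is already durable
    } yield this
}
```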

holinov commented 4 years ago

This is intentional. The idea is that an aggregate can always be restored to its actual state by reading the event log. With the current ordering, if I persist data and something goes wrong, there are two possible outcomes:

1. The data is not written: I get a failure and do not reduce the state, so business data is not damaged and no effects are started.
2. The data is written but the aggregation function fails: assuming it is not a bug in the aggregation logic but an infrastructure problem, the next time I read the event log I restore to a consistent state.

But if I switch to "effect-first", I can easily end up in a situation where the aggregation was reduced (and launched long-running effects) but no data was added to the datastore.
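
In code terms, the recovery guarantee is just a replay fold over the journal (a hypothetical `replay` helper, not this library's API): as long as the reducer is deterministic, any state lost between persist and reduce is recomputed from the log.

```scala
// Hypothetical replay helper: rebuild aggregate state by folding the
// persisted events over the reducer, starting from a zero state.
def replay[E, S](log: List[E], zero: S, reduce: (S, E) => Task[S]): Task[S] =
  ZIO.foldLeft(log)(zero)(reduce)
```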

heksesang commented 4 years ago

As it is, if you call persist with events that are not valid according to the reducer, it will first persist the event to the event journal, then try to reduce the aggregate, and then update the ref with the new aggregate on success. But at that point the events are already stored in the journal, even though they were not valid.

However, if you had reduced the event first, the reduction would fail and no event would be persisted.

What you are doing now is persist -> reduce -> update ref; if you switch it around, you get reduce -> persist -> update ref (see the sketch below). The latter means no invalid events are ever persisted.
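
Reusing the illustrative `Aggregate` sketch from my first comment, the reordering would look like:

```scala
// Proposed ordering: reduce -> persist -> update ref.
def append(evt: E): Task[Aggregate[E, S]] =
  for {
    s    <- state.get
    next <- reduce(s, evt)       // 1. fail fast on an invalid event
    _    <- journal.persist(evt) // 2. only valid events reach the journal
    _    <- state.set(next)      // 3. publish the new state
  } yield this
```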

holinov commented 4 years ago

I see your idea, but filtering must happen on the read side. All incoming data should be saved; when reducing, you can skip any "unwanted" data you want. In event sourcing you can have more than one aggregate built from the same event log, and different aggregates may count different events as "unwanted". The method you are speaking about was made as a small optimization for the most common case, where there is only one aggregate type and the aggregate is kept for a long time (I am not quite sure whether I need to keep it, because you just made the mistake I was thinking about when I created this method).
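
As a sketch of what I mean by read-side filtering (event types made up for illustration), two aggregates can fold the same log, and each reducer simply skips the events it does not care about:

```scala
import zio._

sealed trait AccountEvent
final case class Created(id: String)     extends AccountEvent
final case class Deposited(amount: Long) extends AccountEvent

// One aggregate over the log: the running balance ignores Created.
def balanceReducer(s: Long, e: AccountEvent): Task[Long] = e match {
  case Deposited(amount) => ZIO.succeed(s + amount)
  case _                 => ZIO.succeed(s) // skip "unwanted" events
}

// Another aggregate over the same log: a deposit counter, with its own
// notion of which events are "unwanted".
def depositCountReducer(s: Int, e: AccountEvent): Task[Int] = e match {
  case Deposited(_) => ZIO.succeed(s + 1)
  case _            => ZIO.succeed(s)
}
```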

heksesang commented 4 years ago

But if the event can't be reduced, then you can never rebuild that aggregate, because you will always hit the invalid event on replay once it is stored. If I have a Created event that must not be reducible twice for the same aggregate, but it is stored twice in the journal, how do you solve this?
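
Concretely (reusing the illustrative event types above), a strict reducer that rejects a second Created can never finish replaying a journal that already contains two of them:

```scala
final case class Account(id: String)

// Strict reducer: a second Created is an error. Once the journal holds
// two Created events, every replay fails at the duplicate.
def strictReducer(s: Option[Account], e: AccountEvent): Task[Option[Account]] =
  (s, e) match {
    case (None, Created(id))   => ZIO.succeed(Some(Account(id)))
    case (Some(_), Created(_)) => ZIO.fail(new IllegalStateException("Created reduced twice"))
    case (acc, _)              => ZIO.succeed(acc)
  }
```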

holinov commented 4 years ago

"But if the event can't be reduced": I don't get this. I can always filter out unneeded events in the reducer. And if those events are kept in the event log, I am still able to build analytics aggregations on this "corrupted" data.
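
In the same illustrative terms: the business reducer skips the duplicate instead of failing, while a separate analytics aggregate over the same log can still count it.

```scala
// Lenient business reducer: a duplicate Created is skipped, not fatal.
def lenientReducer(s: Option[Account], e: AccountEvent): Task[Option[Account]] =
  (s, e) match {
    case (None, Created(id)) => ZIO.succeed(Some(Account(id)))
    case (acc, _)            => ZIO.succeed(acc) // duplicate Created is filtered on read
  }

// Analytics aggregate over the same log: here the duplicates are data.
def createdCountReducer(count: Int, e: AccountEvent): Task[Int] = e match {
  case Created(_) => ZIO.succeed(count + 1)
  case _          => ZIO.succeed(count)
}
```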