Closed unixslayer closed 11 months ago
An Event Sourcing Aggregate that was called should still be able to record its own events using #[AggregateEvents]. Those events should also be persisted together with the aggregate being called.
This is possible with an impure aggregate using an internal recorder. Basically, you can switch between two ways of returning events.
@unixslayer in case of a Pure ES Aggregate, let's strive for handling that too. This way we can keep the aggregate free of event-based state and still be able to use this functionality.
Maybe, in the case of a Pure ES Aggregate, we should allow returning events / aggregate instances?
We can collect the list of possible aggregate classes during the configuration-building phase. Then, before saving, we could redirect events to the Event Store and aggregates to their Repositories.
Wdyt?
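To make the redirection idea concrete, here is a minimal sketch of that routing step. Everything below is illustrative: the in-memory repository/event-store stand-ins are assumptions for this thread, not Ecotone's actual internals.

```php
<?php
// Illustrative sketch of the proposed redirection, not Ecotone's actual API.
// The repository / event-store stand-ins below are hypothetical.

class StateBasedAggregate {}
class StateBasedAggregateCreated {}

// Collected during the configuration-building phase, e.g. via AnnotationFinder.
$knownAggregateClasses = [StateBasedAggregate::class];

$savedAggregates = []; // stand-in for the aggregate Repository
$appendedEvents  = []; // stand-in for the Event Store

// Whatever a pure ES aggregate's command handler returned:
$returned = [new StateBasedAggregateCreated(), new StateBasedAggregate()];

// Before saving, redirect events to the Event Store and aggregates to Repositories.
foreach ($returned as $item) {
    if (in_array(get_class($item), $knownAggregateClasses, true)) {
        $savedAggregates[] = $item;   // aggregate instance -> Repository
    } else {
        $appendedEvents[] = $item;    // anything else -> Event Store
    }
}
```

The key point is that no framework class leaks into the returned array; the split is decided purely from configuration-time knowledge of which classes are aggregates.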
The biggest challenge here will be handling the different results returned from aggregates. My first approach was to use an internal recorder. However, after some additional thinking, I came up with the idea of returning a result that could be interpreted by Ecotone.
returning a result that could be interpreted by Ecotone.
If I understand correctly, that would require using framework-specific classes inside the model. And one of Ecotone's tenets is to keep your business code clean of framework-specific classes and interfaces. This makes a huge difference, as there is no other framework that provides tooling for building models yet avoids coupling the code with the framework.
However, we can recognize which objects are aggregates, as aggregates are marked with the Aggregate attribute and can be found using AnnotationFinder.
Everything that is not an aggregate has to be an event then :)
If I understand correctly that would require using framework specific classes inside the model
@dgafka not necessarily. I was actually thinking about using attributes to tell Ecotone what is what. Consider the following example:
<?php

#[\Ecotone\Modelling\Attribute\EventSourcingAggregate]
class EventSourcingAggregate
{
    public function doStuff(SomeCommand $command): SomeResult
    {
        // check business stuff
        $result = new SomeResult();
        $result->aggregate = new StateBasedAggregate();
        $result->events = [new StateBasedAggregateCreated()];

        return $result;
    }
}

#[\Ecotone\Modelling\Attribute\Aggregate]
class StateBasedAggregate {}

#[\Ecotone\Modelling\Attribute\Result]
class SomeResult
{
    #[\Ecotone\Modelling\Attribute\ResultAggregate]
    public StateBasedAggregate $aggregate;

    #[\Ecotone\Modelling\Attribute\ResultEvents]
    public array $events;
}
The called aggregate is part of a flow, and I'm afraid reusing the already existing attributes may cause confusion, not only in the domain model, since #[Aggregate] and #[EventSourcingAggregate] are part of the configuration setup. However, when a result marked with the proper attributes is returned, Ecotone would know what to do with it in the context of the aggregate being called.
We are not extending the domain model in any way other than what is already required - by using attributes.
With that example, Ecotone will know what to do depending on what was returned. The current behavior will be maintained, so a model can still return a collection of events for ES aggregates and so on, but in more complex scenarios I can imagine using a richer result object which still describes the model and lets the framework know how to persist state changes.
Ok, I see. So, some domain-specific result object that will be understandable by Ecotone. Sounds cool to me :)
@jlabedo as far as I remember you were mentioning this feature. Wdyt?
Hey, very interesting topic! In fact, I mentioned this feature on Discord a few months ago.
I have some hesitations about promoting the design of saving multiple aggregates in the same transaction. It's ok to have one aggregate responsible for the creation of another, and in my use case it was ok if only the returned aggregate is persisted. If we want to have side effects on multiple aggregates, shouldn't we promote using domain events?
aggregate responsible for the creation of another, and in my use case it was ok if only the returned aggregate is persisted.
I think we can still make this explicit, by allowing a Result object to be returned only when there are multiple things to be persisted, and by type-hinting the Aggregate as the return type directly when we just want to return a new Aggregate.
I have some hesitations to promote the design of saving multiple aggregates in the same transaction
@unixslayer can you share the scenario where you would like to both modify the called aggregate and create a new one?
@dgafka @jlabedo please, consider this.
The system allows issuing a certificate with the following assumptions:
From the above description, the design may look as follows:
- Certificate and CertificateCycle
- CertificateCycle is identified by a Standard and holds the logic behind issuing a new Certificate and the consequences of such an operation.
- CertificateCycle will expose the public method issueCertificate, which will return a new instance of Certificate - if possible based on the current state of CertificateCycle.
- CertificateCycle should contain references to all issued Certificates, with enough data to handle the operations it provides (not all data from Certificate has to be kept).
- Certificate will expose a public API that is relevant only to its instance.
- Certificate will contain a reference to the CertificateCycle from which it originates.

The relation between CertificateCycle -> Certificate seems to be AggregateRoot -> Entity. However, an Entity can be accessed only via the API provided by its AggregateRoot, and the API that Certificate exposes is itself big enough to make CertificateCycle too big. It is also redundant to load a huge AggregateRoot only to call a single method on one of its Entities that is relevant only to that Entity. Not to mention that we have more relations like this one in the system; I'm only describing one of them. That is why we've decided to keep our Aggregates as small as possible.

Also, Ecotone does not support "internal" Entities.

Our current implementation works as follows:

- CertificateCycle handles the IssueCertificate command and, if successful, records a CertificateIssued event.
- Certificate has a factory method acting as an EventHandler triggered by the CertificateIssued event, and it returns its own set of events.

Imagine how much data is being unnecessarily copied here. Also, the complexity of two Aggregates "talking" with each other is not as convenient as we would like it to be.

The ideal would be for CertificateCycle to return a new instance of Certificate; however, at the same time, there is a need for CertificateCycle to save a reference to the newly created Certificate.

I used a publication by Vaughn Vernon and his example of creating one aggregate by another. However, in that example, when Product creates a new instance of BacklogItem, it only publishes an Event.
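To illustrate the ideal described above, here is a minimal sketch of what we would like to write. The class and method names are taken from this thread, but the shape of the return value (and Ecotone persisting both objects) is the proposed, not yet existing, behavior.

```php
<?php
// Sketch of the desired design from this thread. Ecotone persisting both
// returned objects in one transaction is the proposal, not current behavior.

class CertificateIssued
{
    public function __construct(public string $certificateId) {}
}

class Certificate
{
    public function __construct(public string $id, public string $cycleId) {}
}

class CertificateCycle
{
    /** @var string[] references to all issued Certificates */
    private array $issuedCertificateIds = [];

    public function __construct(private string $standard) {}

    // Returns the new Certificate and, at the same time, records the
    // reference on the cycle itself - both should be persisted together.
    public function issueCertificate(string $certificateId): array
    {
        // ...check the current state of the cycle here...
        $this->issuedCertificateIds[] = $certificateId;

        return [
            new Certificate($certificateId, $this->standard),
            new CertificateIssued($certificateId),
        ];
    }
}
```

This keeps each aggregate small while still letting one transaction cover both the new Certificate and the updated CertificateCycle.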
Also, Ecotone does not support "internal" Entities.
Really? Why not?
Imagine how much data is being unnecessarily copied here. Also, the complexity of two Aggregates "talking" with each other is not as convenient as we would like it to be.
Are you talking about data copied into memory? Because I don't see the difference at the db level.
The ideal would be for CertificateCycle to return a new instance of Certificate; however, at the same time, there is a need for CertificateCycle to save a reference to the newly created Certificate.
Thinking of it again, there is no point in not persisting both aggregates. The builder aggregate IS the transaction boundary, so why not persist it? Moreover, I guess it even SHOULD be persisted by incrementing its version number, even when there is no mutation of its internal state (for optimistic locking)?
So looks good to me for state based aggregates.
Regarding event sourced aggregates, it still feels strange to me at the moment.
Thanks @unixslayer for detailed answer and reference links :)
So looks good to me for state based aggregates.
In that case, I think we all agree that there are cases where modifying an existing aggregate plus storing a new aggregate instance has its place :)
Regarding event sourced aggregates, it still feels strange to me at the moment.
In the case of an ES Aggregate with an Internal Recorder, we will first record the events internally and then return the new aggregate instance. So we don't even need a Result Object - is that correct, @unixslayer?
In the case of a Pure ES Aggregate, things get a bit more complicated, as we already want to return an array of events and now we want to add an Aggregate to that. So that would require some Result Object, as Piotr suggested, where we define what is what :)
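The contrast between the two cases can be sketched as follows. This is an illustration only: the internal recorder is simplified, and the Result-style object mirrors the attributes proposed earlier in this thread, which do not exist in Ecotone today.

```php
<?php
// Sketch contrasting the two styles discussed above; names are illustrative.

class OrderCreated {}
class Order {}

// 1) ES aggregate with an internal recorder: events are recorded
//    internally, so the handler can simply return the new aggregate.
class CycleWithRecorder
{
    private array $recordedEvents = [];

    public function open(): Order
    {
        $this->recordedEvents[] = new OrderCreated(); // recorded, not returned
        return new Order();                           // only the aggregate is returned
    }

    public function releaseEvents(): array
    {
        $events = $this->recordedEvents;
        $this->recordedEvents = [];
        return $events;
    }
}

// 2) Pure ES aggregate: nothing is recorded internally, so both the events
//    and the new aggregate must travel in an explicit Result object
//    (imagined as #[ResultAggregate] / #[ResultEvents] in the proposal).
class OpenResult
{
    public function __construct(
        public Order $aggregate,
        public array $events,
    ) {}
}

class PureCycle
{
    public function open(): OpenResult
    {
        return new OpenResult(new Order(), [new OrderCreated()]);
    }
}
```

In the first case Ecotone can drain the recorder and route the returned aggregate; in the second it needs the Result object to tell the events and the aggregate apart.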
@jlabedo I had in mind a solution similar to how Broadway handles child entities.
It is possible to have "internal" Entities, and we are implementing them, as long as those Entities are small and recreating them from Events does not require a lot of code that we could omit by separating smaller aggregates.
@jlabedo I had in mind a solution similar to how Broadway handles child entities.
It is possible to have "internal" Entities, and we are implementing them, as long as those Entities are small and recreating them from Events does not require a lot of code that we could omit by separating smaller aggregates.
Ha ok, you meant ecotone does not support child event sourced entities ? I agree with that then.
We may have a business case that requires having two aggregates, where one is in charge of creating the second one.
Currently, Ecotone does not provide that possibility.
The following behavior is required: the called Event Sourcing Aggregate should still be able to record its own events using #[AggregateEvents], and those events should also be persisted with the aggregate being called. The last scenario should be considered valid as long as both Aggregates (ES and state-based) are persisted in the same transaction.