marcusolsson opened 8 years ago
Hey, I watched your Golang UK talk on YouTube, it was great!
Validation is an interesting topic... I'm running into some issues applying ddd concepts to my own little Go DDD experiment and validation is one of them.
I'm curious if you have any thoughts about the whole "always-valid" domain entity debate. Personally I'm in the always valid camp, however the more I try putting this into practice in Go, the more resistance I seem to run into.
I initially had code very similar to what you have here, where domain entities are allowed to be in some invalid state and you have to manually run `Validate()` on them in order to verify validity. This relates directly to issue #12, where the domain model is exposed to presentation logic (json tags) because of the lack of DTOs. Adding validation tags to the domain model will further exacerbate this problem: if we do that, the domain model will also be exposed to validation logic that belongs in the application service layer.
If we forsake the use of DTOs and unmarshal JSON directly into domain models, then we've already instantiated a domain object that fails to enforce its own invariants... which I think is very bad, but Go seems to encourage this. On the other hand, introducing DTOs leads to a lot of code duplication and copying of DTO fields to and from domain objects, making the code brittle and (for large objects) slow.
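For concreteness, here is a minimal sketch of the DTO route described above. The `cargoDTO` and `NewCargo` names are illustrative (not goddd's actual types): the DTO absorbs the JSON tags, and the domain constructor is the only way to obtain a `Cargo`, so an invalid one can never be instantiated.

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// cargoDTO is a hypothetical transport-layer shape; field names are illustrative.
type cargoDTO struct {
	TrackingID string `json:"tracking_id"`
	Origin     string `json:"origin"`
}

// Cargo is the domain object; its constructor enforces invariants,
// so it can never exist with an empty tracking ID.
type Cargo struct {
	trackingID string
	origin     string
}

func NewCargo(trackingID, origin string) (Cargo, error) {
	if trackingID == "" {
		return Cargo{}, errors.New("tracking ID is required")
	}
	return Cargo{trackingID: trackingID, origin: origin}, nil
}

func main() {
	var dto cargoDTO
	if err := json.Unmarshal([]byte(`{"tracking_id":"ABC123","origin":"SESTO"}`), &dto); err != nil {
		panic(err)
	}
	c, err := NewCargo(dto.TrackingID, dto.Origin)
	if err != nil {
		panic(err)
	}
	fmt.Println(c.trackingID) // ABC123
}
```

The cost is exactly the field-copying the comment above complains about; the benefit is that the domain type never carries transport concerns.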
It's an interesting discussion...
Thank you for watching my talk, I'm glad you liked it!
I would also put myself in the same camp as you. Ideally, domain objects should not be able to exist in a state where their invariants are violated. There's nothing in Go, to the best of my knowledge, that can completely prevent you from putting your domain object in an inconsistent state*.
While you could go to great lengths to achieve this - building small packages with interfaces for your domain objects to limit access to internal state - it would most likely not be very idiomatic. For goddd, I've tried to be pragmatic rather than clinging to DDD ideals that wouldn't make sense, i.e. when in Rome, do as the Romans do.
> Adding validation tags to the domain model will further exacerbate this problem since if we do that, the domain model will also be exposed to validation logic that belongs in the application service layer.
Not sure I understand what you mean here. What validation logic are you referring to?
I'm leaning more towards the `Validate(...)` approach rather than validation tags, mainly because tags provide relatively limited ways of validating your objects, e.g. no "inter-field" constraints. `Validate(...)` would also not depend on any third-party packages. At the moment, it feels like the most idiomatic solution.
It's indeed an interesting discussion, and I'm still not sure what the optimal solution would be. If you have any suggestions, I'd love to hear them out.
* Really looking forward to seeing how dependently typed languages like Idris could let you catch violated invariants at compile time.
> While you could go to great lengths to achieve this - building small packages with interfaces for your domain objects to limit access to internal state - it would most likely not be very idiomatic. For goddd, I've tried to be pragmatic rather than clinging to DDD ideals that wouldn't make sense, i.e. when in Rome, do as the Romans do.
It's a very good point. It's always a struggle striking the right balance between pragmatism and idealism.
> Not sure I understand what you mean here. What validation logic are you referring to?
What I mean is that checking that some data complies with structural rules (i.e. min/max string length, UTF-8 encoding, alphanumeric only... all of the things the go-playground/validator package seems to do) is not a domain concern, and thus should be enforced at a lower level of abstraction. Whether an Itinerary Leg's LoadTime is represented as an int64 timestamp or an ISO 8601 string should not matter to the domain, but it does matter to the application logic interpreting and converting it into a domain value object.
Once again this is from an idealist's perspective, but I think that in addition to domain validation, there needs to be validation logic embedded in other layers as well. Checking that incoming data is syntactically valid should be an application service concern, executed before enforcing invariants at the domain level. As another example, where would you validate a collection query with pagination parameters? It's most likely not an application service concern and most definitely not a domain concern, so you'd probably need presentation-level validation to take care of it, since pagination is really just another way of presenting domain data.
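The pagination example above could be sketched as a small presentation-layer check; the type and its limits are purely illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

// pageParams lives in the presentation layer; the limits are illustrative.
type pageParams struct {
	Page    int
	PerPage int
}

// validate rejects nonsensical pagination before anything reaches the
// application service; no domain concept is involved.
func (p pageParams) validate() error {
	if p.Page < 1 {
		return errors.New("page must be >= 1")
	}
	if p.PerPage < 1 || p.PerPage > 100 {
		return errors.New("per_page must be between 1 and 100")
	}
	return nil
}

func main() {
	fmt.Println(pageParams{Page: 1, PerPage: 20}.validate()) // <nil>
	fmt.Println(pageParams{Page: 0, PerPage: 20}.validate())
}
```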
As for domain validation, the only problem I see with extracting all behavioural validation into a `Validate(c Cargo)` is that you don't capture the context in which you are performing the validation. It may be valid to assign a `Cargo` to a route, but it might not make sense to specify a new route for that same `Cargo`; perhaps it is already in transit, or has already been delivered. It would make more sense to validate each action on a case-by-case basis, maybe using decorators or the specification pattern to keep it DRY. `Validate(c Cargo)` would make sense if your aim is to perform syntactic validation on `Cargo`, but I don't think it would work well for semantic validation.
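A sketch of that context-dependent validation, with an illustrative `Status` enum (goddd's actual model differs): the check lives in the action itself, not in a whole-struct `Validate`.

```go
package main

import (
	"errors"
	"fmt"
)

// Status values are illustrative, not goddd's actual states.
type Status int

const (
	NotReceived Status = iota
	InTransit
	Delivered
)

type Cargo struct {
	status Status
	route  string
}

// AssignToRoute validates the action in its context rather than the
// whole struct: re-routing is only legal before the cargo is moving.
func (c *Cargo) AssignToRoute(route string) error {
	if c.status == InTransit {
		return errors.New("cargo is already in transit")
	}
	if c.status == Delivered {
		return errors.New("cargo has already been delivered")
	}
	c.route = route
	return nil
}

func main() {
	c := &Cargo{status: NotReceived}
	fmt.Println(c.AssignToRoute("SESTO-FIHEL")) // <nil>
	c.status = InTransit
	fmt.Println(c.AssignToRoute("another route"))
}
```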
Idris looks fascinating, thanks for the link.
> Checking that incoming data is syntactically valid should be an application service concern and should be executed before enforcing invariants at the domain level.
Excellent observation that I don't think is communicated clearly enough in the current state. Some syntactic validation is currently done in the application services, but there are few examples of domain validation.
I think you might be right about `Validate(c Cargo)`. I'm thinking that this issue could be closed with some additional documentation, or even a blog post, to highlight the points you're making.
> While you could go to great lengths to achieve this - building small packages with interfaces for your domain objects to limit access to internal state - it would most likely not be very idiomatic. For goddd, I've tried to be pragmatic rather than clinging to DDD ideals that wouldn't make sense, i.e. when in Rome, do as the Romans do.
Using interfaces for entities and VOs isn't necessary. With getters and setters, you can achieve fairly robust encapsulation for your domain objects without straying too far from idiomatic Go. A simple VO might look like this:
```go
import "errors"

type Name struct {
	name string
}

// NewName is the only constructor, so a blank Name can never exist.
func NewName(s string) (Name, error) {
	if s == "" {
		// Return the zero value on failure, not the invalid name.
		return Name{}, errors.New("name cannot be blank")
	}
	return Name{name: s}, nil
}

func (n Name) String() string {
	return n.name
}
```
Just for clarity, I'm not endorsing interfaces for this purpose. I think accessor methods are a good compromise that prevents other packages from mucking up your invariants, though they won't protect you from doing that within your own package. One thing I like about them is that they at least communicate immutability. I'd be open to reviewing a PR if anyone's interested in making the change.
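The same pattern extends from VOs to entities: the "idiomatic setter" mentioned above can carry the invariant, so callers in other packages simply cannot produce an invalid state. A hypothetical entity (names are illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

// Customer is an illustrative entity: fields stay unexported, and the
// mutator enforces the invariant, so other packages cannot bypass it.
type Customer struct {
	name string
}

// Rename acts as an idiomatic setter with business logic.
func (c *Customer) Rename(name string) error {
	if name == "" {
		return errors.New("name cannot be blank")
	}
	c.name = name
	return nil
}

// Name is the read accessor; note the Go convention of no Get prefix.
func (c *Customer) Name() string { return c.name }

func main() {
	var c Customer
	fmt.Println(c.Rename("")) // rejected
	if err := c.Rename("Alice"); err != nil {
		panic(err)
	}
	fmt.Println(c.Name()) // Alice
}
```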
This is a very interesting topic. I'm currently working on a set of microservices using a DDD approach, and I've been running into some of the issues discussed here. My services are gRPC-based, so the transport layer is protobuf auto-generated code that I treat like a DTO, and I'm using MongoDB as storage.
I went the non-pragmatic way when I started the project. Now I have a transport layer with gRPC DTOs, and my domain objects have all fields unexported, with exported constructors and getters/idiomatic setters containing business logic, which is really good. But the really, really painful part: a lot of DTOs with exported fields and bson tags to be able to store all this information in the database.
The domain layer works pretty well, but I have a lot of transformations between gRPC and MongoDB DTOs, and it's really, really painful. To be honest, I think it's better to export all fields on the domain objects, add the needed tags to them, and use the Validate method you are using. A much more pragmatic approach. The only problem is that anyone developing around your domain objects has to be warned to use the exported methods instead of accessing fields directly, and to run validation manually.
But I'm 100% sure that's much better than having 400 lines of DTOs and transformers. The transport-layer DTOs are fine, but every time I'm writing a repository implementation I feel stupid, really, and I don't know how to manage this with unexported fields.
@hectorgimenez We used to have a lot of internal debate about this very topic, so I feel your pain. In our case, we decided to use unexported fields and fully encapsulate the domain model in its own package (infrastructure, app, and ui code in other packages), and accept the fact that we'd have to explicitly write code to map from our domain model to our persistence model, and to view models. If your models (domain, persistence, view) are closely aligned, the mappings shouldn't be too painful. In our case, however, our domain model and persistence model don't align well, and the mapping code is significant. Tbh this is still a point of contention for us.
Our domain model is complex, so having a correctly encapsulated model is worth the extra trouble. If your microservice is mainly CRUD and you have few or no domain invariants to enforce, I would definitely skip DDD and design your app to be database-centric.
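The explicit domain-to-persistence mapping described above might look like this minimal sketch (type and tag names are illustrative, not this project's actual code):

```go
package main

import "fmt"

// Domain model: unexported fields, fully encapsulated.
type Cargo struct {
	trackingID string
	origin     string
}

func (c Cargo) TrackingID() string { return c.trackingID }
func (c Cargo) Origin() string     { return c.origin }

// Persistence model: exported fields with storage tags, kept out of
// the domain package in real code.
type cargoRecord struct {
	TrackingID string `bson:"tracking_id"`
	Origin     string `bson:"origin"`
}

// Explicit mapping in the infrastructure layer: tedious to write and
// test, but it keeps storage concerns away from the domain model.
func toRecord(c Cargo) cargoRecord {
	return cargoRecord{TrackingID: c.TrackingID(), Origin: c.Origin()}
}

func fromRecord(r cargoRecord) Cargo {
	return Cargo{trackingID: r.TrackingID, origin: r.Origin}
}

func main() {
	c := Cargo{trackingID: "ABC123", origin: "SESTO"}
	r := toRecord(c)
	fmt.Println(r.TrackingID, fromRecord(r).Origin()) // ABC123 SESTO
}
```

Each mapper is a point of failure (a forgotten field compiles fine), which is exactly why a round-trip test per aggregate is worth having.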
@eriklott thanks, at least I'm not alone on this :) Our domain is also complex, and we are triggering domain events on some actions that are dispatched and also stored in the database (we are using protobuf for the events in order to deal with serialization).
The main problem is not the time invested in writing the code that maps the domain model to our persistence and transport models. For me the main problem is that each transformation is a big point of failure (a forgotten field, a wrong mapping...), so we have to test the transformations (or at least try to cover them in an acceptance test that passes through all the layers). And I'm not totally sure this extra complexity is worth it.
But the most noticeable thing for me is that I feel like I'm fighting against the language. It seems I'm forcing it to do things it is not designed for, or at least doesn't give me facilities for. We have around 20 services working fine and pretty fast, but everyone on the team feels the same, and it seems the Go community is focused on other kinds of things; it's difficult to find good enterprise software architectures in Go.
@hectorgimenez, 100%. This pretty much summarizes our experience as well.
The current domain validation is a bit lacking. Let's discuss some alternatives in this issue:
The simplest way would be to go all stdlib and just extract the validation into its own function: `func Validate(c Cargo) error`.
It could also be interesting to take a closer look at validator: https://github.com/go-playground/validator