robertgr991 opened this issue 1 year ago
Hello, @robertgr991 👋!
First of all, thank you very much for opening this issue and contributing to this project.
The truth is that I agree with you. I had been wanting to implement Vertical Slice Architecture in this project for some time, but due to lack of time, I never took the step. I believe that, together with DDD principles and a Hexagonal/Clean Architecture, you can create projects that scale very well and, most importantly, that are maintainable over time.
Regarding the structure you propose, what do you think if we adapt it this way? 🤔
.
├── apps
│   ├── console
│   ├── graphql
│   ├── grpc
│   └── rest
├── modules
│   ├── module1
│   │   ├── application
│   │   ├── domain
│   │   └── infrastructure
│   └── module2
│       ├── application
│       ├── domain
│       └── infrastructure
└── shared
    ├── application
    ├── domain
    └── infrastructure
This type of division, paying attention to the Screaming Architecture, could indicate the following:

- The `modules` directory contains the different modules/features of the Bounded Context that is represented by this software artifact. I opted for `modules` since it is more closely linked to the concepts of Domain-Driven Design.
- The `shared` folder represents the Shared Kernel. In this case, all the utilities of the different layers can become common between the different modules.
- The `apps` folder reflects the presentation layer, in which the business logic and use cases of the application can be exposed using different technologies.

I would like to hear your opinion about this proposal. In this way we can reach a structure that is as appropriate as possible and that can fit the large majority of developers who have the same concerns as us.
Thanks again 🙂
Hello!
Thank you for the detailed response!
This step is an important one and it's better that you didn't proceed with it in a hurry.
I agree with your proposal; here are my thoughts:

- `modules` aligns more with the concepts of DDD.
- Will the presentation logic of each technology live in its own subdirectory, such as `apps/rest`, for example?

As a side note, these changes will steer the template a little bit away from its initial objective of proposing a template for a REST API.
Hello again, @robertgr991!
The intention of the template is not to create a monolith or a monorepo in which to contain several Bounded Contexts. I prefer simplicity and to keep contexts physically separated (in different repositories). So this template will only deal with the use cases of a specific Bounded Context. What do you think?
We confirm `modules` as the directory to contain the different features.
We confirm `shared` as the directory to contain the Shared Kernel.
Regarding the presentation layer, in my opinion, I would manage everything in the different subdirectories of the `apps` folder, one for each technology used (`rest`, `graphql`, `grpc`, etc.). In this case, a REST API is being implemented. I reason it in the following way:
I think that the applications that will represent the primary adapters should be managed independently. Why? In my opinion, use cases can be consumed in many different ways and using different types of technologies, for example `rest`, `grpc`, `graphql`, a console CLI app, etc. If we add one more directory to each module to contain the presentation layer, we would be polluting it and losing focus on the functionality itself. In the end, it is about the different clients that are going to consume the use cases of the different modules. Why does it make sense for secondary adapters to be in each module's `infrastructure` directory? In my opinion, despite being infrastructure, they are necessary for the application to work. For example, we need data to be persisted, emails to be sent, messages to be obtained from messaging brokers, etc.
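As a rough sketch of that split (all names here are illustrative, not taken from the template): the module's application layer exposes a use case that depends on a secondary port, the module's infrastructure provides the adapter, and the code under `apps/rest` is only a thin client of the use case.

```typescript
// modules/user/application: the use case and the secondary port it needs.
interface User {
  id: string;
  email: string;
}

interface UserRepository {
  save(user: User): void; // secondary port, implemented in infrastructure
}

class CreateUserUseCase {
  constructor(private readonly repository: UserRepository) {}

  execute(id: string, email: string): void {
    // domain rules would live in the domain layer; kept minimal here
    this.repository.save({ id, email });
  }
}

// modules/user/infrastructure: a secondary adapter implementing the port.
class InMemoryUserRepository implements UserRepository {
  readonly users = new Map<string, User>();

  save(user: User): void {
    this.users.set(user.id, user);
  }
}

// apps/rest: the primary adapter only translates a request into a use-case call.
function postUserHandler(
  useCase: CreateUserUseCase,
  body: { id: string; email: string },
): { status: number } {
  useCase.execute(body.id, body.email);
  return { status: 201 };
}
```

Deleting `apps/rest` would remove only the `postUserHandler`-style glue; the module itself stays intact, which is the property argued for here.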
On the other hand, imagine that we want to eliminate an app, for example, the REST API. If we delete the `apps/rest` folder, we would have already eliminated that presentation system. If we had this same thing distributed across all the modules, we would have to go module by module eliminating each folder related to the REST API.
Another point is that, if you want to externalize this layer and take it to another repository, it could be done without too much effort. Imagine the case in which we encapsulate the core of our application, that is, the `modules` and `shared` directories, in a library. That library could be used by any app with any technology. In short, applications act as mere clients of our domain, always through use cases.
The last point is that, if we apply the Screaming Architecture to this topic, it is easier to understand what the entry points of our application are if we have a single directory that contains all of these implementations. If the presentation layer were distributed across every one of the modules, we would have to dig into the code to find out how we can consume our use cases.
This is an example that does something similar to what I propose.
I'm thinking that maybe it would be more convenient to rename the `apps` directory to `presentation`. What do you think?
This is my humble opinion. As I said in the previous comment, I am open to any other proposal.
Thank you again.
I think we should continue with the use case of implementing one Bounded Context, as this is also easier to adopt. The means of communicating between Bounded Contexts can be different depending on the context, and it's more difficult to create a boilerplate for it.
Your arguments for keeping the presentation logic decoupled from each module are good, and I agree. In this case, the presentation layer is the least coupled with the other layers, and the flexibility of removing or moving a specific system with relative ease speaks for itself. This will also set the rule that it's not the responsibility of the module to present itself, so to speak. Keeping the `presentation` layer inside each `module` may also contradict the Screaming Architecture, because technology-related terms would be used for those filenames.
As I recall, DDD doesn't have specific terminology for the presentation layer, and different architectures use different terms, but, to be fair, I think `presentation` is more concise and more frequently used in the literature. `apps` may also sound similar to the `application` layer to someone new to the architecture.
I think, so far, this clarifies the direction in which the architecture should point to, unless others join the discussion.
PS: What do you think about Architectural Decision Records? Should a boilerplate have an opinion on this matter?
Hello again, @robertgr991.
Thank you very much for your reasoning and for giving your opinion regarding these issues. My opinion is the same as yours. As a summary, here are the basics we have established after adopting the use of Vertical Slice Architecture:

- This template/repository is intended to contain and represent a single Bounded Context.
- The `modules` directory will contain the different modules of the Bounded Context.
- The `shared` directory will contain the Shared Kernel.
- The `presentation` directory will contain the presentation logic for each specific technology: `rest`, `graphql`, `grpc`, etc. (You are right when you say that DDD does not specify concrete terminology for this layer; it depends on the architecture or terminology being used.)
As a result, the target structure is as follows:
.
├── modules
│   ├── module1
│   │   ├── application
│   │   ├── domain
│   │   └── infrastructure
│   └── module2
│       ├── application
│       ├── domain
│       └── infrastructure
├── presentation
│   ├── console
│   ├── graphql
│   ├── grpc
│   └── rest
└── shared
    ├── application
    ├── domain
    └── infrastructure
On the other hand, linking to the question that you asked in your previous comment, I would like to comment on the following:
Regarding the Architectural Decision Records, I think it is a good idea to contemplate them within the scope of this template. Not to establish concrete and very forced decisions, but as a guide, so that a person using this template knows how to create these definitions and conventions through these records. That is to say, we would create an Architectural Decision Records template, using the decisions made for this project as an example.
Concerning the Use Cases or Application Services, do you consider that Ports/Interfaces should be defined for each one of them? In my opinion, use cases should only orchestrate the business logic and are not subject to varying implementations. I have read many kinds of opinions regarding this topic, but I would like to know yours as well.
Recently I was thinking about adding both Domain Events and Integration Events management to the template. Documenting myself in various ways, reviewing business projects in which I have participated, etc., there is a topic on which I do not find an absolute consensus: the Integration Events. In which layer would you say it would be more convenient to handle them, in the `application` layer or in the `presentation` layer acting as another entry point to the application?
PS: Sorry for opening so many topics, but this way we can discuss interesting questions and give our opinion to improve this template.
Thank you very much for your time 🙂.
Hello!
Integration Events belong to the `application` layer and, in my view, should also be handled inside the `application` layer, because they are a way to communicate between our internal applications. The `presentation` layer can also be consumed by various external clients or even customers, for example, Webhooks.
It's good to discuss these various topics, as it sparks different ideas and also acts as a learning/recalling exercise.
Hello again!
Perfect, I think it is a good approach to use Lightweight ADR. In this repository there is a real example.
Ok, I agree with what you say. I think it is a good idea to include Interfaces for the Use Cases and thus not depend on a specific implementation. Even if the implementations are not going to vary much since the business logic orchestration is not something very variable, the Interfaces can be useful for other topics like, for example, integration, performance or load testing.
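A minimal sketch of that idea (hypothetical names, not the template's actual API): callers depend on a use-case interface, so a stub can replace the real orchestration for integration, performance, or load tests without touching any caller.

```typescript
// Contract for the use case; primary adapters depend on this, not a class.
interface CreateOrderUseCase {
  execute(orderId: string): string;
}

// Real implementation: would orchestrate domain logic (elided here).
class DefaultCreateOrderUseCase implements CreateOrderUseCase {
  execute(orderId: string): string {
    return `order ${orderId} created`;
  }
}

// Test double: swapped in for load or integration tests.
class StubCreateOrderUseCase implements CreateOrderUseCase {
  execute(orderId: string): string {
    return `order ${orderId} stubbed`;
  }
}

// A primary adapter only ever sees the interface.
function createOrderEndpoint(useCase: CreateOrderUseCase, orderId: string): string {
  return useCase.execute(orderId);
}
```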
I agree that Integration Events should be published and consumed at the `application` layer. In the end, this type of event belongs to the core of the domain and business logic, so it would not make sense to attach it to the infrastructure layer only to finally execute an action in the domain. The reason I asked the question is that, in many projects, I have seen developers add these consumers or event handlers in the presentation layer (for example, in a `presentation/kafka` folder). Through those handlers, they call the use cases and execute a specific action. In a way, I could accept it, since in the end it is an input/primary adapter that acts as a dispatcher for the business logic. But if we look at those integration events from the point of view that they belong to the execution of a distributed business logic, it doesn't make sense to capture them in the presentation layer.
About the last thing you mention on this point: "The presentation can also be consumed by various external clients or even customers. for example, Webhooks." Could you explain it more so I can understand it better? Thanks 🙂!
PS: As new topics arise, for example, applying Vertical Slice Architecture, creating Use Case Interfaces, creating Architectural Decision Records, etc., I will create different issues to implement these functionalities or improvements.
Thanks again!
I should have worded it better. I was trying to say that the `presentation` layer is expected to contain the logic of communication with the "outside world", while Integration Events are an internal communication. But I see your point. I've seen this implemented in different ways: in the `presentation` layer, or in the `infrastructure` layer of the Shared Kernel. I think this is more of an `infrastructure` problem than a `presentation` one, but I don't know if this should reside in the `shared` layer. What do you think about this?
In the end, this is more of a personal preference, as both `presentation` and `shared/infrastructure` would only act as glue for the actual Use Cases.
Hello again,
I've been giving this matter some thought. My opinion regarding where the Integration Events should be captured is as follows:
An Integration Event is an important part of a whole. Specifically, it is the mechanism that allows different Bounded Contexts to communicate with each other to finalize a larger piece of business logic that is often distributed. That said, the management of Integration Events is done in the `application` layer. By management, I mean deciding what action to execute in the consuming system as a result of the event. The concrete implementation of the consumption of those events, using one technology or another (`Kafka`, `RabbitMQ`, etc.), should be the responsibility of the infrastructure layer. Thanks to Dependency Inversion, the event handler would be modelled in the application layer using an interface but implemented in the infrastructure layer. Here are a couple of examples:
Regarding what we were saying that in some cases there were developers who handled those events in the presentation or infrastructure layer, I think that would not be entirely correct. If it were done in those layers, the objective of having the architecture or the folder structure itself self-explaining and reflecting the intention of the system would be lost. Also, the infrastructure and presentation layers are the most volatile. That is, they are subject to constant change, whereas the application and domain layers should be stable over time.
Another topic is that we may want to add event-based mechanisms as a presentation layer to execute complete use cases. In this case, after capturing an event in the presentation layer, the complete necessary business logic would be executed. But we would not be executing business logic that belongs to a use case that was initiated in another Bounded Context and is part of a whole. In short, it would be another way of exposing the application to the world. For example, we would have a REST API that by making a POST call /api/v1/users
would create a user and, on the other hand, we could have a Kafka topic, to which the presentation layer would be subscribed and receive events that reflect commands to create users. Maybe the difference is hard to see with respect to an Integration Event
, but for me they are different scenarios.
Now, another quite curious topic occurs to me: how would you implement Webhooks in an application that follows the type of architecture we are dealing with? For you, are Webhooks part of the presentation layer, or should they be treated as something in the domain? That is, if you had to call an endpoint registered by a subscriber (the webhook URL/callback) every time a certain event is triggered in the system, how would you model it? I can think of two main options:

- Create a `presentation/webhook` directory and there subscribe to the events and make the `POST` calls to the different webhook callbacks.
- Create a `module` of the system's Bounded Context and treat it as something related to the business logic of the application. So, I would subscribe to the events in the application layer and, based on them, make the calls to the different webhook callbacks through specific use cases.

I would like to know your opinion on this 🙂
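To make the second option concrete, here is a minimal sketch (illustrative names, not a definitive design): a webhook module whose use case reacts to events in the application layer and delegates the outgoing `POST` to a port implemented in infrastructure.

```typescript
// modules/webhook/domain: a subscription registered by an external consumer.
interface WebhookSubscription {
  eventName: string;
  callbackUrl: string;
}

// Port for the outgoing HTTP call; an infrastructure adapter would wrap an
// HTTP client. Modelled as a function type to keep the sketch small.
type HttpPoster = (url: string, payload: unknown) => void;

// modules/webhook/application: reacting to events is treated as business logic.
class NotifySubscribersUseCase {
  constructor(
    private readonly subscriptions: WebhookSubscription[],
    private readonly post: HttpPoster,
  ) {}

  onEvent(eventName: string, payload: unknown): string[] {
    const targets = this.subscriptions
      .filter((subscription) => subscription.eventName === eventName)
      .map((subscription) => subscription.callbackUrl);
    targets.forEach((url) => this.post(url, payload));
    return targets; // the callbacks that were notified
  }
}
```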
Feature Request Checklist

- `main` branch of the repository.

Overview
This is more of a discussion than a feature request.
What do you think about organizing the features using the architecture that is commonly known as "Vertical Slice architecture"?
The current structure is:
The proposed structure would organize every logic of a feature in a feature folder:
I think this will improve cohesion and reduce coupling by setting clear feature boundaries. Each boundary can specify exactly what should be shared with other parts of the system. Having all the logic of a feature in a feature root folder will also help when traversing the code and learning about what the feature has implemented.