Closed FabienPapet closed 1 year ago
Injecting the whole dependency makes it harder to unit test the service, and increases the coupling between the dependency and the dependent service (meaning: a change in the dependency will break the service).
Let me put it this way: would you think that injecting the whole service container is a good idea? I suppose not. Then how is injecting the context class significantly different from that?
Injecting the whole dependency makes it harder to unit test the service
Not at all: you can have interfaces, mocks and so on. The fact is we have god-like objects doing everything. Those services need to be split and removed in the future. Do you really prefer injecting legacy core classes instead? Does it make the code more testable in your opinion?
would you think that injecting the whole service container is a good idea? Then how is injecting the context class significantly different from that?
No it's not. The `AdapterContext` object is a god object that should not even exist. We need more scoped services so we can inject them.
@FabienPapet
so, instead of @=service("prestashop.adapter.legacy.context").getContext().country.id
you'd like to have a service that provides information about the country only,
from
@prestashop.adapter.legacy.context
to:
@prestashop.context.country
(or something like that)
is that correct?
I found some old discussion: https://github.com/PrestaShop/PrestaShop/pull/28716#issuecomment-1151299845
@kpodemski it depends.
The `@prestashop.adapter.legacy.context` here is a bad example, because it can be confused with the context object used everywhere in PrestaShop. Also, `@prestashop.context.country` is in this case a model, and should not be a service.
I'll take another example.
- '@=service("prestashop.adapter.legacy.configuration").get("PS_CURRENCY_DEFAULT")'
A `CurrencyConfiguration` service could be injected instead. Example:
```php
class CurrencyConfiguration
{
    private $configuration;

    public function __construct(Configuration $configuration)
    {
        $this->configuration = $configuration;
    }

    public function getDefaultCurrencyId(): int
    {
        // Cast needed: the configuration store returns scalar strings.
        return (int) $this->configuration->get('PS_CURRENCY_DEFAULT');
    }
}
```
After this, we can refactor the service to use the `CurrencyConfiguration` instead of the scalar id value:
- '@core.currency_configuration'
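For illustration, the wiring could look something like the sketch below. The two service ids quoted above come from this thread; the consumer service name is hypothetical:

```yaml
services:
    # Scoped service wrapping the legacy configuration store
    core.currency_configuration:
        class: CurrencyConfiguration
        arguments:
            - '@prestashop.adapter.legacy.configuration'

    # Hypothetical consumer: it now receives the scoped service instead of
    # a scalar resolved at container build time
    some_bundle.price_calculator:
        class: PriceCalculator
        arguments:
            - '@core.currency_configuration'
```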
I agree with @eternoendless's arguments, but I also agree with @FabienPapet :smile: Still, I'm leaning more towards @FabienPapet's proposal. I think if we had smaller, better-designed services there wouldn't be a problem injecting them and calling them when needed, instead of calling methods directly in the service definition during container build. But if we JUST replace current definitions, we will end up depending too much on the legacy context and other legacy classes instead of scalar values, so if/while/before doing this refactoring we would also need to structure those "better designed" services (like the CurrencyConfiguration example Fabien provided) and inject them instead.
@eternoendless
Injecting the whole dependency makes it harder to unit test the service
I agree with @FabienPapet, this is not a problem since you can use interfaces, mocks, doubles and so on.
and increases the coupling between the dependency and the dependent service (meaning: a change in the dependency will break the service).
Currently the coupling already exists, not between "dependency and dependent service" but between the dependency and the DI configuration, which is no better. Services are interdependent, that's normal; the DI configuration should not be.
Let me put it this way: would you think that injecting the whole service container is a good idea? I suppose not. Then how is injecting the context class significantly different from that?
I don't think this is a legit analogy. When you inject a scalar, either it comes from the code and it's fine to inject it directly (for example using %param%), or it comes from somewhere else, which includes the possibility of using infrastructure to fetch it. Here it's definitely the case: we need to fetch the value from the database (it could be from a file or wherever you want). That said, it's normal to inject a service instead of a value when it includes the necessary infrastructure code to fetch the value.
Moreover, and even better: if we use an interface we can use polymorphism and switch from one source to another with a single line.
Injecting the whole dependency makes it harder to unit test the service
I agree with @FabienPapet, this is not a problem since you can use interfaces, mocks, doubles and so on.
You're assuming it's easy to hide the whole legacy code behind nice abstractions. This is a mistake others have already made, which gave us a significant number of bad abstractions and code duplication (the `Adapter` namespace), because every time you want to refactor one thing you end up having to refactor the whole dependency chain, increasing the project's complexity while providing close to zero added value to the project. Please read: this article and ADR-14.
My conclusion is that for a project like this, it's better to refactor than to create temporary abstractions. Temporary code tends to remain for a veeeeeery long time.
and increases the coupling between the dependency and the dependent service (meaning: a change in the dependency will break the service).
Currently the coupling already exists, not between "dependency and dependent service" but between the dependency and the DI configuration, which is no better. Services are interdependent, that's normal; the DI configuration should not be.
Someone needs to "connect the dots" eventually. I agree that having calls in DI is not the right solution and abstractions are needed, but delegating more responsibility to services by injecting the whole component is an even worse alternative.
Let me put it this way: would you think that injecting the whole service container is a good idea? I suppose not. Then how is injecting the context class significantly different from that?
I don't think this is a legit analogy. When you inject a scalar, either it comes from the code and it's fine to inject it directly (for example using %param%), or it comes from somewhere else, which includes the possibility of using infrastructure to fetch it. Here it's definitely the case: we need to fetch the value from the database (it could be from a file or wherever you want). That said, it's normal to inject a service instead of a value when it includes the necessary infrastructure code to fetch the value.
According to your explanation, injecting a Repository in a service class would be okay. But it's not. You don't inject the guy who knows the address of the guy who knows the answer, so that the service can ask them a question. You inject the answer. From my point of view, the application layer (either the controller or CQRS CommandHandler) is the one that puts the pieces together. Services should be basically unaware that other services exist.
@FabienPapet's CurrencyConfiguration sounds nice but it might add a big amount of boilerplate because there are lots of configurations out there.
Please read: this article and ADR-14.
I personally think ADR 14 should be reverted for many reasons that I'll not explain here, as it is off-topic. The Adapter namespace is fine and we should continue to use it to create bridges to the ObjectModel. I also already read your articles, no need to link them again.
... while providing close to zero added value to the project.
That's not true. Relying on legacy code will make the code harder to test, to read, understand and to maintain because it is not tested either. Even worse, with ADR-14 you are allowing legacy code and new migrated code to be mixed, creating the hell you don't want. With SOLID principles you can easily mock your dependencies and add unit tests.
You're assuming it's easy to hide the whole legacy code behind nice abstractions.
It is easy. More than ever, because we have all the use cases of a 10-year-old software, at least for most parts of it.
My conclusion is that for a project like this, it's better to refactor than to create temporary abstractions.
This debate is not over, and it is your POV without giving any arguments. Moreover, this is not (and has never been) about creating temporary abstractions. It is about creating well-designed components from the ground up that we can use together.
According to your explanation, injecting a Repository in a service class would be okay. But it's not. You don't inject the guy who knows the address of the guy who knows the answer, so that the service can ask them a question. You inject the answer
This is again your POV without giving any arguments. Basically, what you are saying is just not how dependency inversion works, nor how Symfony works.
I see you are talking a lot about complexity. The complexity does not come from SOLID usage and Symfony DI, but from definitions like:
'@=service("prestashop.adapter.legacy.configuration").getInt("PS_SHOP_DEFAULT")'
Off-topic, but one absolute rule when you migrate a software is that you can call new code from the legacy, and NEVER call legacy code from the new one. When you create new code using legacy functions/features, you are basically creating a migration that will never end. I can also give you some links about how Sylius works, how it is (well) architected, and how it can inspire us here and here.
Services should be basically unaware that other services exist.
You can't even imagine how much I disagree here. Again: your POV, argument from authority, etc.
Hello there,
I have read both sides of the discussion. First I would like to share my take about the two main positions:
@FabienPapet has been working on PrestaShop for a few months now and has identified areas where the code we are writing today is going to prove problematic in the future. For example '@=service("prestashop.adapter.legacy.context").getContext().country.id'
I agree with him (and @lartist) that this is something not reliable and not future-proof. It is fragile, error-prone, and not developer-friendly.
By fragile, what I mean is that it is highly probable that it breaks if we refactor any related piece of code. It's not robust. @mflasquin has already worked on a bug where the problem came from this service definition.
So IMO this practice has now been proven wrong. We have to find a better way.
I'm also quite convinced by
That's not true. Relying on legacy code will make the code harder to test, to read, understand and to maintain because it is not tested either.
This has been proven also by our work on the migration: many times the new code being written had to fix issues introduced by the legacy code being used at a lower level.
On the other side, Pablo has been working on the project for many years now and I was by his side. Together we have seen and initiated many refactoring attempts to improve PrestaShop codebase.
Many times we could not achieve what we wanted. After attempting to refactor a piece of code A, we realized that to do it well we also had to modify pieces of code B, C and D, and in the meantime the changed code introduced 4 bugs that needed bugfixes. So the refactoring was actually 3x bigger than expected, then 5x, then 10x, and at some point it was so expensive to do the whole thing that the added value was not worth the cost (cost = developer time).
The cause of this situation is the complexity of PrestaShop: it is massive, everything is coupled together, and it is fragile. As I said before, fragile for me means: modifying a tiny bit is likely to introduce plenty of issues in other, less well-known areas of the code. It's hard to refactor anything without breaking an unknown number of behaviors. Obviously, the fact that the test coverage of this legacy part is very low does not help.
This is why Pablo is very careful with global changes like this one and challenges them. Because he knows what I know: nothing is easy with PrestaShop.
Fabien you say
It is easy.
Unless you have been hiding from us that you were secretly working on PrestaShop for many years, you cannot say that. You cannot say something is easy or hard before trying to do it. You have to trust us on this one: we've been there before you, and we've been fighting with the PrestaShop system for years, so we've been battle-trained.
If I could 'give' you my experience on PrestaShop to give you the full context and help you understand why we're so careful, I would, but Elon Musk has not finished his Neuralink, so I cannot upload my memories to you yet. So you just have to trust us.
Given my experience and understanding of PrestaShop, I don't think you can say that anything related to deep refactoring is easy. Either you're a genius, or your definition of 'easy' is not the same as mine.
it is your POV without giving any arguments
I could tell you about many refactoring attempts that we tried and that never finished because of the complexity of PrestaShop, but it would take hours and... would you believe us? Please just trust us on this and don't discard Pablo's advice: it comes from battle-trained experience.
Don't worry, it's not because we faced obstacles before that we're going to say "it's useless guys, just abandon all hope", but it explains why we're careful before launching into something and maybe wasting days of developer time. What I have learnt with PrestaShop is that you cannot change big systems the way you change small systems: the same practices do not apply. It's like how you don't manage a database the same way under low or high traffic: for high traffic you introduce sharding, replication and caching, while low traffic does not require that.
For me, the goal of this discussion is to find the right replacement for '@=service("prestashop.adapter.legacy.context").getContext().country.id'.
My wish for this discussion is to find a reliable way to write the new code, so that the code we write tomorrow will be more robust, more reliable and more long-term than the code we wrote yesterday.
There are some complicated parts in this project, I agree with you @matks (did someone say CartRules and product management?). I also count on your experience and that of all the others @eternoendless, @jolelievre @kpodemski @atomiix to help me move this project to the next level in terms of code quality and maintenance (and also all @PrestaShop/prestashop-core-developers). The problem I had with these arguments is that no examples were given; they were purely opinion-based. I have worked on many big Symfony projects as well (even e-commerce), and none of these projects was using that "style" of service definition.
I think we do have the same definition of what easy is. PrestaShop is a big project, we all agree on this. There are hundreds of features and thousands of LOC. What I meant by "easy" is that of course there are some complicated parts (products, orders, modules), but many times things are simple and we made them complicated ourselves.
Yes, we are good at creating useless abstractions and testing them without any added value. If you want an example, just look at the PrestaShop\PrestaShop\Core\Domain\Customer\ValueObject namespace and its useless classes named:
BirthDay
Firstname
Lastname
The customer is only 1 MySQL table; since when do we need 10+ files to handle Customer management? Want another example?
\PrestaShop\PrestaShop\Core\Domain\Currency\ValueObject\CurrencyIdInterface
could have just been an integer. Yes, it is simple to create modern code to fetch langs, shops, currencies, etc. We all have to think simple. There are some simple parts of the software that can easily be de-coupled, at least for read operations: shop, lang, currencies, customers, employees, etc. It is easy to start decoupling them from the ObjectModel part.
everything is coupled together
And that's why we need abstractions and to stop adding coupling. You already know I've presented something in that direction, and the response I was given was that it adds complexity. Complexity comes from coupling with the legacy, not from abstractions.
We cannot just continue to copy and paste code just because "we've always done that", adding more and more coupling. That's why I'm against almost all legacy code added to the project, and I think horizontal migration is a waste of resources. It will make removing code impossible.
I took the LegacyContext as an example, but there are so many others... The migration started at least 4 years ago, and I still don't understand why in 2022 we don't have a new AdminContext class to fetch the language, and we still rely on the LegacyContext. This should have been the first migrated part, and the same goes for critical classes like shop, language, etc. Where is the PR to create a new Context that we can use during (and after) the migration?
I know this conversation is out of the initial topic of this ADR, but we'll need to have it at some point.
If I go back to your example @matks
'@=service("prestashop.adapter.legacy.context").getContext().country.id'
There is already a shortcut that exists and is better:
'@=service("prestashop.adapter.legacy.context").getCountryId()'
But I'd rather inject the service instead and make the call when I need it, as proposed in this ADR. Maybe a solution could be creating a new AdminContext, injecting it in the core namespace, with its implementation inside an adapter. But this implementation would use only new objects, not legacy ones.
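A rough sketch of that direction: a Core-level interface with an adapter implementation behind it. Every name below is hypothetical; none of these classes exist in PrestaShop, they only illustrate the proposed split.

```php
<?php

// Hypothetical Core-level interface: new code depends on this,
// never on the legacy context directly.
interface AdminContextInterface
{
    public function getLanguageId(): int;

    public function getCountryId(): int;
}

// Hypothetical adapter implementation. Here it is backed by a plain array
// standing in for whatever actually provides the values (the legacy
// context today, new objects after the migration).
final class ArrayAdminContext implements AdminContextInterface
{
    public function __construct(private array $values)
    {
    }

    public function getLanguageId(): int
    {
        return (int) $this->values['language_id'];
    }

    public function getCountryId(): int
    {
        return (int) $this->values['country_id'];
    }
}
```

A service needing the country id would type-hint `AdminContextInterface`, so swapping the legacy-backed implementation for a new one later becomes a one-line change in the service definition.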
First of all, I would like to thank you @FabienPapet for raising important subjects. I have the feeling that it's going to be a long discussion :-) but, ok, let's do that, this project needs it.
Inconvenient: None as this is a continuous work and this does not require any work.
It's tough to agree with that. Don't you think introducing an ADR like that would change the scope of every new pull request in the project, drastically increasing the time to market?
Even though I understand the idea, I don't see any plan. How would you approach working on the project after accepting this ADR? Who would be responsible for creating these smaller services? Do you want a working group making them regularly? Do you expect people submitting new PRs to create them? If we want to keep adding services and using them in the new code, at some point we have to refactor old fragments of the software. How would you tackle this?
I agree with Pablo on this one:
This is a mistake others have already made which gave us a significant number of bad abstractions and code duplication (the Adapter namespace), because every time you want to refactor one thing you end up having to refactor the whole dependency chain, increasing the project's complexity while providing close to zero added value to the project.
Again: I understand the concept, and I like the idea of having more, smaller services. I can see how useful it could be and how it could ease some things. I see that the majority of people here also like the idea.
The thing is, it's just an idea; now let's discuss the plan, given the manpower the project has right now, the current release cycle, and the things the business wants to see in the software.
See again what @matks wrote:
Many times we could not achieve what we wanted. After attempting to refactor a piece of code A, we realized that to do it well we also had to modify pieces of code B, C and D, and in the meantime the changed code introduced 4 bugs that needed bugfixes. So the refactoring was actually 3x bigger than expected, then 5x, then 10x, and at some point it was so expensive to do the whole thing that the added value was not worth the cost (cost = developer time).
Do you think you could devise a plan that wouldn't be the next endless loop of refactoring attempts? :-)
First of all, I'm sorry if what I say or have said comes across as harsh or arrogant, or as just _argumentum ad verecundiam_. My replies are often cut short because explaining all the details in writing takes a very long time (which I usually don't have much of), both for me and for the reader. Hell, it took me over a year to write my "2019 and beyond" series. But I'll try anyway.
I've been in your shoes. I too have found myself pulling out my hair and thinking "oh my god this is SO obviously wrong, I can't possibly understand why it wasn't done differently; it would be SO EASY to do it right and still they did it wrong!". But after having spent some time on the project, I realized that things were actually much, much harder than they look.
The first thing to understand is that this project is substantially different from typical projects from private companies, and it's probably very different from any other project you've worked on before. Most developers work on applications developed in controlled environments, led by highly-aligned teams working in high-frequency development cycles. That is the opposite of PrestaShop.
Contrary to typical software projects, PrestaShop is an end-user-application/development-framework hybrid; it runs in very diverse environments that we don't control, and it is developed by a heterogeneous, loosely aligned community where everyone follows their own agenda. This all requires fairly long development cycles because, contrary to SaaS apps, we don't push to production: people decide to adopt the software. That changes everything.
When you change something in a typical proprietary SaaS software, you know all the boundaries beforehand. You can easily account for all the impacted teams and services, and discuss with them to make sure you account for everything. After that, you simply perform the change, deploy it, and that's basically it. Worst case, you need to think of a deployment strategy to update your data, and perhaps even write some documentation. It's easy.
When you make any significant change in PrestaShop, you need to take a lot of things into account. How will it impact developers developing, customizing, and extending it? Will they have to learn a new technique? How will they become aware of it? Will it be easier or more delightful to use compared to what they know now? Will there be a transition, and if so, how long will you keep the old system in place? How much will it cost people to upgrade? Make a bad decision, and people will abandon your platform.
And that's only the beginning. PrestaShop is massive, there's lots of technical debt and complexity hidden everywhere. If you don't limit your refactoring boundaries well, you start off thinking you'll refactor a single class and end up spending three months refactoring 30+ services to fit your change and/or your new rules and going back and forth with QA who will find regressions that you wouldn't have imagined you'd encounter.
Another big deal here is that PrestaShop needs to be stable, extensible, and customizable at the same time. From an engineering standpoint it means that we need to find the right balance between simplicity, stability, and flexibility. Stability requires the system to be strict about what you do with it. But strictness is the enemy of flexibility, unless you create a complex system of small components. However, as you go that way, you get away from simplicity. People say Magento is too complicated, but still is the solution of choice for many big companies. Those people tend to choose PrestaShop because it strikes a better balance. So we need to be very careful about where we want to be and how to get there, or we can lose our community on the way.
The problem that led to ADR-14 is that the Adapter-based architecture failed its purpose. It was meant to be a facade exposing good interfaces, but it ended up becoming a collection of proxies to legacy components. To expose nice interfaces you had to either re-engineer the whole system in the Adapter namespace, or isolate each component behind dedicated "data providers". However, this proved to be extremely time-consuming, making it impossible to deliver anything without spending as much as 10x your dev time building bridges to already existing features. So developers had to cut corners.
In SaaS software, you can refactor as many times as you need because you don't care about backward compatibility and community adoption; your transition can be done in a couple of months. In PrestaShop, we spent 6 years without a major version, avoiding BC breaks as much as possible. Now we can accept BC breaks more often, but we need to do it wisely because of the cost to our community. Transitions take years and need to be well thought out, carefully planned, and well-documented.
During my years working with PrestaShop, I realized a couple of things:
Because of that, my conclusion is that:
1) Temporary code should be avoided as much as possible, and when that's not possible, it should be marked as internal and well-documented. Any intermediate layer, any "temporary" component you add will likely stay there for years until it can be transitioned out. A system with many different components and rules is more difficult to understand, so module developers or system integrators will have a harder time understanding it and will likely use it in ways you didn't intend, turning your "temporary code that you meant to delete soon" into a dependency for them, making it even more difficult to transition out.
2) Strive for coherency. Avoid having more than one component that does the same thing, or else people won't know which one to use, and will likely pick the wrong one. On top of that, you'll have to maintain both and make sure the validations and business rules are enforced exactly the same way on both.
3) If you add too many abstraction layers, complexity will skyrocket.
Yes we are good at creating useless abstractions and test them without any added value if you want an example just look at the PrestaShop\PrestaShop\Core\Domain\Customer\ValueObject namespace
\PrestaShop\PrestaShop\Core\Domain\Currency\ValueObject\CurrencyIdInterface that could have just be an integer.
I disagree entirely. Value objects (also known as value types) are widely regarded as a good practice. Have a look at this article and this other article to learn why.
Yes it is simple to create modern code to fetch langs, shops, currencies... etc.. We all have to think simple.
Simple is good, but it's usually not stable. Fetching the value from the database with SQL queries is simple. Using Doctrine ORM is simple. Simple AND stable is not easy. That's why we build services and abstractions.
There are some simple parts of the software that can easily be de-coupled at least for read operations
I think that Repositories are meant to strike a good balance. It's still low-level though.
It is easy to start decoupling them from the objectmodel part.
Everything depends on ObjectModel, if you go that way you'll have to refactor everywhere to ensure consistency. But I hear options!
Complexity comes from coupling with the legacy, not from abstractions.
Have you had a look at presenters in FO?
We cannot just continue to copy and paste code just because "we've always done that"
No, that's definitely not my point. My point is being realistic about what we can deliver, ensuring the architecture doesn't become chaotic, and concentrating our efforts where there's maximum ROI.
I still don't understand why in 2022 we don't have a new AdminContext class to fetch the language, and we still rely on the LegacyContext.
Because it's used everywhere and we haven't had the time to look into it. But I hear proposals!
But I'd rather inject the service instead, and make the call when I need it.
Yes, because it's easier, but coupling services together is a bad idea, and is arguably against the interface segregation principle ("no code should be forced to depend on methods it does not use"). For example, if you inject the whole Configuration, your component depends on the state of Configuration. To know which configurations need to be set, you need to reverse-engineer your component to find out which configurations it uses. Worse, if your component passes the whole configuration further down into third-party service calls, you can't possibly know which other configurations those services depend on. That's shared state, and it is the source of lots of headaches.
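To make the interface-segregation point concrete, here is a minimal sketch; all names below are made up for illustration. The consumer depends on a one-method interface rather than on the whole configuration service, so a test can substitute any double and the dependency surface is explicit in the constructor signature.

```php
<?php

// Hypothetical narrow interface: the consumer declares exactly what it needs.
interface DefaultCurrencyProviderInterface
{
    public function getDefaultCurrencyId(): int;
}

// Hypothetical implementation backed by an in-memory key/value array,
// standing in for the real configuration service.
final class InMemoryCurrencyConfiguration implements DefaultCurrencyProviderInterface
{
    public function __construct(private array $configuration)
    {
    }

    public function getDefaultCurrencyId(): int
    {
        return (int) $this->configuration['PS_CURRENCY_DEFAULT'];
    }
}

// The consumer never sees the rest of the configuration: reading its
// constructor is enough to know everything it depends on.
final class PriceFormatter
{
    public function __construct(private DefaultCurrencyProviderInterface $currencyProvider)
    {
    }

    public function defaultCurrencyId(): int
    {
        return $this->currencyProvider->getDefaultCurrencyId();
    }
}
```

Unit-testing `PriceFormatter` then requires only a trivial stub of the one-method interface, not a mock of the entire configuration component.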
Again: I understand the concept, and I like the idea of having more, smaller services. I can see how useful it could be and how it could ease some things. I see that the majority of people here also like the idea.
The thing is, it's just an idea; now let's discuss the plan, given the manpower the project has right now, the current release cycle, and the things the business wants to see in the software.
I fully agree with Krystian here. There needs to be a plan, and we need to agree that it's important to do this now.
I acknowledge that having calls in service configuration is not good. We just need to find the right solution. The way I see it, either we're configuring services with data that should instead be passed along as a runtime parameter, or we're missing services providing that precise information (which could be injected).
There is a real need to find a solution for this, because of this:
I had a discussion about how to inject configurations with @jolelievre a few days ago. Perhaps we could use a special annotation for that, like `@parameters` works. Perhaps, if we allowed environment values to override configurations, we could create a special env variable processor that falls back to configurations in the database. Or perhaps there is another solution, but I'd really like to avoid injecting the whole configuration into services.
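For illustration, a definition using such a fallback might read like this; the `ps_config:` processor prefix and the service name are purely hypothetical, nothing like them exists today:

```yaml
services:
    some.service:
        arguments:
            # Resolved from the PS_CURRENCY_DEFAULT environment variable if set,
            # otherwise (via the hypothetical processor) from the configuration
            # stored in database, at runtime rather than at container build
            $defaultCurrencyId: '%env(int:ps_config:PS_CURRENCY_DEFAULT)%'
```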
This is not only about configuration. Can we all agree that we should get rid of this annotation everywhere and move this forward?
There are easy solutions to create scoped configuration, like CurrencyConfiguration, DimensionsConfiguration, ShopConfiguration ?
There are easy solutions to create scoped configuration, like CurrencyConfiguration, DimensionsConfiguration, ShopConfiguration ?
Sounds about right
Soooo? Where is this going?
I've been giving this some thought for a while. I wasn't able to find a definitive answer yet, but here are some thoughts.
For the cases where configuration is injected, we could add a compiler pass imitating the %env()% annotation or, even better, extend it to fall back to configuration when the env variable isn't set.
There are cases where runtime variables are injected into services, like shop id or user id. I think these cases should be analyzed individually to evaluate the most appropriate approach according to the specifics of each service:
One example is MultistoreFeature, because it allows reading the state of multistore, but also enabling/disabling it.

This is going nowhere, and it's been 9 months already. And as nobody wants to comment/vote on this, I'll close this issue.
Better to spend my time where things actually move.
Context
Today we have many different ways of doing dependency injection in PrestaShop. The purpose of this ADR is to unify them for easier code understanding and maintenance.
There are many problems when we make calls in the DI configuration. The main problem is that logic ends up in the YAML files, and we can't track function calls in the container, which makes refactoring harder when we try to find usages. Also, we can call the same service many times to fetch different values. Moreover, calls are made even if we don't use the result.
This is also a DX problem, because autowiring can't be used and it goes against ADR 19.
Proposition
The proposition is to stop using the `@=service` notation and inject the used service instead. However, we must be careful to inject scoped services, to avoid breaking the single responsibility principle. Likewise, legacy services should be avoided whenever a new service does the same thing. Injecting legacy services along with legacy classes must also be avoided.
Advantages
Inconvenient
TLDR
Moving from this :
So it can be transformed into:
so the code will be easier to understand and to maintain.
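Using the configuration example discussed above, the transformation would look roughly like this; the `core.currency_configuration` id is the one proposed earlier in the thread, and `some.service` is a hypothetical consumer:

```yaml
# Before: an expression evaluated during container build, invisible to
# autowiring and hard to track when searching for usages
services:
    some.service:
        arguments:
            - '@=service("prestashop.adapter.legacy.configuration").get("PS_CURRENCY_DEFAULT")'

# After: a scoped service is injected, and the value is fetched at runtime,
# only when actually needed
services:
    some.service:
        arguments:
            - '@core.currency_configuration'
```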