dart-lang / language

Design of the Dart language

Consider moving _macros/macros into _fe_analyzer_shared for breaking changes #3904

Open davidmorgan opened 1 week ago

davidmorgan commented 1 week ago

We have a bunch of stuff to break--see milestone--and it's hard enough already without also worrying about breaking rolls / analyzer users.

WDYT? @jakemac53 @scheglov

davidmorgan commented 1 week ago

The downside is that it would break the JSON macro--at least I don't immediately see a nice way to keep it working. We could do a not-nice way like the "vendored package:macro" approach we had temporarily.

jakemac53 commented 1 week ago

I don't think we should do that, it was very hard to have any kind of reasonable working setup with macros previously...

davidmorgan commented 1 week ago

Hmmm we could continue to publish on pub for macros / experiments.

But breaking the dep from the analyzer for now would be a big help.

So:

- analyzer -> copy in _fe_analyzer_shared
- other, less critical things -> package on pub
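Concretely, the split might look like this (a hypothetical sketch: the package layout is from the proposal above, but the pubspec contents and version numbers are assumptions, not the real files):

```yaml
# analyzer/pubspec.yaml (sketch): no direct dependency on the macro
# packages; the macro code the analyzer needs lives as a copy inside
# _fe_analyzer_shared, which rolls atomically with the SDK.
dependencies:
  _fe_analyzer_shared: any  # resolved within the SDK; hypothetical

# A macro author's pubspec.yaml (sketch): still consumes the
# pub-published package for macros / experiments.
dependencies:
  macros: ^0.1.0  # hypothetical version
```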

Pretty much the same setup I'm suggesting for dart_model for now :)

jakemac53 commented 1 week ago

I think we should just push on solving the underlying problem here that requires the SDK tools and users' macros to be on identical versions: the serialization format and protocol not being backwards compatible.

jakemac53 commented 1 week ago

We could also consider pulling the packages out to a separate repo, so that we can iterate on the APIs and then pull them into the SDK in one go, instead of having to churn the analyzer/CFE/macros/_macros versions for every little change?

dave26199 commented 1 week ago

I realized that we are going to want a dependency from package:macros onto package:dart_model very soon, and that's going to make all this even harder: we would also have to pin analyzer and dart_model versions until dart_model is stable.

I think, pretty much, we need to consider package:macros, package:_macros and package:dart_model as a single unit, and work out a way to allow them all to churn a lot for a month or two while we get to a new stable point.

We can certainly pull _macros and macros over to the new macros repo as part of that, it seems logical, and gives us more control. I don't know if that's enough though, I still think "vendoring" inside _fe_analyzer_shared until stable again looks attractive.

I am a bit worried about the recent analyzer optimization work adding even tighter coupling to package:_macros. It's good that we figure out how to optimize, but the coupling is something we need to reduce, not increase :) Possibly we have pushed enough on that for now.

jakemac53 commented 1 week ago

> I don't know if that's enough though, I still think "vendoring" inside _fe_analyzer_shared until stable again looks attractive.

Can you explain more exactly what makes it attractive? I don't quite understand what problem it fixes, and it creates a bunch of new problems (makes macro hacking extremely fragile/finicky). There are a lot of people out there writing macros right now, I see new ones cropping up all the time.

davidmorgan commented 1 week ago

It's possible I misunderstood something.

Where I think we're headed is that there will be dependencies analyzer -> dart_model and analyzer -> macros -> dart_model. The analyzer -> macros -> dart_model chain looks to me like it will force dart_model to be published with tight version constraints, like macros is today.

I think this will create a lot of extra work. I think we want dart_model to be in v0 for a while, where we can make breaking changes multiple times per week, and having to go through deprecate -> publish -> update -> delete -> publish is going to make that much slower.
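To illustrate the "tight versions" problem (a hypothetical sketch; these version numbers are made up): because breaking protocol changes require the SDK and user code to agree exactly, the published package ends up with exact pins rather than caret ranges, and every dart_model break then forces a coordinated republish of everything downstream:

```yaml
# Hypothetical pubspec.yaml for a published package:macros.
name: macros
version: 0.1.3-main.0  # made-up version
dependencies:
  # An exact pin (no ^ range) on dart_model: each breaking
  # dart_model release forces republishing macros, and anything
  # that depends on macros, in lockstep.
  dart_model: 0.0.2  # made-up version
```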

Re: problems hacking on macros, I think that is inevitable. The version people are using is not stable, the code they are writing is not against the final API. It is very expensive to support an unstable API.

I think we will benefit a lot from resetting expectations here one way or another. For example we could add a section to the package:json docs with "known good" ranges for Flutter main that people should use, and set the expectation that it will simply be broken outside those ranges.

davidmorgan commented 1 week ago

It occurs to me that there is another path we could take, analogous to what I did in the exploration code with the dart_model_analyzer_service package.

Instead of the analyzer/CFE directly interacting with dart_model+macros as a dep:

```mermaid
flowchart LR
    analyzer --> dart_model/macros
    CFE --> dart_model/macros
```

We break the dependency by introducing additional packages:

```mermaid
flowchart LR
    analyzer_macros --> analyzer
    analyzer_macros --> dart_model/macros
    cfe_macros --> dart_model/macros
    cfe_macros --> CFE
```

Then the analyzer and CFE need to expose hooks/internals to integrate with macros, but they don't need to contain most of the actual code.

To actually ship we do need to pull these libraries into the binaries, but we might be able to avoid creating circular dependencies with the packages:

```mermaid
flowchart LR
    ab[Analyzer binary] --> analyzer_macros
    ab[Analyzer binary] --> analyzer
    analyzer_macros --> analyzer
    analyzer_macros --> dart_model/macros
    cfe_macros --> dart_model/macros
    cfe_macros --> CFE
    cb[CFE binary] --> cfe_macros
    cb[CFE binary] --> CFE
```

This would then be a fully decoupled design where analyzer, CFE and macros are all in control of their own API surface.
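As a rough sketch of what "exposing hooks" could mean here (all names below are hypothetical, not existing analyzer/CFE APIs): package:analyzer would define a small injection point, and package:analyzer_macros would implement it, so the macro machinery stays out of the analyzer itself:

```dart
// Hypothetical hook surface in package:analyzer. The analyzer only
// knows about this interface, never about dart_model/macros directly.
abstract class MacroSupport {
  Future<String> expandMacros(String libraryUri);
}

// Hypothetical host object; the embedding binary injects an
// implementation at startup (null means macros are disabled).
class AnalysisHost {
  MacroSupport? macroSupport;
}

// Hypothetical glue in package:analyzer_macros, which depends on both
// package:analyzer and dart_model/macros and plugs itself in.
class AnalyzerMacroSupport implements MacroSupport {
  @override
  Future<String> expandMacros(String libraryUri) async {
    // Talk to dart_model/macros here; stubbed out for the sketch.
    return '/* augmentation for $libraryUri */';
  }
}

void main() {
  // The "analyzer binary" wires the pieces together at startup.
  final host = AnalysisHost()..macroSupport = AnalyzerMacroSupport();
}
```

The design choice this illustrates is the one in the diagram: only the top-level binaries depend on both sides, so the package graph stays acyclic.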

I'm not sure that this approach is easily compatible with passing dart_model through the existing macros API, which is what I was planning to do as a way to get some first end-to-end experiments running and drive the design+implementation.

But maybe it's a workable alternative: it does make it possible to experiment with integrations with analyzer+CFE, and they can be low risk, since any breakage would happen in a repo we don't even have to publish. We would however have to add additional hooks into analyzer+CFE to make progress.

cc @scheglov @johnniwinther for their thoughts :)

jakemac53 commented 1 week ago

I like the idea of that decoupled design a lot 👍

scheglov commented 1 week ago

It looks to me like clients of the analyzer as a library will not be able to analyze Dart code that contains macro applications, because they don't go through the "analyzer binary".

bwilkerson commented 4 hours ago

I agree with Konstantin's assessment. This proposal doesn't appear to meet the needs of the analyzer package. Please let us know if we're missing something here.

davidmorgan commented 4 hours ago

@bwilkerson we discussed in a meeting but I didn't update the issue, sorry about that.

This is mostly about what we do while the macros code is unlaunched; to launch the analyzer package with the code we would indeed need to make it a dep or copy/move it into a dep. It can still be conceptually decoupled, as proposed, in the sense that we only refer to it minimally from package:analyzer in order to inject it.

But at a higher level there wasn't a huge amount of excitement for the idea--except agreement that we need whatever works to let us iterate on the experimental parts of the macro code without breaking what we have. I expect to come up with some concrete proposals, i.e. code people can look at :) soon and to move the discussion forward that way.

Thanks :)

scheglov commented 17 minutes ago

> This is mostly about what we do while the macros code is unlaunched; to launch the analyzer package with the code we would indeed need to make it a dep or copy/move it into a dep.

If instead of keeping _macros in the SDK we move it outside while the feature is unlaunched, this will make iterating on it harder. Instead of a single atomic commit into the SDK, we will have to do a dance of publishing and updating dependencies.

> It can still be conceptually decoupled, as proposed, in the sense that we only refer to it minimally from package:analyzer in order to inject it.

I have doubts about this. There will be some code that is specific to both the analyzer and the macros, and this code will depend more than minimally on the implementation details.

jakemac53 commented 10 minutes ago

> If instead of keeping _macros in the SDK we move it outside while the feature is unlaunched, this will make iterating on it harder. Instead of a single atomic commit into the SDK, we will have to do a dance of publishing and updating dependencies.

I think it would help because we can iterate quickly outside the SDK without having to constantly publish the macros package for every little change. Most changes are irrelevant to the analyzer/CFE anyways.

Then, when we merge into the SDK, we sometimes need to make atomic changes to the analyzer/CFE. There is no additional publishing dance (for now, _macros would still be an SDK dependency).

jakemac53 commented 9 minutes ago

It would also allow publishing macros prior to merging it into the SDK, so there isn't a lag between changes landing in the SDK and a version of macros being available. That lag can sometimes be a day or more, and can break CI workflows that run on main (as well as Flutter rolls).