ematipico opened 5 months ago
Thanks for this outline!
In general I agree with the two-pass approach. I think indeed there should be one pass that collects data we need and one that does analysis with the data available over all files. As you mentioned, between the passes there may indeed be a step that detects and defines dependencies as well; the creation of the Project data structure.
I saw one part that made me cautious:
- try to compute data only when requested;
- don't overdo it, try to compute only the data requested;
- think about queries, where a consumer asks for data, and the provider delivers them (and ideally caches them);
I am somewhat afraid this might be approaching the direction from the wrong angle. Rather than querying from the place where the data is needed and caching results, I think it’s better to build (most of) the data from the ground up during the first phase, so that it’s already available when the second phase starts. This does mean we end up computing some information we never query, but I see some convincing reasons we may want to take this approach anyway:
And finally, if the goal is to avoid doing unnecessary work, I think it’s best to split the types of data we gather into various services, and only activate the services that have one or more rules registered for them. But if we make data gathering itself too fine-grained, I think we’ll make things too complex for ourselves, and I’m skeptical the work we save would outweigh the benefits of better parallelization.
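The service-activation idea could be sketched roughly like this (all names here are hypothetical, not Biome's actual API): services declare what they extract, and only the ones some enabled rule asked for get activated.

```rust
use std::path::Path;

// Hypothetical sketch (not Biome's actual API): a per-file extraction
// service that is only activated when an enabled rule needs its data.
trait FileService {
    fn name(&self) -> &'static str;
    fn extract(&self, path: &Path, content: &str);
}

struct ImportExtractor;

impl FileService for ImportExtractor {
    fn name(&self) -> &'static str {
        "imports"
    }
    fn extract(&self, _path: &Path, _content: &str) {
        // A real implementation would parse `content` and record imports.
    }
}

// Only the services that some enabled rule asked for get activated;
// everything else is skipped entirely during traversal.
fn active_services(requested: &[&str]) -> Vec<Box<dyn FileService>> {
    let all: Vec<Box<dyn FileService>> = vec![Box::new(ImportExtractor)];
    all.into_iter()
        .filter(|s| requested.contains(&s.name()))
        .collect()
}

fn main() {
    let services = active_services(&["imports"]);
    assert_eq!(services.len(), 1);
    services[0].extract(Path::new("main.ts"), "import x from './x';");
    assert_eq!(active_services(&[]).len(), 0);
}
```

This keeps the granularity at the service level, so we avoid the per-datum bookkeeping the quoted bullet points would imply.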
Unrelated question: since the data structure is called `Project` for now, do you think this provides a good opportunity to improve our monorepo support as well?
So I think a little bit more concretely I would suggest something in the following direction:
This is very much like today: we load the config files. Since we decide here which rules should be enabled, we’ll know which services to activate for the next step.
Phase 1, Traversal and Discovery: this step performs the filesystem traversal and parses all the supported files. It also extracts file-specific data for the enabled services.
Examples of services could be:

- Dependency extraction (`package.json`, `deno.json`)
- Project structure (`package.json`, `deno.json`)
- Type inference (`tsconfig.json`, `deno.json`)

I included project structure in this list because I think if we do this right, it might help us to fix our monorepo support. This probably means we’ll treat `package.json` as a JSON file in the project like any other, but we’ll have a service that recognizes it as a project root, extracts its dependencies, etc. It won’t care yet if it’s a nested project, since that’ll be up to the next step to resolve.
At this point we look at everything we’ve gathered and build a dependency tree. We create one or more `Project`s and define the relations between them.
This step is probably hard to parallelize, since graph manipulation is difficult to multithread, which is why I think we should have most data available already.
File resolution also happens here, and I think we can be clever here by not doing any additional filesystem I/O. We’ve already done a traversal, so we know the file paths and their contents.
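A rough sketch of resolution without additional I/O: since phase 1 already loaded every file, resolving an import is just a lookup against the in-memory map (the function and its extension-probing strategy are hypothetical simplifications).

```rust
use std::collections::HashMap;
use std::path::{Path, PathBuf};

// Resolve a relative import against the files we already loaded during
// traversal, without touching the disk again.
fn resolve_import(
    loaded: &HashMap<PathBuf, String>, // path -> file content from phase 1
    from: &Path,
    specifier: &str,
) -> Option<PathBuf> {
    let dir = from.parent()?;
    let spec = specifier.trim_start_matches("./");
    // Try the specifier as-is, then with common extensions.
    for candidate in [
        dir.join(spec),
        dir.join(format!("{spec}.ts")),
        dir.join(format!("{spec}.js")),
    ] {
        if loaded.contains_key(&candidate) {
            return Some(candidate);
        }
    }
    None
}

fn main() {
    let mut loaded = HashMap::new();
    loaded.insert(PathBuf::from("src/util.ts"), String::new());
    let resolved = resolve_import(&loaded, Path::new("src/main.ts"), "./util");
    assert_eq!(resolved, Some(PathBuf::from("src/util.ts")));
}
```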
Phase 1b: some services, such as type inference, may want to extract information from external packages. We couldn’t do that during phase 1, since we hadn’t done any file resolution yet. But now that we have identified what is missing, we can repeat phase 1 for these resources. The main difference is that it won’t be a file system traversal, but traversal of the dependency graph for missing data. It may take file system access, or even network access, to fully resolve everything though.
Phase 2, Analysis: now we have well-defined projects, their dependencies, full file resolutions, and a bunch of extracted metadata. It’s time to run our rules :)
This can be fully parallelized again since the data in the services is fixed at this point.
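To illustrate why frozen service data parallelizes easily: once phase 2 only takes shared references, analysis can fan out with scoped threads. This is a toy example, not Biome's actual analyzer; `analyze` stands in for running rules on one file.

```rust
use std::thread;

// Stand-in for running rules on one file against read-only shared data.
fn analyze(file: &str, shared: &[&str]) -> usize {
    // Toy "analysis": count in how many entries this file's name appears.
    shared.iter().filter(|other| other.contains(file)).count()
}

fn main() {
    let files = ["a", "b", "ab"];
    // Scoped threads may borrow `files` immutably: no locks, no copies
    // of the extracted data, because nothing mutates it anymore.
    let results: Vec<usize> = thread::scope(|s| {
        let handles: Vec<_> = files
            .iter()
            .copied()
            .map(|f| s.spawn(move || analyze(f, &files)))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    });
    assert_eq!(results, vec![2, 2, 1]);
}
```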
One more thing I’d like to add is that I think an approach like this could scale very well to our LSP use case too, since every time a file is changed, we just re-run the phase 1 discovery on it, after which we may only need to update the `Project` data structure(s) to get back into a fully consistent, up-to-date state again.
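A minimal sketch of that LSP flow, assuming a per-file map of extracted data (all names hypothetical): on an edit, only the changed file is re-extracted and its entry patched, leaving everything else untouched.

```rust
use std::collections::HashMap;

// Toy phase-1 extraction: collect import specifiers from a file.
fn extract_imports(content: &str) -> Vec<String> {
    content
        .lines()
        .filter_map(|l| l.strip_prefix("import ").map(str::to_string))
        .collect()
}

// On an LSP change event, re-run discovery for the one changed file
// and patch its entry; project-wide structures update from this entry.
fn on_file_changed(
    extracted: &mut HashMap<String, Vec<String>>,
    path: &str,
    new_content: &str,
) {
    extracted.insert(path.to_string(), extract_imports(new_content));
}

fn main() {
    let mut extracted = HashMap::new();
    on_file_changed(&mut extracted, "main.ts", "import a\nlet x = 1;");
    assert_eq!(extracted["main.ts"], vec!["a".to_string()]);
    // Edit the file: only this entry is recomputed.
    on_file_changed(&mut extracted, "main.ts", "import a\nimport b");
    assert_eq!(extracted["main.ts"], vec!["a".to_string(), "b".to_string()]);
}
```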
There's also one more thing that I would like to consider as part of this refactor/feature: `salsa`.
If you don't know `salsa`, it is an on-demand computation and memoisation framework.
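To illustrate the concept (this is not salsa's actual API, just the idea of on-demand computation plus memoisation): a query is computed the first time it is asked for, then served from a cache until its inputs change.

```rust
use std::collections::HashMap;

// Toy "database": caches query results keyed by input.
struct Db {
    cache: HashMap<String, usize>,
    computations: usize, // how many times we actually computed
}

impl Db {
    fn new() -> Self {
        Db { cache: HashMap::new(), computations: 0 }
    }

    // A "query": line count of a file, memoised per key.
    fn line_count(&mut self, key: &str, content: &str) -> usize {
        if let Some(&v) = self.cache.get(key) {
            return v; // served from cache, nothing recomputed
        }
        self.computations += 1;
        let v = content.lines().count();
        self.cache.insert(key.to_string(), v);
        v
    }
}

fn main() {
    let mut db = Db::new();
    assert_eq!(db.line_count("a.ts", "one\ntwo"), 2);
    assert_eq!(db.line_count("a.ts", "one\ntwo"), 2);
    assert_eq!(db.computations, 1); // the second call hit the cache
}
```

Real salsa additionally tracks dependencies between queries, so that editing one input invalidates exactly the derived results that depend on it.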
I have been looking at `salsa` for a looooong time, and we've had plans to try to introduce it inside Biome. Ruff is also evaluating it. There's a big question mark, though: at the moment, they are rewriting it, and they will land the 3.0 version. However, development is very slow, and we currently don't know when they will finish it. That said, I know that Micha from Ruff is currently helping with development, and the team had a meeting about shipping v3 and more. So I am hopeful.
Their website lists the documentation of the upcoming v3, and the old website is now gone.
Another word of advice: `salsa` isn't a library; it is a framework, which means our code architecture may have to change to accommodate it. Some months ago, I also tried for fun to include the old `salsa`, and I am certain that things will have to change; however, we can incrementally add stuff to the database.
The documentation is a bit lacking, especially when it comes to how to use `salsa` inside a query compiler or IDE. However, I know that they plan to expand the docs in that territory.
From a CLI perspective, a way to build a `Project` and its graph is by doing two passes over the file system: one pass to collect all files, and a second pass to check the files. The second pass is more challenging because, between the two passes, we don't have an entity that will tell the `WorkspaceServer` which files should be checked.
One thing I am afraid of is unnecessary traversing of directories (including symbolic links). In the past we had some performance issues with that.
We could take a more granular approach, where we load the files in the currently traversed directory and decide, based on `.gitignore` and the global `ignore`/`include` settings, which subdirectories to traverse.
This has one drawback: a handled file may import a file in an ignored directory. However, even by traversing the whole file hierarchy, we still have this problem, because a handled file can import an unhandled file outside the traversed hierarchy. We could use on-demand traversal for these cases. I think we could even give up on this, because if the user has explicitly asked to (globally) ignore a file or directory, this is surely for a good reason. The user might be confused by getting parsing errors for an ignored file.
Note that we need to clarify one thing: if a user globally includes directories, we should only process them. However, if a user doesn't use `include` and only uses `ignore`, then technically we could process files outside the traversed hierarchy. Is that what we want?
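A toy sketch of the traversal decision described above, with naive prefix matching standing in for real glob handling (an assumed simplification): ignored subtrees are never entered, and with an explicit `include` only included subtrees (or their ancestors) are traversed.

```rust
use std::path::Path;

// Decide whether to descend into `dir`, given (naive, prefix-based)
// include/ignore path lists standing in for real glob patterns.
fn should_traverse(dir: &Path, include: &[&str], ignore: &[&str]) -> bool {
    let s = dir.to_string_lossy();
    if ignore.iter().any(|p| s.starts_with(p)) {
        return false;
    }
    // With an explicit `include`, only traverse included subtrees,
    // or ancestors that may lead to an included subtree.
    if !include.is_empty() {
        return include
            .iter()
            .any(|p| s.starts_with(p) || p.starts_with(s.as_ref()));
    }
    true
}

fn main() {
    assert!(!should_traverse(Path::new("node_modules"), &[], &["node_modules"]));
    assert!(should_traverse(Path::new("src"), &["src"], &[]));
    // "src" is an ancestor of the include "src/lib", so we must enter it.
    assert!(should_traverse(Path::new("src"), &["src/lib"], &[]));
    assert!(!should_traverse(Path::new("test"), &["src"], &[]));
}
```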
Using this approach will require us to make the semantics of our `ignore`/`include` slightly more precise, or change them:

- `ignore`/`include` and `.gitignore` control which directories we traverse and which files we process (parsing, type extraction, dependency graph).
- `ignore`/`include` don't prevent traversal of a directory or processing (parsing, type extraction, ...) of a file; they only prevent linter/formatter execution.
However, I am unsure where the traversal of the file hierarchy should start if we apply such semantics. Maybe the current directory where the command is executed? Or the directory where the Biome config file appears? (This could be a bad idea if a project has no config file.)

> Fortunately, a module graph can be built with petgraph, which implements these traits.
It is not clear to me why a module graph is needed for multi-file analysis. We just need a way to resolve imported files and a way to extract data for those files. Of course, the module dependency graph will be really useful for import analysis (cycle detection, ...).
> Phase 1: Traversal and Discovery
> Phase 2: Analysis
If we keep our linter query-based system, most of the current linter rules could be executed in the first phase because they visit the AST. Is there any reason to defer them until after the dependencies have been resolved?
> The main difference is that it won’t be a file system traversal, but traversal of the dependency graph for missing data. It may take file system access, or even network access, to fully resolve everything though.
> Another word of advice: `salsa` isn't a library; it is a framework, which means our code architecture may have to change to accommodate it. Some months ago, I also tried for fun to include the old `salsa`, and I am certain that things will have to change, however we can incrementally add stuff to the database.
I think `salsa` is a pretty impressive architecture, but I am indeed slightly wary of the amount of change it would require to fit into Biome's architecture.
Another thing to consider: as our plugin efforts progress, we may want to give plugins the ability to extract information from files, similar to how Rust services can. This might be possible with `salsa` too, but I'm not sure if it will become harder to achieve.
> We could use on-demand traversal for these cases. I think we could even give up on this, because if the user is explicitly asked to (globally) ignore a file or directory, **this is surely for a good reason**. The user might be confused by getting parsing errors for an ignored file.
(emphasis mine)
This sounds dangerous. I think ignore instructions from the user should be respected as far as analysis is concerned, but not necessarily as far as extraction/discovery is concerned. Example: `node_modules` is usually ignored, but you'd still want to extract type information from it.
> Note that we need to clarify one thing: if a user globally includes directories, we should only process them. However, if a user doesn't use include and only uses ignore, then technically we could process files outside the traversed hierarchy. Is that what we want?
I think processing files outside the traversed hierarchy is indeed what we want, and we may even need to extract information from dependencies. I would say this is true even if the user only specified a limited set of `includes`. In my suggestion, this is effectively what Phase 1b is about, which indeed would have the ability to traverse files/directories outside the initial traversal.
> It is not clear to me why a module graph is needed for multi-file analysis. We just need a way to resolve imported files and a way to extract data for those files. Of course, the module dependency graph will be really useful for import analysis (cycle detection, ...).
I think it's more a convenience than a necessity. Indeed as soon as you extract imports and are able to resolve them, you already have the graph implicitly. Putting it in a data structure makes access more convenient. Cycle detection is indeed a good example of that :)
I wouldn't say it necessarily needs to be a `petgraph`, btw. As long as we track, for each file in our workspace, a simple list of the (resolved) paths of the other workspace files it depends on, that might also be enough.
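For example, such a "simple list of paths" representation is already enough for cycle detection. A toy sketch (not Biome code, and plain `&str` keys stand in for resolved paths): the graph is just a map from file to its resolved dependencies, and a DFS finds back edges.

```rust
use std::collections::{HashMap, HashSet};

// Each file maps to the (already resolved) files it imports.
type Deps = HashMap<&'static str, Vec<&'static str>>;

fn has_cycle(deps: &Deps) -> bool {
    // DFS with a "currently visiting" set: re-entering a node on the
    // current path is a back edge, i.e. an import cycle.
    fn visit<'a>(
        deps: &'a Deps,
        node: &'a str,
        visiting: &mut HashSet<&'a str>,
        done: &mut HashSet<&'a str>,
    ) -> bool {
        if done.contains(node) {
            return false;
        }
        if !visiting.insert(node) {
            return true; // back edge found
        }
        for &dep in deps.get(node).into_iter().flatten() {
            if visit(deps, dep, visiting, done) {
                return true;
            }
        }
        visiting.remove(node);
        done.insert(node);
        false
    }
    let (mut visiting, mut done) = (HashSet::new(), HashSet::new());
    deps.keys().any(|&n| visit(deps, n, &mut visiting, &mut done))
}

fn main() {
    let acyclic: Deps = HashMap::from([("a", vec!["b"]), ("b", vec![])]);
    assert!(!has_cycle(&acyclic));
    let cyclic: Deps = HashMap::from([("a", vec!["b"]), ("b", vec!["a"])]);
    assert!(has_cycle(&cyclic));
}
```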
> If we keep our linter query-based system, most of the current linter rules could be executed in the first phase because they visit the AST. Is there any reason to defer them until after the dependencies have been resolved?
You're right, they could be. The main reason I figured it might be better to do all analysis in phase 2 was simplicity: If all analysis rules run in the same phase, there might be less bookkeeping involved.
Maybe your suggestion could actually improve performance by giving the workers more to do while we also perform a lot of filesystem I/O, so I wouldn't rule out that option either.
We were discussing this on Discord, but adding here for visibility: I think it could be a good idea to build an initial version without dependency graph and use it to power the GraphQL analyzer. Might be a good PoC for a first iteration.
Update: I've been doing some more exploration on how we can implement some of the stuff described here. This has helped me to realize some of the difficulties that lie ahead of us, so I'll share my insights here:
- I'm working on a `DependencyGraph` service that could be used by lint rules to get information about resolved paths from import statements, which will unlock capabilities such as cycle detection.
- The `DependencyGraph` I'm trying to implement certainly has overlap with the `Project` type proposed in this issue. In my mind, the `Project` type could maybe be renamed to something like `ProjectLayout`, and would contain information about `package.json` files (plural!) so that it may help us towards our promised #2228 as well. `ProjectLayout` could also contain `tsconfig.json` files, to make us aware of TypeScript aliases (among other things). At that point, the `DependencyGraph` would likely start to depend on the `ProjectLayout` in order to resolve import paths that depend on aliases or packages within the same repository.
- The `DependencyGraph` gets constructed between phases. However:
  - This requires a `Project`-like data structure inside the workspace server, but it also implies we need to move file traversal and loading as a whole into the workspace server.
  - The LSP needs the `DependencyGraph` as well. But we don't want the server to need to do a full retraversal on every LSP request. So this implies we need caching as well.
For those curious, my current branch can be explored here, but it's in a bit of a messy state, since I also took some wrong corners before learning some of the lessons I shared here :sweat_smile: https://github.com/arendjr/biome/tree/dependency-graph
Update: I've set apart my ideas a bit more clearly here as well: https://github.com/biomejs/biome/discussions/4653
Preface
We want to implement a feature that allows the various features of Biome to tap into information regarding other files. At the moment, features like lint rules don't have access to information that belongs to other files. Some examples are exported bindings, exported types, types of modules, etc.
An entity we could call `Project` (the name is not set in stone) should provide this kind of information.

Design
Below, I will list some design aspects of the feature, which we can discuss. Don't take everything as set in stone, but assimilate it and then debate it. I will try my best to outline the current architecture as well as I can.
First, the architecture should follow this principle:
Playing actors
At the moment, we have the following actors at play:

- The `FileSystem` trait: a generic trait meant to crawl the file system and evaluate whether certain paths (directories, files, symbolic links, etc.) should be checked. Two types implement this trait: `OsFileSystem` and `MemoryFileSystem`. The `FileSystem` doesn't know which files should be handled; the logic that checks whether those paths can be handled is implemented elsewhere, via another trait called `TraversalContext`. As you can see, we are going elsewhere. `OsFileSystem`, however, has an interner for paths, but it isn't related to the paths checked.
- The `WorkspaceServer` can be seen as our query compiler. The end consumer (e.g. CLI or LSP) has control over the compiler and tells it what to do. The `WorkspaceServer` has some characteristics: `ProjectData` is already in use, and it knows that a client might have different projects with different settings: https://github.com/biomejs/biome/blob/eafc62e5d8096d56317b2f0c4d8d652c2cf25964/crates/biome_service/src/settings.rs#L52-L57
- The `FileSystem` and the `WorkspaceServer` rarely interact with each other. The only moment when they do is for just one operation: `fs.open_file` -> `workspace.open_file`.
Possible data flow
I envision the `Project` to be part of the `WorkspaceServer`. If so, `Project` must be `Send`, `Sync`, and `RefUnwindSafe`. This means that we need to choose data that meets those criteria. Fortunately, a module graph can be built with `petgraph`, which implements these traits.
A `Project` shouldn't duplicate information that is already present in the `WorkspaceServer`, e.g. file content, CST.

What we should strive for
Here are some ideas that we should consider while implementing the architecture:
CLI
From a CLI perspective, a way to build a `Project` and its graph is by doing two passes over the file system: one pass to collect all files, and a second pass to check the files. After the first pass, it should be possible to build the `Project` we require.

The second pass is more challenging because, between the first pass and the second pass, we don't have an entity that will tell the `WorkspaceServer` which files should be checked. Should we create one? Should we extend `FileSystem` to tell us? Let's discuss it.

LSP
From the LSP perspective, it's easier to initialise a new `Project`, but it will be more challenging to update it. Every time a file is updated/created/deleted, the graph should be updated.

References
Here, Jeroen Engels, creator of `elm-review`, explains how he does multi-file analysis inside the linter: https://jfmengels.net/multi-file-analysis/ . However, let's take into consideration that we have different constraints and requirements.

Tasks
I haven't talked about dependencies. Dependencies will be part of the graph, but they can be added later on, as we need a manifest to collect them (which we already have).
- Create `Project` inside `biome_project`

(Feel free to add more tasks if you think they are needed)