Features that are currently working

Features that likely need upstream changes (e.g. in naga_oil)

Types of imports that currently resolve to the correct declarations

Directly imported symbols.

Symbols from imported namespaces, e.g.:

```wgsl
#import bevy_pbr::mesh_functions

fn foo() {
    let bar = mesh_functions::get_model_matrix(...);
                              ^^^^^^^^^^^^^^^^
}
```
The imported namespaces themselves, e.g.:

```wgsl
#import bevy_pbr::mesh_functions

fn foo() {
    let bar = mesh_functions::get_model_matrix(...);
              ^^^^^^^^^^^^^^
}
```
Fully-qualified identifiers that are defined but not imported, e.g.:

```wgsl
fn foo() {
    let bar = bevy_pbr::morph::layer_count();
              ^^^^^^^^^^^^^^^  ^^^^^^^^^^^
}
```
Import types that will require additional work
I'm currently having a hard time using Naga's APIs to find the resolved/inferred types of arbitrary identifiers, which would enable things like this field accessor to link back to the field declaration:
```wgsl
#import bevy_pbr::forward_io::Vertex

fn foo(in: Vertex) {
    let alias = in;
    let bar = alias.position;
                    ^^^^^^^^
}
```
It's likely that the same naga_oil changes that would enable diagnostics to work would also enable this, since the transpiled WGSL code would have explicit type annotations on all local variable declarations, making it easy to discover the type of `alias` by just traversing the syntax tree.
Update 3: Now that identifier tokens are mapped to their declarations after parsing, it should be relatively straightforward to "follow the breadcrumb trail" to resolve the types of identifiers accessed with a `.` postfix, and then look up the declarations of those types. Still, this will require substantial additional effort that is beyond the scope of this PR for now.
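The "breadcrumb trail" idea can be sketched with plain maps: follow the identifier token to its declared type, then look up the accessed field inside that type's declaration. Everything here is illustrative (the `Loc`-style string values, the map shapes, the `resolve_field_decl` helper) and is not the server's actual data model:

```rust
use std::collections::HashMap;

// Hypothetical sketch: identifier -> declared/inferred type name, and
// struct type -> (field name -> field declaration location).
fn resolve_field_decl<'a>(
    ident_types: &'a HashMap<&'a str, &'a str>,
    struct_fields: &'a HashMap<&'a str, HashMap<&'a str, &'a str>>,
    ident: &str,
    field: &str,
) -> Option<&'a str> {
    // Step 1: follow the identifier to its type.
    let ty = ident_types.get(ident)?;
    // Step 2: look up the field's declaration inside that type.
    struct_fields.get(ty)?.get(field).copied()
}

fn main() {
    // `let alias = in;` where `in: Vertex` gives `alias` the type `Vertex`.
    let ident_types = HashMap::from([("alias", "Vertex")]);
    // `Vertex` declares a `position` field at some (made-up) location.
    let struct_fields = HashMap::from([(
        "Vertex",
        HashMap::from([("position", "forward_io.wgsl:12")]),
    )]);
    let decl = resolve_field_decl(&ident_types, &struct_fields, "alias", "position");
    assert_eq!(decl, Some("forward_io.wgsl:12"));
}
```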
Housekeeping
`server::utils::find_decl` is probably the fugliest abstraction I've ever written — it's awkward to use and hard to read and reason about. Some high-level refactoring might be a good idea — maybe a `PostUpdate` system that finds the declarations for all identifier tokens and stores them in a `HashMap` or something after a file has been parsed and validated?
Update: There is now a `PostUpdate` system that runs through every identifier token of every parsed WGSL file to find its declaration location. If found, it adds:

- the found location to a map of tokens -> declaration locations
- the token's location to a map of declaration tokens -> reference locations
Once this is done, feature providers like hover, go-to-definition, and find-all-references can just look up the result in the corresponding hashmap.
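As a rough sketch of those two maps outside the ECS (all of the types, location tuples, and the `build_maps` helper below are illustrative stand-ins, not the server's actual code):

```rust
use std::collections::HashMap;

// Simplified stand-ins for source locations (file name + offset);
// the real server presumably tracks richer span information.
type TokenLoc = (&'static str, usize);
type DeclLoc = (&'static str, usize);

// For every identifier token whose declaration could be resolved,
// record both directions of the mapping.
fn build_maps(
    tokens: &[(TokenLoc, Option<DeclLoc>)],
) -> (HashMap<TokenLoc, DeclLoc>, HashMap<DeclLoc, Vec<TokenLoc>>) {
    let mut decl_of_token = HashMap::new();
    let mut refs_of_decl: HashMap<DeclLoc, Vec<TokenLoc>> = HashMap::new();
    for &(token, decl) in tokens {
        if let Some(decl) = decl {
            // tokens -> declaration locations (hover, go-to-definition)
            decl_of_token.insert(token, decl);
            // declaration tokens -> reference locations (find-all-references)
            refs_of_decl.entry(decl).or_default().push(token);
        }
    }
    (decl_of_token, refs_of_decl)
}

fn main() {
    let decl = ("mesh_functions.wgsl", 10);
    let tokens = [
        (("foo.wgsl", 3), Some(decl)),
        (("foo.wgsl", 7), Some(decl)),
        (("foo.wgsl", 9), None), // unresolved identifier: skipped
    ];
    let (decl_of_token, refs_of_decl) = build_maps(&tokens);
    assert_eq!(decl_of_token[&("foo.wgsl", 3)], decl);
    assert_eq!(refs_of_decl[&decl].len(), 2);
}
```

Feature providers then answer a request with a single map lookup instead of re-walking the syntax tree.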
However, the current implementation as written is actually slower than the previous solution, because it's now doing this every single update. Updates occur whenever a notification or request is received from the client. Since every WGSL document read involves a separate notification, the system re-processes every new and existing document every time a new document is discovered and parsed, which happens for every WGSL document in the workspace on startup.
One option for optimization would be to wait until all pending documents are read and parsed before running this system for the first time. However, we would still end up re-processing every document in the workspace every time any document is updated (i.e. while the user is typing), which is less than ideal.
A more robust optimization (but trickier to implement) would be to wait until the first time we receive a request for one of the features dependent on this processing, and thereafter only re-process documents which have changed or whose dependencies have changed (the latter bit being the tricky part).
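The tricky part — re-processing a changed document plus everything that transitively depends on it — amounts to a reachability walk over the dependents graph. A minimal sketch, assuming a plain map of document -> dependents rather than the actual ECS representation:

```rust
use std::collections::{HashMap, HashSet, VecDeque};

// Collect the changed document plus all of its transitive dependents.
// Names and the map shape are illustrative.
fn invalidation_set(
    dependents: &HashMap<&'static str, HashSet<&'static str>>,
    changed: &'static str,
) -> HashSet<&'static str> {
    let mut dirty = HashSet::from([changed]);
    let mut queue = VecDeque::from([changed]);
    while let Some(doc) = queue.pop_front() {
        if let Some(deps) = dependents.get(doc) {
            for &d in deps {
                // Only enqueue documents not already marked dirty,
                // so import cycles can't loop forever.
                if dirty.insert(d) {
                    queue.push_back(d);
                }
            }
        }
    }
    dirty
}

fn main() {
    // forward_io.wgsl is imported by pbr.wgsl, which is imported by user.wgsl.
    let dependents = HashMap::from([
        ("forward_io.wgsl", HashSet::from(["pbr.wgsl"])),
        ("pbr.wgsl", HashSet::from(["user.wgsl"])),
    ]);
    let dirty = invalidation_set(&dependents, "forward_io.wgsl");
    assert_eq!(dirty.len(), 3); // only these three need re-processing
}
```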
Update 2: The optimizations mentioned above are now done, and as a bonus, each document should now have `Dependencies` and `Dependents` components (both wrappers around `EntityHashSet`s), which will likely come in handy for other purposes in the future.
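The key invariant is that the two components are inverses of each other and must be kept in sync whenever a document's imports change. A sketch of that bookkeeping, using document names in place of Bevy entities (the `Graph` type and `set_dependencies` method are hypothetical, not the PR's actual API):

```rust
use std::collections::{HashMap, HashSet};

// Stand-ins for the Dependencies/Dependents components, keyed by
// document name instead of Entity.
#[derive(Default)]
struct Graph {
    dependencies: HashMap<String, HashSet<String>>, // doc -> docs it imports
    dependents: HashMap<String, HashSet<String>>,   // doc -> docs importing it
}

impl Graph {
    // Replace a document's imports, keeping the inverse maps consistent.
    fn set_dependencies(&mut self, doc: &str, imports: &[&str]) {
        // Remove the stale reverse edges first.
        if let Some(old) = self.dependencies.remove(doc) {
            for dep in old {
                if let Some(set) = self.dependents.get_mut(&dep) {
                    set.remove(doc);
                }
            }
        }
        // Insert the new forward and reverse edges.
        let new: HashSet<String> = imports.iter().map(|s| s.to_string()).collect();
        for dep in &new {
            self.dependents
                .entry(dep.clone())
                .or_default()
                .insert(doc.to_string());
        }
        self.dependencies.insert(doc.to_string(), new);
    }
}

fn main() {
    let mut g = Graph::default();
    g.set_dependencies("user.wgsl", &["pbr.wgsl"]);
    g.set_dependencies("user.wgsl", &["forward_io.wgsl"]);
    // The stale reverse edge is gone; the new one is present.
    assert!(!g.dependents["pbr.wgsl"].contains("user.wgsl"));
    assert!(g.dependents["forward_io.wgsl"].contains("user.wgsl"));
}
```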