Closed: yoshuawuyts closed this 8 months ago
Oh, actually just now realizing there is a collection of plugins available for this:
Heh, that's nice. I wonder if a tighter integration story would be possible / desirable? Maybe there are things on the lang server side that could be picked up?
Rust Analyzer currently only provides a way to run a single unit test.
You can also run a module with tests. But yeah, having something like test explorer above would be nice and is definitely in scope. The biggest catch here is that there's no LSP protocol for this at the moment.
Ok, looks fun, I'll do it.
The VS Code Test UI has stabilized as part of the July 2021 update! There's a guide available here on how to write extensions for this.
Integrating with this would be fantastic!
https://code.visualstudio.com/api/extension-guides/testing

We might want to support that.

The way I understand it, what VS Code wants is a global view of a hierarchy of all the tests in a project. There are two potential sources for such a view:

- statically discovered tests (e.g., re-run cargo test -- --list on every change, or use the symbol index to list every #[test] function)
- external sources -- display the result of a test run

I don't think we can support the first case correctly: in the limit, it is a global view of all the tests, and to maintain that we'd have to do O(N) work on every change in the worst case. Displaying a hierarchy of tests also feels like it duplicates other navigation functionality. So I think we should do the following:

- on the server side, provide an API to fetch the project model, such that vscode can draw one node per cargo package/target
- on the client side, change our test runners to optionally pass in the --message-format json flag, and populate nodes below each cargo target with test results

Yep, I am working on this. Now I have finished the presentation and API. Today I hope to move on to creating the observable test data structure.
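For concreteness, here is a minimal sketch of the client-side half of that proposal: spawn cargo test for one package and map each per-test event to a result. It assumes nightly libtest's JSON output (cargo test -- -Z unstable-options --format json), whose event shape is unstable; the field names below reflect current nightly output and may change.

```typescript
// Minimal sketch (not rust-analyzer's implementation): run one package's
// tests and report each outcome, assuming nightly libtest JSON events.
import * as cp from 'node:child_process';
import * as readline from 'node:readline';

interface LibtestEvent {
    type: 'suite' | 'test';
    event: string;   // "started" | "ok" | "failed" | "ignored" | ...
    name?: string;   // test path, e.g. "tests::parses_empty_input"
}

export function runPackageTests(
    packageName: string,
    onResult: (test: string, outcome: string) => void,
): Promise<number> {
    const child = cp.spawn('cargo', [
        'test', '--package', packageName, '--',
        '-Z', 'unstable-options', '--format', 'json',
    ]);
    const lines = readline.createInterface({ input: child.stdout! });
    lines.on('line', (line) => {
        let ev: LibtestEvent;
        try { ev = JSON.parse(line); } catch { return; } // skip non-JSON noise
        if (ev.type === 'test' && ev.name && ev.event !== 'started') {
            onResult(ev.name, ev.event); // "ok", "failed", or "ignored"
        }
    });
    return new Promise((resolve) => child.on('close', (code) => resolve(code ?? 0)));
}
```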
Also, about the runnable explorer: it seems that was a mistaken idea; instead, the extension may provide a task provider. But for project-study goals it may be a separate command.
Good news - the lack of this is definitely something that I got pushback on when trying to roll out rust-analyzer to teams. Can't wait to give this a go.
I am integrating the salsa database for now. I can push code that is partially working; maybe someone can help me in the Zulip chat, as I have several stupid questions. I think we can finish it at the weekend if someone who can help is available on Zulip.
Looking at the code will be helpful -- I am surprised that this needs to touch the salsa layer at all.
If we want to have information for the entire workspace, we need storage, and if I understand correctly, we use salsa for that purpose. I think it will also be needed for future goals.
If we want to have information for the entire workspace
We shouldn't support getting the list of all the tests for the entire workspace. That's an inherently unscalable operation: to recompute this list after an O(1) edit, we'd have to do O(N) processing regardless of any salsa tricks we do.
The VS Code API supports that, but that API doesn't work for big workspaces with languages like Rust, where you can't list the tests by looking at isolated files.
As per architecture invariant in https://github.com/rust-analyzer/rust-analyzer/blob/master/docs/dev/architecture.md#crateside, we shouldn't look at the VS Code APIs when designing ours. We should do what makes the most sense.
Hey, I closed #5765 as it doesn't look like a viable approach. Here's the roadmap for what I think would be a better way to solve this:

- on the server, expose the information about the cargo project structure (which packages and targets are there) (this needs to be a new LSP extension)
- on the server, expose information about available tests in a specific file (this is already available via runnables lsp)

The VS Code side of things should combine these two sources of information and feed them into the test UI. Additionally, the VS Code side should be in charge of actually running cargo test, collecting test results, and displaying which tests were run after the fact.
If anyone wants to implement this issue, it's yours! If we get several competing impls, that's fine -- we should be able to combine the best ideas from them.
Now, it might be the case that I am very wrong here, and something along the lines of #5765 is what we need here. I can get convinced by a small and working prototype where we can concretely discuss various algorithmic complexity implications. I won't have time to dive deep into "let's redesign the world" partial diffs.
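To make the first roadmap bullet concrete, here is one hypothetical shape such an LSP extension could take; the request name and all field names are invented for illustration and are not rust-analyzer's actual protocol.

```typescript
// Hypothetical wire shapes for the two sources in the roadmap (names are
// illustrative only, not rust-analyzer's actual LSP extension).

// Source 1: cargo project structure, one node per package/target.
// e.g. a request like "rust-analyzer/cargoWorkspace" could return:
interface CargoWorkspace {
    packages: Array<{
        name: string;                       // cargo package name
        targets: Array<{
            name: string;
            kind: 'lib' | 'bin' | 'test' | 'bench' | 'example';
        }>;
    }>;
}

// Source 2: per-file test info, roughly what the existing experimental
// runnables request already provides (shape simplified here).
interface Runnable {
    label: string;                          // e.g. "test tests::it_works"
    kind: 'cargo';
    args: { cargoArgs: string[]; executableArgs: string[] };
}
```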
One of the main points of disagreement was that Kladov does not want to implement showing runnables for the entire project. That would enable, in the future, automatically re-running tests when making changes to a test or related code, keeping a semantic run history, and so on. Instead, Kladov proposes to settle on a solution of the form cargo test --package package-name.
Thought I'd chime in as the maintainer of the aforementioned existing extension that leverages the now-legacy Test Explorer plugin model :wave:
Wanted to say that I'd love to see this incorporated natively in RA! That's partly because I think it makes for a better developer experience, partly because I never had the bandwidth to expand the existing extension beyond its current rudimentary state, and somewhat selfishly :smile:, partly because I'd love to not have to convert that one to the new testing API.
I definitely get the concern around performance, especially in large projects, but also wanted to share that in my experience the Explorer/Hierarchy views of tests aren't automatically updated on changes, at least not by default. The tree is defined up front on the activation event, and users would then have to manually click the 'reload' button if they'd added/modified tests and wanted the Explorer/Hierarchy view to reflect those changes.
I've not looked at the new Test API too closely, but I wonder if similar behavior might be possible? I do think it's important that users at least have the option of being able to display the full hierarchy of their tests, even if it's non-default behavior they have to configure. Several of the developers I'm working with are coming from the JetBrains/Visual Studio world with other languages and tell me they find that type of Test visualization in VS Code helpful while learning Rust.
Yeah, my thinking here is that after the user runs the test, the hierarchy is populated with all the tests executed (the implementation would leverage the --message-format=json argument to cargo test). The "reload" button then is just the "run" button.
By the way, in my implementation, I restored the display functionality in hovers. And it is almost ready; now basically only the transfer protocol remains to be completed. Are you more interested in this?
Are you more interested in this?
I'm not 100% sure if this question was targeted to me, but I'll share my two cents anyway. I'm a big believer in first framing these types of discussions in terms of the capabilities, e.g. the "what" before the "how" as I find the former helps keep the user top of mind.
As both a user of test explorer capabilities across a number of programming languages, and a maintainer of the aforementioned test explorer extension for Rust, IMO it's critical a test explorer for Rust provide users with the ability to:
- (a) View hierarchical representation of individual test cases in the current/open project (note that this does not necessarily need to be automatically updated as the user modifies code. In my experience the test explorer does not automatically update for any language and requires users to manually hit the "reload" button when they want to update this visual representation after adding/removing test code)
- (b) Execute individual test cases and execute "groups" of related tests (lots of ways to slice what a "group" is for Rust, but I think by module, crate, package, target, type, etc. are some options worth considering)
- (c) View most recent results for any test executed via the explorer
There's certainly plenty of other features/capabilities that tie into other parts of the editor (e.g. debugging, navigation between source/explorer view, context menu items, etc.), but the three I mentioned above constitute what I view to be the "core" capabilities. My concern is that if the visual/explorer test experience for Rust won't have those features then IMO there'd be a degraded user experience with Rust that sounds like it could differ from what users would be accustomed to in the test explorer with other languages, both in terms of features and behavior.
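For reference, those three core capabilities map quite directly onto VS Code's testing API. A minimal sketch follows, where discoverWorkspaceTests and runCargoTests are hypothetical helpers standing in for the actual discovery and cargo plumbing:

```typescript
import * as vscode from 'vscode';

// Hypothetical helpers; discovery and cargo plumbing are out of scope here.
declare function discoverWorkspaceTests(
    controller: vscode.TestController,
): Promise<vscode.TestItem[]>;
declare function runCargoTests(
    request: vscode.TestRunRequest,
    run: vscode.TestRun,
    token: vscode.CancellationToken,
): Promise<void>;

export function activate(context: vscode.ExtensionContext) {
    // (a) The controller owns the hierarchical view of test items.
    const controller = vscode.tests.createTestController('rustTests', 'Rust Tests');
    context.subscriptions.push(controller);

    controller.resolveHandler = async (item) => {
        if (!item) {
            // Populate root nodes (e.g. one per package/target) on demand.
            for (const root of await discoverWorkspaceTests(controller)) {
                controller.items.add(root);
            }
        }
    };

    // (b) Running an individual test or any subtree ("group") of the hierarchy.
    controller.createRunProfile(
        'Run',
        vscode.TestRunProfileKind.Run,
        async (request, token) => {
            // (c) Most recent results are reported through the TestRun.
            const run = controller.createTestRun(request);
            await runCargoTests(request, run, token); // calls run.passed()/run.failed()
            run.end();
        },
        true, // default profile
    );
}
```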
Hello, I have rewritten most of the code and have now restored the functionality we had before. On the points: (a, c) almost ready; what remains is forming the delta changes and sending them (the changes themselves and the protocol are already defined). I decided not to have the client request updates, but to push delta updates while rust-analyzer works. (b) This has not been done yet, but I think it will not take much code.
Interesting note: it seems that cargo test -- --list --format=json is, first of all, unstable, and secondly, currently doesn't respect the --format argument. I was going to take a stab at implementing this solely on the vscode plugin side (and might still), but the lack of a machine-readable format for --list is an unfortunate factor.

@godcodehunter, what's the status of your implementation?
I'm blocked by https://github.com/salsa-rs/salsa/pull/273; I wrote to Niko and am now waiting for a merge. Alternatively, there is the hack of just copying and diffing by hand. Everything except computing the diff and sending the changes to the client side is finished.
@godcodehunter Copying and diffing on the LSP layer is IMO not a hack, but likely the right solution. I am actually not sure how you intend to use https://github.com/salsa-rs/salsa/pull/273 for this, but it doesn't seem like the intended use case to me.
Interesting note: it seems that cargo test -- --list --format=json is, first of all, unstable
About that... https://github.com/rust-lang/rust/issues/97242 😅
Anyway, I hacked together a cargo-only testing extension: https://github.com/CAD97/cargo-testing-panel
It's still super rough around the edges and falls apart if you look at it wrong, but it's a proof of concept of what the functionality could look like. Disclaimer: I am not a typescript developer, so everything is super sketchy.
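Absent a machine-readable format, a proof of concept like this has to scrape the stable, human-oriented --list output. A minimal sketch of that scraping, assuming the usual one-test-per-line "path::to::test: test" format (which, to be clear, is not a stability guarantee):

```typescript
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);

// Scrape `cargo test -- --list`, whose stable output is human-oriented:
// one "path::to::test: test" line per test plus a trailing summary line.
export async function listTests(packageName: string): Promise<string[]> {
    const { stdout } = await execFileAsync('cargo', [
        'test', '--package', packageName, '--', '--list',
    ]);
    const tests: string[] = [];
    for (const line of stdout.split('\n')) {
        // Matches "<test path>: test"; skips benchmarks and the summary line.
        const m = /^(\S+): test$/.exec(line.trim());
        if (m) tests.push(m[1]);
    }
    return tests;
}
```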
I definitely get the concern around performance, especially in large projects, but also wanted to share that in my experience the Explorer/Hierarchy views of tests aren't automatically updated on changes, at least not by default. The tree is defined up front on the activation event, and users would then have to manually click the 'reload' button if they'd added/modified tests and wanted the Explorer/Hierarchy view to reflect those changes. [@calebcartwright]
There doesn't appear to be a reload button in the builtin testing window. I suppose a testing/item/context contribution could be added to make a reload option? (Of course I can just add an intent directly.) Using item.canResolveChildren might also be interesting; I think r-a already does cargo test --workspace --all-targets --no-run --message-format=json (since I copied the code for dispatching that, anyway), so the root list of test packages/binaries can be populated by that plus watching for the creation of new targets, with cargo test -q --package=crate --target=target -- --list then run on-demand by canResolveChildren.
I'll probably evolve my extension to use cargo-nextest if I keep working on it, and let r-a do the std libtest integration.
Or, of course, a more integrated solution where r-a does the incremental test discovery itself is also possible.
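A sketch of the target-discovery step described above: cargo's --message-format=json emits one JSON object per line, and compiler-artifact messages with profile.test set carry the path of a test executable. Treat the exact field shapes here as an approximation of cargo's JSON message format.

```typescript
import * as cp from 'node:child_process';

// Each line of `--message-format=json` output is a JSON object; artifacts
// built with `profile.test == true` carry the path of a test executable.
interface CompilerArtifact {
    reason: string;                           // e.g. "compiler-artifact"
    target?: { name: string; kind: string[] };
    profile?: { test: boolean };
    executable?: string | null;
}

export function discoverTestBinaries(): Promise<Map<string, string>> {
    return new Promise((resolve, reject) => {
        cp.execFile(
            'cargo',
            ['test', '--workspace', '--all-targets', '--no-run', '--message-format=json'],
            { maxBuffer: 64 * 1024 * 1024 }, // build output can be large
            (err, stdout) => {
                if (err) return reject(err);
                const binaries = new Map<string, string>(); // target name -> path
                for (const line of stdout.split('\n')) {
                    let msg: CompilerArtifact;
                    try { msg = JSON.parse(line); } catch { continue; }
                    if (msg.reason === 'compiler-artifact' && msg.profile?.test && msg.executable) {
                        binaries.set(msg.target?.name ?? msg.executable, msg.executable);
                    }
                }
                resolve(binaries);
            },
        );
    });
}
```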
@godcodehunter Copying and diffing on the LSP layer is IMO not a hack, but likely the right solution. I am actually not sure how you intend to use salsa-rs/salsa#273 for this, but it doesn't seem like the intended use case to me.
I already explained why this is a mistake; I'm not going to do it again.
Here's the roadmap for what I think would be a better way to solve this: [@matklad]

- on the server, expose the information about the cargo project structure (which packages and targets are there) (this needs to be a new LSP extension)
- on the server, expose information about available tests in a specific file (this is already available via runnables lsp)

Having worked with the vscode testing API a bit, this is roughly what I think this could ideally look like (modulo I don't know the LSP naming conventions):

- server: controller.resolveHandler(undefined)
- server->client: requestRootTestables
- client: (equivalent of) cargo test --workspace --all-targets --no-run to discover testable targets
- client: watch set of testable targets
- client->server: provideRootTestables
- server: controller.items.add(controller.createTestItem(...)) for each testable target
- ...
- client: sees testable target set change
- client->server: updateRootTestables
- server: adjust controller.items as prompted
- ...
- server: controller.resolveHandler(testItem)
- server->client: requestChildTestables(testItem)
- client: create child testable set for:
  - each module in the target root (or selected module), with "I have children" flag
  - each test in the target root, with "I'm a root runnable" flag
- client->server: provideChildTestables
- ...
- server: runHandler(testRunRequest, cancellationToken)
- server->client: runTestables(testRunRequest)
- client: do the bits necessary to run all (children) tests as requested
- client->server: updateChildTestables with the discovered updated testable hierarchy
- client->server: provideTestResults as test results come in

The most notable thing lacking from the vscode testing API is that while it provides "test succeeded," "test failed," and "test errored" (could not be run) results, there's no "test ignored" result. I've currently implemented ignore as just not providing a result. (However, I do provide an explicit result for the root run request to provide the duration, and that means you can "run" an ignored test directly to get a pass. I can patch this by checking for the "0 passed" case in the suite finished event.)
Maybe you are interested in my code? It is not that complex and already does what you describe: https://github.com/godcodehunter/rust-analyzer

As for the blocker (https://github.com/salsa-rs/salsa/pull/273), Niko says he will try to merge it by the end of next week.
What's blocking this?
Can anyone look into this? It seems ready to be merged
https://github.com/salsa-rs/salsa/pull/273 has been closed in favor of https://github.com/salsa-rs/salsa/pull/275. What is the current status of this feature? Is it possible to move forward, or is it abandoned?
No, the code passes all the data to the extension. There is a bug where the test id is a number but should be a string. If you want, you can finish it. I'm currently in exile and don't have much time for that. P.S. If you don't want to read the code: I plan to return to work on it in a week or two.
The most notable thing lacking from the vscode testing API is that while it provides "test succeeded," "test failed," and "test errored" (could not be run) results, there's no "test ignored" result
@CAD97 is this distinct from the "Skipped" state -- could you elaborate here?
@connor4312 I believe TestRun.skipped did not exist when I first investigated, and the API only provided TestRun.passed, TestRun.failed, and TestRun.errored. (I was unable to find any documentation of what VSCode version added the API to confirm this.)
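With the current API the mapping is straightforward, including ignored -> skipped, which removes the need for the "no result" workaround. A minimal sketch, with libtest's event names assumed from its (unstable) JSON output:

```typescript
import * as vscode from 'vscode';

// Map a libtest outcome onto the TestRun states. "ignored" now has a
// natural home in run.skipped(), distinct from errored ("could not be run").
export function reportOutcome(
    run: vscode.TestRun,
    item: vscode.TestItem,
    event: string,                 // "ok" | "failed" | "ignored" | ...
    output?: string,
) {
    switch (event) {
        case 'ok':
            run.passed(item);
            break;
        case 'failed':
            run.failed(item, new vscode.TestMessage(output ?? 'test failed'));
            break;
        case 'ignored':
            run.skipped(item);
            break;
        default:
            run.errored(item, new vscode.TestMessage(`unexpected event: ${event}`));
    }
}
```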
Sorry for multiple comments, didn't have time to read the full thread yesterday.
There doesn't appear to be a reload button in the builtin testing window
There is one if you register a TestController.refreshHandler: https://insiders.vscode.dev/github.com/microsoft/vscode/blob/9c76d66e695381ac392144bc3ec5c7e21edbb472/src/vscode-dts/vscode.d.ts#L15880-L15891
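So the "reload tests" affordance reduces to registering that handler. A minimal sketch, where discoverWorkspaceTests is again a hypothetical discovery helper:

```typescript
import * as vscode from 'vscode';

declare function discoverWorkspaceTests(
    controller: vscode.TestController,
): Promise<vscode.TestItem[]>;

// Registering refreshHandler is what makes the reload button show up in
// the Testing view; here it simply re-runs discovery and swaps the roots.
export function enableRefresh(controller: vscode.TestController) {
    controller.refreshHandler = async (_token: vscode.CancellationToken) => {
        controller.items.replace(await discoverWorkspaceTests(controller));
    };
}
```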
Is there some way to try the current implementation for this functionality out, however rough around the edges it might be?
Is there some way to try the current implementation for this functionality out, however rough around the edges it might be?
https://github.com/godcodehunter/rust-analyzer/tree/ohh-god. Just run the analyzer with the extension. The implementation correctly passes messages to the VS Code client, but the client handles them incorrectly (updating the tree). I stopped at this point.
Hi guys, I just implemented another PR based on the thinking in https://github.com/rust-lang/rust-analyzer/issues/3601#issuecomment-945778285 (although it's still somewhat different; now it's the client that controls the life-cycle of tests).

There is still a lot of room for improvement, but I think it could be reviewed now to avoid any early design issues.

So please review it, or give it a try if you are interested :) https://github.com/rust-lang/rust-analyzer/pull/14589
My PR actually needs minor fixes (debugging) and test migration. I don't know if I'll find time for it, because, to be honest, I'm freelancing right now. I hope that my code with incremental updates, including a lot of the work done on test management, will serve as a starting point for someone. But you can consider it abandoned; I was very sorry that my work did not arouse interest in the community.
Lack of this feature is actually the one thing that makes RustRover attractive for working with code that has plenty of tests written, even if they solved running individual tests in a clumsy way (it creates a run configuration for each invocation).
If that's the only thing you lack, you can try https://marketplace.visualstudio.com/items?itemName=swellaby.vscode-rust-test-adapter

It seems to work for me, albeit I am not sure how to get the test output in this case.
Hands down, CLion actually can run the test, whereas the debugging integration in vscode is more of a "please wire up some configuration yourself". People want "just works" when they click run or debug on a test.
I've left https://marketplace.visualstudio.com/items?itemName=swellaby.vscode-rust-test-adapter on hold for quite a while now as I can't really justify investing time into it given that the capabilities (and more) are hopefully coming to RA :crossed_fingers:
There's several notable feature gaps and bugs in that extension, so YMMV
The Java plugin for VS Code comes with a built-in test explorer panel. This panel shows all tests in the project, and allows running the full suite (link).
Rust Analyzer currently only provides a way to run a single unit test. This opens the console, and runs it there. There is no way to run multiple tests, or browse the tests. Making progress towards this would probably help carry the IDE experience forward.
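For context, both the single-test runner that exists today and the "run multiple tests" gap come down to how the cargo test invocation is assembled: libtest filters tests by substring match on the test path, with --exact narrowing to one test. A small sketch of that argument construction (helper name is illustrative):

```typescript
// Sketch: the cargo invocations behind "run one test" vs "run a group".
// libtest filters by substring match on the test path; `--exact` restricts
// the run to the exact name given.
export function cargoTestArgs(pkg: string, filter?: string, exact = false): string[] {
    const args = ['test', '--package', pkg];
    if (filter) {
        args.push('--', filter);           // e.g. "tests::parses_empty_input"
        if (exact) args.push('--exact');   // run only that one test
    }
    return args;                            // without a filter: the whole package
}
```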
Example
_Java test explorer workflow example (source)_