fredrikaverpil / neotest-golang

Reliable Neotest adapter for running Go tests in Neovim.
MIT License

File run causes cov report messed up #23

Closed: quolpr closed this 2 weeks ago

quolpr commented 1 month ago

I am not sure how tests are run for the current test file (with require("neotest").run.run(vim.fn.expand("%"))), but the coverage report output actually reflects only one of the test cases instead of summing up all tests.

fredrikaverpil commented 1 month ago

Yes, this is actually known to me. It's because the tests are run one by one, even when you run all tests in a file (and when running the entire test suite).

Because of this, you'll always only see the coverage of the last executed test at the end of the execution.

The reason it works like this is that it was the fastest and simplest way to get everything working.

I've been wanting to optimize "run test file" and "run test suite" so that the test results are parsed/collected and the Neotest node tree is updated accordingly. This brings a number of benefits, such as also being a lot faster. But the downside is that it adds complexity and room for errors... which means I'm not in a hurry to implement it.

Unfortunately, there's no workaround for now.

EDIT: Well, you can of course run go test -coverprofile=coverage.out ./... outside of Neovim as a workaround.

quolpr commented 1 month ago

@fredrikaverpil yeah, makes sense 👍 One solution is spinning in my mind: what if we collected all test names from the file and then just ran them with the usual grep-style filter, like:

go test -v magnit/mag_collect_service/internal/service/order/issuer -run "^(TestIssuer_NotFound|TestIssuer_WrongState|TestIssuer_IssueCourier_WithCode)" -coverprofile ./coverage.out

Will it be easy to implement?

fredrikaverpil commented 1 month ago

Neotest keeps a node tree of tests. If you open the summary window, you can visualize this node tree. Each node has data attached, such as the test name, status and positions in the file etc. This is all detected by AST parsing (already in place).

By traversing this node tree it's possible to construct a command, kind of like the one you mentioned. Go doesn't really take a filepath, so we'll probably have to do what you suggested and list the test names. A challenge here will be constructing a valid regex.
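Leaving the tree traversal aside, the regex construction itself is straightforward to sketch in Go; `buildRunRegex` below is a hypothetical helper (not part of the adapter) that quotes each collected test name so special characters can't break the pattern:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// buildRunRegex joins test names (e.g. collected via AST parsing) into a
// value suitable for `go test -run`. Each name is quoted so that regex
// metacharacters in a test name cannot corrupt the pattern.
func buildRunRegex(testNames []string) string {
	quoted := make([]string, 0, len(testNames))
	for _, name := range testNames {
		quoted = append(quoted, regexp.QuoteMeta(name))
	}
	// Anchor the alternation so TestFoo does not also match TestFooBar.
	return "^(" + strings.Join(quoted, "|") + ")$"
}

func main() {
	names := []string{"TestIssuer_NotFound", "TestIssuer_WrongState"}
	fmt.Println(buildRunRegex(names))
}
```

Anchoring with `^(...)$` matters: without it, `-run TestFoo` also matches `TestFooBar`, which is a common pitfall.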

Then, when executing this single command, you would have to parse the output back into the node tree and populate statuses such as passed/failed, along with handling error messages. I'm not doing this today; I'm "cheating" a bit and have postponed dealing with this by running all tests on a per-test basis, which makes Neotest handle all of this somewhat automatically.

Apart from running all tests of a file like this, you could also run all tests in a folder (and its subfolders/files) in a similar fashion. I've been meaning to look into this at some point, as it's super inefficient today to run a lot of tests: they all run individually (including compilation on a per-test basis).

I'm not sure if the strategy for running all tests in a file will be the same as for all tests within a directory, since the latter is a very easy command to construct: you only have to provide the path to the folder (no need to provide a list of tests). But it would be great if the collection of test output and the populating of node tree data could use the same approach in both cases.

I've dealt with dynamically generated tests in Go, which cannot be detected by AST parsing and only show up at execution time; the test name needs runtime execution to be calculated. For this reason alone, I'm thinking it would be nice to also be able to update the node tree with additional tests that AST parsing can't detect, by running the test suite. This is actually how vscode does test detection too; you might have to run the entire test suite to detect all tests. I guess I'd call it hybrid AST/runtime test detection. But this is an edge case and not something to tackle first. 😄
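A minimal example of such runtime-only test names (hypothetical, not taken from the adapter): the names below are built from data at execution time, so no AST parser can enumerate them up front.

```go
package main

import "fmt"

// subtestNames builds subtest names from runtime data. In a real test file,
// an AST parser only ever sees t.Run(name, ...) with a variable argument,
// never the concrete strings this function produces.
func subtestNames(regions []string) []string {
	names := make([]string, 0, len(regions))
	for i, r := range regions {
		names = append(names, fmt.Sprintf("TestCheckout/%s_%d", r, i))
	}
	return names
}

func main() {
	for _, n := range subtestNames([]string{"eu", "us"}) {
		fmt.Println(n) // in a real test: t.Run(n, func(t *testing.T) { ... })
	}
}
```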

Anyway, I think step one could be to explore how to parse the output from having executed go test ./some/folder/... and properly populate the node tree with statuses. Perhaps leave error message collection out to begin with. If it's possible to do this reliably, we can build from there and apply the same logic to running the tests of a test file.

fredrikaverpil commented 1 month ago

@quolpr FYI, I made some great progress tonight on all this in #24 (starting to tackle running all tests in a folder, or rather, Go package), but it still requires a great deal of work and then a great deal of testing during daily use, so as to make sure I catch all bugs and gotchas.

quolpr commented 1 month ago

@fredrikaverpil Wow, such a detailed comment. Thank you for the clarification! Actually, after everything you described, it seems the coverage report might not be a necessary feature, considering the amount of effort required to introduce it. Additionally, it could potentially be buggy.

By the way, don't rush yourself; it's open source! 😅

quolpr commented 1 month ago

Also, it's sad that Neotest is somewhat buggy and we can't just use JSON output. That would solve a lot of problems 🤔 But the maintainers don't seem to be active for now.

fredrikaverpil commented 1 month ago

it seems the coverage report might not be a necessary feature, considering the amount of effort required to introduce it.

Coverage is always going to be janky unless you know exactly what you want to measure and understand which command is the appropriate one for measuring it. In this case, if you run a single test case, you'll get coverage for that. But if you run all tests of a file, or all tests of a folder, my goal is that you should get coverage for that. I view the current behavior of always running tests on a test-by-test basis as an "MVP", if you will. It was the fastest way to get something up and working while also learning the Neotest framework.
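For context on what those coverage numbers mean: a coverprofile is just a list of statement blocks with hit counts, and the reported percentage is the share of statements with a nonzero count. A rough sketch of computing that from a profile follows (purely illustrative; `go tool cover -func` is the real tool):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// statementCoverage computes the percent of statements covered from the body
// of a Go coverprofile (the format written by -coverprofile). Each line after
// "mode:" reads "file:startLine.startCol,endLine.endCol numStmt count".
func statementCoverage(profile string) float64 {
	var total, covered int
	for _, line := range strings.Split(strings.TrimSpace(profile), "\n") {
		if line == "" || strings.HasPrefix(line, "mode:") {
			continue
		}
		fields := strings.Fields(line)
		if len(fields) != 3 {
			continue // skip malformed lines
		}
		stmts, _ := strconv.Atoi(fields[1])
		count, _ := strconv.Atoi(fields[2])
		total += stmts
		if count > 0 {
			covered += stmts
		}
	}
	if total == 0 {
		return 0
	}
	return 100 * float64(covered) / float64(total)
}

func main() {
	profile := `mode: set
example.go:10.2,12.3 2 1
example.go:14.2,16.3 3 0`
	// 2 of 5 statements were hit.
	fmt.Printf("%.1f%%\n", statementCoverage(profile))
}
```

This also shows why running tests one by one breaks the report: each run rewrites the profile, so only the last test's hit counts survive.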

Proper coverage support by itself might not be a necessary feature, like you say. But the test execution is also extremely inefficient right now. So this is something I want to address, and I believe I've found a great way forward. But I'm starting with "run all tests in a folder (and sub-folders)" as that is a little easier, and can be built upon for "run all tests of a file".

neotest is somehow buggy and we can't use just JSON output.

I guess you are referring to how you get the stdout/stderr output mixed into parsed test results, as I reported in https://github.com/nvim-neotest/neotest/issues/391?

I've found a pretty good workaround though, implemented in the adapter right now, which circumvents that. Basically, the raw output is parsed (as JSON) and afterwards, the file on disk containing the original output is erased, as if there was no output from go test. This means that there is nothing for Neotest to "leak" into its test output window. I think there are no side-effects of this, actually. But I agree, it would be better to get this fixed upstream.

fredrikaverpil commented 3 weeks ago

@quolpr in case you want to help test what I have in #24, just set the branch for your package manager. 😄

It's not exactly what you asked for, as that branch enables running a folder of tests (although it's more of Go package/s). But your input/feedback would still be valuable, e.g. on how coverage calculation works for you with this.

quolpr commented 3 weeks ago

@fredrikaverpil wow, thank you for all the work, but I decided to write my own plugin, https://github.com/quolpr/quicktest.nvim, because I find Neotest to be too bloated for me 🤔

fredrikaverpil commented 3 weeks ago

Nice work @quolpr! 👍 🎉

No worries, I wanted to look into this anyway, but your input prompted me to look into it sooner rather than later. Feel free to close this issue if you no longer care to track it. I've still got this on my radar for my adapter.

fredrikaverpil commented 3 days ago

Just a quick heads-up: I'm about to merge #92, which fixes the original issue here (coverage messed up). 😄