marcelbeumer closed this issue 1 year ago
Thinking about this, it might make more sense as a separate consumer, as the flows would likely be quite different. VS Code, for example, appends new output to the existing window, which I think makes the most sense here: output disappearing on reruns would be annoying if the whole point is to be persistent.
So I'm thinking something like an output_panel consumer which, when a test runs, will append the output to the window that it maintains. If a group of tests is run, the output of the top-level position (e.g. directory) will be put into the panel. This will work for most cases but may have issues with adapters that provide different outputs for each position, such as neotest-plenary, which has a separate output for each file. I'm not sure of the best way to handle this: just go through all the results and append all of them? Open to suggestions.
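For illustration, an appending consumer along those lines could be sketched roughly as follows in Lua. This is only a sketch, not the actual implementation: the consumer signature, the `client.listeners.results` listener, and the per-result `output` field (a path to an output file) are assumptions about how neotest consumers are typically wired up.

```lua
-- Hypothetical sketch of an appending output consumer (names are illustrative).
local panel_buf

local function get_buf()
  -- Reuse one scratch buffer as the persistent panel.
  if not (panel_buf and vim.api.nvim_buf_is_valid(panel_buf)) then
    panel_buf = vim.api.nvim_create_buf(false, true)
  end
  return panel_buf
end

return function(client)
  client.listeners.results = function(_, results)
    local buf = get_buf()
    -- Append every result's output; this is the "just append all of them"
    -- option discussed above for adapters with per-file outputs.
    for _, result in pairs(results) do
      if result.output then
        vim.api.nvim_buf_set_lines(buf, -1, -1, false, vim.fn.readfile(result.output))
      end
    end
  end
end
```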
I think that just appending would be a great start!
I've got this in a PR; I would appreciate it if people could check it out and let me know of any issues: https://github.com/nvim-neotest/neotest/pull/153. I'll leave it there for a few days to test, but it should be good to go :smile:
@rcarriga I just tried it out, it works well! 💪🏻 I have a couple of suggestions for future improvements:

Thanks for testing!
scroll to bottom on open (maybe only the first time after tests are executed)
This can be done by using the following:
vim.api.nvim_create_autocmd("FileType", {
  pattern = "neotest-output-panel",
  callback = function()
    vim.cmd("norm G")
  end,
})
append output as tests are running, so that the on_attach function doesn't have to be called
Unfortunately that won't work because multiple processes can be running at once (either multiple runs by the user or multiple processes spawned internally due to how adapters can work) and so the output would be mixed. Adapters can also transform the output from machine readable formats such as JSON to human readable text so we can't just show the raw output from test processes.
I've found no issues so it's been merged :smile:
Perhaps I am using the feature wrong, but this is what I found using neotest-go.
The test file:
package main

import (
	"testing"
)

func TestFoo(t *testing.T) {
	t.Error("Foo is not working")
	t.Fail()
}

func TestBar(t *testing.T) {
	t.Error("Bar is not working")
	t.Fail()
}
Running the entire suite gives both the "raw" JSON output which the test adapter uses and the parsed output. These seem to come in random order:
{"Time":"2022-11-23T09:19:21.869175+01:00","Action":"run","Package":"neotest-test","Test":"TestFoo"}
{"Time":"2022-11-23T09:19:21.869446+01:00","Action":"output","Package":"neotest-test","Test":"TestFoo","Output":"=== RUN TestFoo\n"}
{"Time":"2022-11-23T09:19:21.869463+01:00","Action":"output","Package":"neotest-test","Test":"TestFoo","Output":" foobar_test.go:8: Foo is not working\n"}
{"Time":"2022-11-23T09:19:21.869482+01:00","Action":"output","Package":"neotest-test","Test":"TestFoo","Output":"--- FAIL: TestFoo (0.00s)\n"}
{"Time":"2022-11-23T09:19:21.869491+01:00","Action":"fail","Package":"neotest-test","Test":"TestFoo","Elapsed":0}
{"Time":"2022-11-23T09:19:21.869509+01:00","Action":"run","Package":"neotest-test","Test":"TestBar"}
{"Time":"2022-11-23T09:19:21.869513+01:00","Action":"output","Package":"neotest-test","Test":"TestBar","Output":"=== RUN TestBar\n"}
{"Time":"2022-11-23T09:19:21.869516+01:00","Action":"output","Package":"neotest-test","Test":"TestBar","Output":" foobar_test.go:13: Bar is not working\n"}
{"Time":"2022-11-23T09:19:21.86952+01:00","Action":"output","Package":"neotest-test","Test":"TestBar","Output":"--- FAIL: TestBar (0.00s)\n"}
{"Time":"2022-11-23T09:19:21.869887+01:00","Action":"fail","Package":"neotest-test","Test":"TestBar","Elapsed":0}
{"Time":"2022-11-23T09:19:21.869904+01:00","Action":"output","Package":"neotest-test","Output":"FAIL\n"}
{"Time":"2022-11-23T09:19:21.869921+01:00","Action":"output","Package":"neotest-test","Output":"coverage: 0.0% of statements\n"}
{"Time":"2022-11-23T09:19:21.870142+01:00","Action":"output","Package":"neotest-test","Output":"FAIL\tneotest-test\t0.131s\n"}
{"Time":"2022-11-23T09:19:21.870172+01:00","Action":"fail","Package":"neotest-test","Elapsed":0.132}
=== RUN TestFoo
foobar_test.go:8: Foo is not working
--- FAIL: TestFoo (0.00s)
=== RUN TestBar
foobar_test.go:13: Bar is not working
--- FAIL: TestBar (0.00s)
Running a single test does not include the raw JSON output, but the indentation issue is still there.
How the output looks seems to depend on some Neovim settings such as expandtab and shiftwidth.
However, no matter what settings I use, I cannot seem to make the output look correct.
Running the test multiple times just continues the indentation issue.
=== RUN TestBar
foobar_test.go:13: Bar is not working
--- FAIL: TestBar (0.00s)
=== RUN TestBar
foobar_test.go:13: Bar is not working
--- FAIL: TestBar (0.00s)
=== RUN TestBar
foobar_test.go:13: Bar is not working
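Before the upstream fix mentioned below, one display-side workaround was conceivable: since the drift looks tied to how tabs in the raw output are rendered, buffer-local settings for the panel's filetype could pin the rendering. A sketch (the option values are guesses, not a confirmed fix):

```lua
-- Speculative workaround: normalize tab rendering in the panel buffer
-- instead of inheriting the user's global expandtab/shiftwidth settings.
vim.api.nvim_create_autocmd("FileType", {
  pattern = "neotest-output-panel",
  callback = function()
    vim.opt_local.expandtab = false
    vim.opt_local.tabstop = 8
    vim.opt_local.shiftwidth = 8
  end,
})
```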
@akarl Both of these issues should now be fixed in latest master :smile:
Closing this as the feature request has been fulfilled, anything else can be a new issue
First of all, thanks for working on neotest! I'm looking for the right workflow(s), also peeking at other established editors.
I think it would be helpful if the output consumer could, optionally, open_output in its own fixed and separate window at the bottom, just like the summary has its own fixed panel on the right (positions configurable + possible to manipulate like any other window). Together with open_on_run, you would get a classic IDE experience. What do you think?
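Something close to that workflow can be approximated today by wrapping the run command so it also opens the panel. A minimal sketch, assuming the output_panel consumer merged in this thread exposes an `open()` function (if only `toggle()` exists, substitute accordingly):

```lua
local neotest = require("neotest")

-- Run the nearest test and make sure the output panel is visible,
-- loosely imitating an IDE-style bottom panel on every run.
vim.keymap.set("n", "<leader>tr", function()
  neotest.run.run()
  neotest.output_panel.open()
end)
```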