Closed: jstensland closed this issue 1 week ago
Good suggestion, and it sounds like it could be quite straightforward to fix. My guess is we could set the status to "skipped" (and warn, like you mention) if the "no tests to run" message appears in the output for that test execution. I'll look into this during the week.
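If it helps to make the idea concrete, the check could be as small as scanning the captured `go test` output for Go's "no tests to run" marker. A minimal Go sketch of that classification; the status names and the `classify` helper are illustrative, not neotest-golang's actual API:

```go
package main

import (
	"fmt"
	"strings"
)

// Status values mirroring what a test adapter might report.
// These names are hypothetical, not neotest-golang's real constants.
const (
	statusPassed  = "passed"
	statusSkipped = "skipped"
)

// classify inspects the raw `go test` output of one execution and
// downgrades a "pass" to "skipped" when Go reports that the -run
// pattern matched nothing ("no tests to run").
func classify(output string) string {
	if strings.Contains(output, "no tests to run") {
		return statusSkipped
	}
	return statusPassed
}

func main() {
	fmt.Println(classify("ok  \texample.com/pkg\t0.002s [no tests to run]"))
	fmt.Println(classify("ok  \texample.com/pkg\t0.002s"))
}
```

The warning could then be emitted at the same point the skipped status is set, so the user sees why nothing actually ran.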
@jstensland if you wish, you can give #42 a try by configuring neotest-golang to use the PR's branch:
```lua
return {
  {
    "nvim-neotest/neotest",
    dependencies = {
      "nvim-neotest/nvim-nio",
      "nvim-lua/plenary.nvim",
      "antoinemadec/FixCursorHold.nvim",
      "nvim-treesitter/nvim-treesitter",
      {
        "fredrikaverpil/neotest-golang", -- Installation
        branch = "skip-tests-not-found",
      },
    },
    config = function()
      require("neotest").setup({
        adapters = {
          require("neotest-golang"), -- Registration
        },
      })
    end,
  },
}
```
Not sure if it's just my setup, but that config looked for `skip-tests-not-found` in the `nvim-neotest/neotest` repo.
I was able to point at your branch though and it works well. Thanks for the quick work!
Oof, my bad, I edited the post above. I misplaced the `branch` directive! 🤦
Thanks for the edit, as that's simpler than what I did. I'm definitely still learning Lua, lazy.nvim, etc. I'm still surprised every time you can just swap out the type of a config value between string, table, or function. 🤷
Hehe, yeah, same here. It took me some time to wrap my head around the fact that more or less everything in Lua is a (potentially nested) table, and it's incredibly easy to make mistakes.
When I trigger an individual test in a testify suite, it is marked as passing even if no test was run. The run spec produces a command whose pattern doesn't match the test, due to it being in a testify suite and needing the form `^SuiteName/TestName$`. I know that's not supported yet, but if you try to run an individual test and no tests are run, it seems like it should fail and warn, not pass silently.
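For context on why the anchored method name alone never matches: `go test -run` splits both the pattern and the test name on `/` and matches segment by segment, and a testify suite method runs as a subtest of the suite's entry-point test, so its full name looks like `TestExampleSuite/TestSomething`. A rough Go sketch of that segment matching; `matchRun` and the test names are illustrative, and this simplifies what `go test` actually does:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// matchRun approximates how `go test -run` matches a (sub)test name:
// the pattern and the name are split on "/", and each pattern segment
// is applied as a regexp to the corresponding name segment.
// Simplified sketch, not the real go test implementation.
func matchRun(pattern, name string) bool {
	pSegs := strings.Split(pattern, "/")
	nSegs := strings.Split(name, "/")
	if len(pSegs) > len(nSegs) {
		return false
	}
	for i, p := range pSegs {
		if !regexp.MustCompile(p).MatchString(nSegs[i]) {
			return false
		}
	}
	return true
}

func main() {
	// Anchored method name alone: first segment is compared against
	// the suite entry point "TestExampleSuite", so it can't match.
	fmt.Println(matchRun("^TestSomething$", "TestExampleSuite/TestSomething"))
	// Suite-qualified pattern: each segment matches its counterpart.
	fmt.Println(matchRun("^TestExampleSuite$/^TestSomething$", "TestExampleSuite/TestSomething"))
}
```

This is why the run spec would need to emit a `^SuiteName/TestName$`-shaped pattern for tests that live inside a testify suite.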