ratmice closed this issue 1 year ago.
So, AFAICT `test_runner` isn't stable and still requires the `custom_test_frameworks` feature from nightly.

I think I could probably hack something together, though, by running a proc-macro which does the glob and generates a `#[test]` function for each entry of the glob that wraps a `run_test_path(...)`.

Does that sound worth the effort/complexity?
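For illustration, this is roughly what such a proc-macro might expand to for one glob entry (a self-contained sketch: `run_test_path` is stubbed here, and the file name is hypothetical, not grmtools' actual layout):

```rust
use std::path::Path;

// Stub standing in for the real helper the generated #[test]
// functions would wrap; in the real tests this would run codegen
// on `p` and check the outcome.
fn run_test_path(p: &Path) -> Result<(), String> {
    if p.extension().map(|e| e == "test").unwrap_or(false) {
        Ok(())
    } else {
        Err(format!("not a .test file: {}", p.display()))
    }
}

// What the proc-macro would expand to for a (hypothetical) glob
// entry `cttests/src/foo.test`:
#[test]
fn foo_test() {
    run_test_path(Path::new("cttests/src/foo.test")).unwrap();
}

fn main() {
    // Standalone demonstration of the wrapper logic.
    assert!(run_test_path(Path::new("cttests/src/foo.test")).is_ok());
    assert!(run_test_path(Path::new("cttests/src/foo.y")).is_err());
    println!("ok");
}
```

The `#[test]` item is only compiled under `cargo test`, so each glob entry would show up as its own named test in the output.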
Honestly, it doesn't sound worth the effort to me. Hopefully one day this feature will finally stabilise!
FWIW, I think we can merge this PR unless you want to do more to it?
As long as we're okay with the limitations. I would guess they don't affect bors, which I would imagine does a clean build?
> I would guess they don't affect bors, which I would imagine does a clean build?
It does.
That said:
> Anyhow it currently isn't ideal, i.e. no output on success (see `cargo -vvv` for output), and it doesn't re-run when `cargo test` is run without `build.rs` or `.test` file changes.
Ah, that's because we've put the "do the test" code in `build.rs`, I guess? This PR doesn't make that original sin any worse but, I agree, it would be nice to fix that.
One possibility (for a future PR) might be to move these parts of grmtools' testing to https://crates.io/crates/lang_tester?
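For context on the re-run behaviour: a build script only re-runs when Cargo believes its declared inputs have changed, which it learns from `cargo:rerun-if-changed` lines printed by the script. A minimal sketch, assuming a `cttests/src` directory of `.test` files (the path is an assumption, not necessarily grmtools' real layout):

```rust
// Sketch of a build.rs that re-runs whenever build.rs itself or
// any .test file under cttests/src changes.
use std::fs;

fn main() {
    println!("cargo:rerun-if-changed=build.rs");
    if let Ok(entries) = fs::read_dir("cttests/src") {
        for entry in entries.flatten() {
            let p = entry.path();
            if p.extension().map(|e| e == "test").unwrap_or(false) {
                // Tell Cargo this file is an input to the build script.
                println!("cargo:rerun-if-changed={}", p.display());
            }
        }
    }
}
```

Even with these directives, though, the output-on-success problem remains: test results printed from a build script are only visible with `cargo -vvv`.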
Yeah, the current tests still get some output, though, because they all have additional `#[test]` entries in `cttests/src/lib.rs`. I.e. they expect both that codegen succeeds and, subsequently, that the generated code compiles/the modules load. So those at least get some output. But these new ones, where we expect codegen to fail, can't really follow the current pattern.
> One possibility (for a future PR) might be to move these parts of grmtools' testing to https://crates.io/crates/lang_tester?
I haven't heard of that one; I'll have a look.
OK, so I suggest we merge this PR and then in another PR think about improving the whole testing framework. Sound like a plan?
Sounds good.
bors r+
Build succeeded!
The first patch coalesces the missing token/term for lexer/parser errors, so they will both be emitted. The second patch just moves some code in `cttests/build.rs` into its own function, in prep for adding cttests which result in codegen failure (the third patch).

The third patch may well require some work. It would probably ideally be within a `#[should_panic]` `#[test]`, but I wanted to reuse the ctbuilder helpers within `cttests/build.rs`. Perhaps those could be put into a module shared between `lib.rs` and `build.rs`. I'm not sure (but I assume) that e.g. the `OUT_DIR` env var is available within a `#[test]`. Anyhow it currently isn't ideal, i.e. no output on success (see `cargo -vvv` for output), and it doesn't re-run when `cargo test` is run without `build.rs` or `.test` file changes. I haven't looked into what the situation is with the stabilisation of `#![test_runner]`, but that would probably be ideal in allowing one to treat each glob entry as a test.

Edit: One thing this doesn't capture yet is errors that don't panic but instead return errors from `build()`. Figured those could be handled as we add a test that returns an error.
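The `#[should_panic]` approach mentioned above might look roughly like this (a sketch only: `codegen_for` is a hypothetical stand-in for the ctbuilder helpers, not a real grmtools function):

```rust
// Stand-in for the codegen helpers in cttests/build.rs. Here we
// simply pretend that grammars whose path contains "bad" fail.
fn codegen_for(path: &str) -> Result<(), String> {
    if path.contains("bad") {
        Err(format!("codegen failed for {path}"))
    } else {
        Ok(())
    }
}

// A test that passes only if codegen panics with the expected message.
#[test]
#[should_panic(expected = "codegen failed")]
fn bad_grammar_fails_codegen() {
    codegen_for("cttests/src/bad.test").unwrap();
}

fn main() {
    // Standalone check of the stub's behaviour.
    assert!(codegen_for("cttests/src/bad.test").is_err());
    assert!(codegen_for("cttests/src/good.test").is_ok());
    println!("ok");
}
```

Note that `#[should_panic]` only covers the panicking case; as the edit above says, failures that are returned as `Err` from `build()` would need ordinary `#[test]` functions asserting on the `Result` instead.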