Closed napalu closed 7 years ago
Looking at the code, currently `tests/testament/tester.nim` is only testing the compiler; none of the tests are actually being run. The action passed in `testspec` is `compile`, and the command template which is actually called is defined in `specs.nim` as:

```nim
const
  cmdTemplate* = r"compiler" / "nim $target --lib:lib --hints:on -d:testing $options $file"
```
What are the expectations? Should tester actually compile + run all files, or is compiling only by design? I would personally expect tester to run the tests, which it actually would if it were passed the action `run` or `runNoSpec`, but then the tests would need to be revised because we'd then get output like:
A cheap fix would be to just change the default command template in `tests/testament/specs.nim` to:

```nim
const
  cmdTemplate* = r"compiler" / "nim $target --lib:lib --hints:on -d:testing $options -r $file"
```
This would actually result in the desired behaviour of running the tests, but there may well be cases where we want tests to be compile-only rather than compile and run. There are other options, but I'd like to get some input before I start looking at them.
Feedback?
Then it works as expected and the test needs to be fixed. The default "only checks that it compiles" is correct. Use an `output` spec to run the test and check its output.
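A minimal sketch of what an `output` spec looks like (assuming the spec syntax parsed by `tests/testament/specs.nim`; the exact keys may differ): the test file carries the spec in its leading `discard` block, which makes the tester compile *and* run the binary and compare its stdout:

```nim
discard """
  output: "hello"
"""
# With an `output` spec the tester runs the compiled binary and
# checks that it prints exactly the expected text.
echo "hello"
```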
Changed the tnre test suite as suggested.
@dom96 Maybe this is the issue you are referring to in #5967? That's exactly the issue I encountered there as well, so it's probably good to discuss here.
My suggestion would be: change the default from `action: compile` to compile + run, and make `output` matching optional, i.e., only check it if it is specified (e.g. by making `spec.outp` nillable, or by having a `spec.outpCheckRequired` flag). This makes the maintenance of test suites like this much easier, because we don't have to touch the `output` on every added suite -- what really matters for them is the exit code.

The last point is the only critical one, because I don't know if there are many tests which currently have `action: run` without specifying `output` and rely on the check of an empty output. But I would assume there aren't many, because most tests I have seen specify `output` anyway.
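The optional-output idea could be sketched roughly like this (a hypothetical sketch only; `TSpec`, `outp`, and `outputMatches` are illustrative names, not the actual tester internals):

```nim
import options

type
  TSpec = object
    outp: Option[string]  # nillable output expectation, as suggested above

proc outputMatches(spec: TSpec, actual: string): bool =
  ## Only compare output when the spec actually provided one;
  ## otherwise the tester would rely on the exit code alone.
  if spec.outp.isNone:
    true
  else:
    spec.outp.get() == actual
```

With something like this, a test without an `output` field would pass or fail purely on its exit code.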
If @Araq likes the idea I could work on that.
Ugh, no. The tester is fine as it is. Tests that use `unittest` are confused anyway, IMO. We don't need "unittests" inside "tests"; these are integration tests. How can you "unit test" for a decent compiler error message anyway?
Just a quick check (only in stdlib) for tests which are currently not run, but look like they should be:

- `stdlib/tjsontestsuite.nim`
- `stdlib/thttpcore.nim`
- `stdlib/tnativesockets.nim`
- `stdlib/thashes.nim`
- `stdlib/tnet.nim`
- `stdlib/trstgen.nim`
For a test case based on `unittest` it is sufficient to check the process exit code.
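For example, a `unittest`-based stdlib test could carry a spec like the following (a sketch, assuming testament's `action` and `exitcode` spec keys): `unittest` makes the process exit non-zero on failure, so no `output` matching is needed:

```nim
discard """
  action: "run"
  exitcode: 0
"""
import unittest, hashes

suite "hashes":
  test "equal input gives equal hash":
    # unittest sets a non-zero exit code if this check fails,
    # which is all the tester needs to verify.
    check hash("abc") == hash("abc")
```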
I think for things like the standard library it would not hurt if checks go beyond "it compiles" to check actual runtime behavior. They are integration tests from the point of view of the compiler, but for the standard library they are valuable unit tests.
Maybe this is the issue you are referring to in #5967?
Yep. "compile + run" by default sounds like a good solution.
Also, if you're willing to fix this, any chance you could also write some documentation explaining the test suite and the different options supported (including `action: compile`)? It's an annoying guessing game right now.
The problem with changing the default behaviour is that many tests will fail: although some seem to be set up to compile + run, they've only ever been compiled, so errors (introduced by changes or whatever) have not been caught. It would require checking all current tests and possibly changing many of them.
But I agree that compile + run as default would make sense ;)
I had already enabled "compile + run" for a quick check, and there weren't too many tests that failed, which is why I thought it's doable. But without @Araq's approval I'd better hold my horses :-). I think his main concern is about `unittest` itself, and that it makes debugging compiler bugs more difficult. I can't really judge that, but I would encourage testing runtime behavior in some way to ensure the quality of the standard library a bit more. Writing runtime tests without `unittest`, purely on the basis of `spec.outp` matching, feels a bit cumbersome, e.g. like in this example.
@dom96 There is some sparse documentation here: https://github.com/nim-lang/Nim/blob/devel/doc/contributing.rst#compiler
Related to #5469 - `tester` passes the test although running the individual suite fails.