usethesource / rascal

The implementation of the Rascal meta-programming language (including interpreter, type checker, parser generator, compiler and JVM based run-time system)
http://www.rascal-mpl.org

Extend test API #1831

Open linuswagner opened 1 year ago

linuswagner commented 1 year ago

Is your feature request related to a problem? Please describe. When a test written for Rascal fails, it is often unclear why, because a test in Rascal only returns true or false. This makes it hard to determine from the output of a test what the actual value was, and it requires the tester to modify the test with print statements.
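A minimal sketch of the problem (a hypothetical test, not from the thread): when the assertion fails, the runner only reports that the test returned false, not the offending values.

```rascal
// Hypothetical function under test.
int double(int n) = n + n;

// If double/1 were buggy, this test would just report "false";
// to see the actual value one has to add print statements by hand.
test bool doubleOfThree() = double(3) == 6;
```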

Describe the solution you'd like Extend the Rascal Test API to allow for methods like the ones found in JUnit. The test log should then show the expected and actual value to allow for easier analysis of the test.
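One possible shape for such an API (a sketch only; `assertEqual` is a hypothetical name and does not exist in the Rascal standard library):

```rascal
int double(int n) = n + n;

// Hypothetical assertion helper, modelled on JUnit's assertEquals.
void assertEqual(value expected, value actual) {
  if (expected != actual) {
    throw "expected <expected> but got <actual>";
  }
}

test bool doubleOfThree() {
  // On failure, the thrown message shows both the expected and actual value.
  assertEqual(6, double(3));
  return true;
}
```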

linuswagner commented 1 year ago

Possible workaround: use exceptions or assert instead of booleans to make the test fail
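The workaround can be sketched like this (illustrative code, assuming the same hypothetical `double` function as above):

```rascal
int double(int n) = n + n;

// Using assert: on failure, the message (with interpolated values)
// is reported instead of a bare "false".
test bool doubleOfThreeWithAssert() {
  int actual = double(3);
  assert actual == 6 : "expected 6 but got <actual>";
  return true;
}
```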

jurgenvinju commented 1 year ago

This is an excellent idea, and a duplicate of the ancient #1010.

jurgenvinju commented 1 year ago

I think the compiler can just interpret the existing boolean operators differently. There is no need for an API like JUnit's `assertEquals` or something. If I write `test bool f(int a, int b) = a + b == b + a`, it is already clear that I am asserting that both sides of the `==` are equal.

linuswagner commented 1 year ago

I think the compiler can just interpret the existing boolean operators differently.

Yes, for equality that works perfectly. But depending on how complex the tests get, we are still in trouble.
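An illustrative case (a hypothetical test, sketched here): when the top-level operator is not a plain equality, there is no obvious "expected" and "actual" side to report.

```rascal
import List;

// The top-level operator is a conjunction, so even if the compiler
// reports both sides of an ==, it is unclear which conjunct failed
// and for which values.
test bool sortPreservesSizeAndOrders(list[int] l) {
  list[int] s = sort(l);
  return size(s) == size(l)
      && all(int i <- [0 .. size(s) - 1], s[i] <= s[i + 1]);
}
```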

jurgenvinju commented 1 year ago

There are limits to what we can do, indeed. However, if we bring it up to the level of JUnit asserts (equals, not-equals, booleans) with proper diffs, we are already much richer than what we have now (just `false`).

Also, input minimisation (random, brute-force, or heuristic) would help a lot.
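Input minimisation could be sketched along these lines (a hypothetical helper, not an existing API): given a failing input, repeatedly try smaller inputs that still make the test fail.

```rascal
// Hypothetical shrinker for int inputs: given a failing input n and the
// test as a function value, walk towards 0 while the test keeps failing.
int shrinkInt(int n, bool (int) prop) {
  int smallest = n;
  // prop returns false on failing inputs; keep halving while it still fails.
  while (smallest != 0 && !prop(smallest / 2)) {
    smallest = smallest / 2;
  }
  return smallest;
}
```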

jurgenvinju commented 1 year ago

The next step would be to bring in a theory of pattern matching and parsing; i.e., if a match fails, we could explain why, or produce a smaller test case that also fails for the same reason.