sauclovian-g opened this issue 4 months ago
One challenge is that none of SAW's current integration tests check the contents of the error messages themselves—they just check whether the test succeeds or fails via the `saw` subprocess's exit code. Long-term, I think it would be better if most (if not all) tests recorded the SAW output in a corresponding golden file that we update whenever the output of SAW changes, to ensure that we do not accidentally regress SAW's error-message reporting in an important way. That would be a substantial amount of work, however.
In the spirit of incremental progress, perhaps we should just introduce golden tests for individual test cases. That is, make the test case's `test.sh` script redirect its stdout/stderr to a temporary file and compare it to the golden file using `diff`, failing if `diff` reports a difference.
Note that this pattern is abstracted by libraries such as `tasty-golden` and `tasty-sugar`, but using these libraries might require setting up these test cases differently from other integration tests, and it's unclear to me if that's worth the effort at this stage.
Right, I figured that, to start with, I'd have the test script do that. We can probably have all the error-message tests share most of the script material.
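For example (names hypothetical), the shared material could live in a helper that each error-message test's `test.sh` sources:

```sh
# golden.sh -- hypothetical shared helper for error-message tests.

# run_golden SAW_FILE GOLDEN_FILE
# Runs saw on SAW_FILE, captures stdout/stderr, and fails (nonzero exit)
# if the output differs from GOLDEN_FILE.
run_golden() {
  local actual rc
  actual=$(mktemp)
  $SAW "$1" > "$actual" 2>&1 || true
  diff -u "$2" "$actual"
  rc=$?
  rm -f "$actual"
  return "$rc"
}
```

An individual test's `test.sh` would then reduce to a couple of lines:

```sh
source ../golden.sh
run_golden test.saw test.log.good
```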
There should be a test for each known error message. (For things like type errors, there should be multiple cases that trigger them from various contexts and so forth.)
Some stuff to specifically not forget: