Simple Python-based test harness.
Harnessing via praat itself was extremely unwieldy, so I soon switched to Python for ease of string manipulation, process control, etc.
The harness was tested with Python 2.7 but should be compatible with 2.5+ and 3.x (not tested). Python 3 compatibility will matter because Python 2 will be unsupported after the end of 2019, although Python 2.x will linger, e.g. as the system Python of many OS X versions still in use.
Because of the way praat's "include" directive resolves paths, the harness currently copies the tests into the src/ directory before running them. This is clunky, but most users won't need the tests, so hopefully it's acceptable. Tests already present in src/ are not overwritten, and are copied back from src/ to test/ at the end of the run, so one can edit a test in src/ and run it individually.
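The staging step might look something like this minimal sketch (the test/ and src/ directory names come from the text above; the function names are illustrative, not the harness's actual API):

```python
import os
import shutil

def stage_tests(test_dir="test", src_dir="src"):
    """Copy test scripts into src/ so praat's "include" resolves,
    skipping any file already present there."""
    staged = []
    for name in os.listdir(test_dir):
        if not name.endswith(".praat"):
            continue
        dest = os.path.join(src_dir, name)
        if os.path.exists(dest):
            continue  # don't overwrite a test someone is editing in src/
        shutil.copy(os.path.join(test_dir, name), dest)
        staged.append(name)
    return staged

def copy_tests_back(test_dir="test", src_dir="src"):
    """After the run, copy the tests in src/ back to test/, so edits
    made to a copy in src/ are preserved."""
    for name in os.listdir(test_dir):
        src_path = os.path.join(src_dir, name)
        if name.endswith(".praat") and os.path.exists(src_path):
            shutil.copy(src_path, os.path.join(test_dir, name))
```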
praat --run doesn't produce consistent return codes for success or failure across platforms (it works on Linux but not on Windows; OS X is untested), so instead stdout and stderr are searched for "assertion fails" and "not completed" to decide whether a test passed. Other trigger strings can be added as needed.
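A sketch of that pass/fail check (the two trigger strings are from the text above; the function names and structure are illustrative):

```python
import subprocess

# Strings in praat's output that indicate a failed run; exit codes
# are not reliable across platforms, so we scan for these instead.
FAILURE_STRINGS = ("assertion fails", "not completed")

def output_indicates_failure(text):
    """True if the combined output contains any known failure marker."""
    return any(s in text for s in FAILURE_STRINGS)

def run_test(script_path, praat="praat"):
    """Run one test script with `praat --run` and report pass/fail
    based on the combined output rather than the exit status."""
    proc = subprocess.Popen(
        [praat, "--run", script_path],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    out, err = proc.communicate()
    combined = (out + err).decode("utf-8", "replace")
    return not output_indicates_failure(combined), combined
```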
The harness finds all files matching test/*.praat and assumes they are tests to be run. At the moment there is one test file per .praat file in src/, but this doesn't have to be the case.
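Discovery can be as simple as a glob (a sketch under the layout described above; sorting is added here only for a stable run order):

```python
import glob
import os

def discover_tests(test_dir="test"):
    """Return all .praat test scripts in test/, sorted for stable order."""
    return sorted(glob.glob(os.path.join(test_dir, "*.praat")))
```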
The log file name is passed to each test, which receives it in a single form field, so that anything interesting can be logged. At present test_correct_iseli_z.praat is the only (minimal) functioning test; it illustrates this mechanism.
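The invocation might be sketched as follows; praat --run forwards extra command-line arguments to the script's form fields in order, which is how the test receives the log file name (the helper name is hypothetical):

```python
import subprocess

def run_with_log(script_path, log_path, praat="praat"):
    """Invoke a test script, forwarding the log file name so the
    script can receive it in its single form field."""
    # praat maps trailing command-line arguments onto the script's
    # form fields in order, so the test sees log_path as its field value.
    return subprocess.call([praat, "--run", script_path, log_path])
```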