The basic process for running tests right now is:
Build Spire.sln in the Debug Win32 configuration (I'll add support for changing this later). This includes the SpireTestTool project.
Run test.bat from the root of the Spire repository. This will enumerate and run the tests, and report a summary of pass/fail results.
The flow of the current test runner is:
Enumerate all .spire files in Tests/FrontEnd (eventually we should add other directories).
For each file (say, foo.spire), execute the Debug/Win32 SpireCompiler.exe on the file, and capture stdout, stderr, and the result code.
Format the result code, stderr, and stdout into a single "actual output" string.
Try to load a foo.spire.expected file into a string, and compare the actual output to the expected output.
If no foo.spire.expected file is found, expect the result code to be zero and both stderr and stdout to be empty.
If the output doesn't match what is expected, write the actual output to foo.spire.actual.
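The flow above can be sketched in Python. This is only an illustration of the logic, not the actual tool (which is C++ code built on CoreLib); the directory layout, compiler path, and the exact layout of the "actual output" string here are assumptions.

```python
import subprocess
from pathlib import Path

def format_actual(result_code, stdout, stderr):
    # Combine the result code, stderr, and stdout into a single
    # "actual output" string. The exact layout is illustrative.
    return (f"result code = {result_code}\n"
            f"standard error = {{\n{stderr}}}\n"
            f"standard output = {{\n{stdout}}}\n")

def check_test(test_path, actual):
    # Compare against foo.spire.expected; on a mismatch, write
    # foo.spire.actual next to it so the user can diff the two.
    expected_path = Path(str(test_path) + ".expected")
    if expected_path.exists():
        expected = expected_path.read_text()
    else:
        # No .expected file: expect a zero result code and
        # empty stderr/stdout.
        expected = format_actual(0, "", "")
    if actual != expected:
        Path(str(test_path) + ".actual").write_text(actual)
        return False
    return True

def run_tests(test_dir="Tests/FrontEnd",
              compiler="Debug/Win32/SpireCompiler.exe"):
    passed = failed = 0
    for test_path in sorted(Path(test_dir).glob("*.spire")):
        proc = subprocess.run([compiler, str(test_path)],
                              capture_output=True, text=True)
        actual = format_actual(proc.returncode, proc.stdout, proc.stderr)
        if check_test(test_path, actual):
            passed += 1
        else:
            failed += 1
    print(f"passed: {passed}, failed: {failed}")
```

The key policy decision is in check_test: a missing .expected file is treated the same as an .expected file containing a clean run (zero result code, no output).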
When a test fails, the user can then manually diff the foo.spire.expected and foo.spire.actual files to diagnose the issue.
This is only a very basic first pass, and needs a lot of tweaking to get the policy right as we add more tests.
In particular, we'll need a plan for how to properly compare expected GLSL/HLSL output without getting bogged down every time we make a formatting change. We also need an eventual plan for end-to-end runnable tests (that produce images).
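One possible direction for the GLSL/HLSL comparison problem (a sketch of one option, not a decided policy) is to normalize whitespace in both strings before comparing, so that formatting-only changes to the emitted code don't invalidate every .expected file:

```python
import re

def normalize_code(text):
    # Collapse every run of whitespace (indentation, blank lines,
    # line breaks) into a single space, so formatting-only changes
    # in the emitted GLSL/HLSL don't cause spurious mismatches.
    return re.sub(r"\s+", " ", text).strip()

def outputs_match(expected, actual):
    return normalize_code(expected) == normalize_code(actual)
```

A more robust version would tokenize the output instead of regex-collapsing whitespace, but even this crude form decouples the tests from pretty-printing decisions.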
Getting to this basic step took longer than expected, simply because I started out writing my own implementation of all the platform-specific code the test runner needed, rather than using CoreLib. I got that version working, but decided it was better to just use CoreLib for now, and then refactor it bit by bit toward what we want long term (which will be easier to do once we have some decent regression tests in place).