The critical missing functionality here is a convention for describing tests in terms of an XML file containing metadata to be validated, along with the expected results. If we have this, it will be possible for any member of the team to write tests and verify that they are valid under all the validation environments.
Here's a straw man for such a convention and implementation. It might shake out slightly differently when I implement it, but it shouldn't be too far from this. Ruby turns out to be a pretty close match for this kind of problem, so I don't think it will be a lot of work.
When I say "option" here I mean that you could make an argument for either approach, not that we should implement both.
Every .xml file under tests/xml/ constitutes a test. The additional xml/ prefix here is to allow for the possibility of adding other kinds of test later.
Each test is named by its relative path under tests/ (option: under tests/xml/), without the .xml extension. Example: xml/entityID/bad_id_localhost
There is no significance to the arrangement of directories. The expectation would be to divide things up by subject area, but we could also use issue numbers when appropriate. Any depth of nesting should be possible.
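As a rough sketch of how discovery and naming might work (a minimal sketch, assuming the driver runs from the repository root; discover_tests is an illustrative name, not a committed API):

```ruby
require 'pathname'

# Sketch: derive test names from the files under tests/xml/.
# Each name is the path relative to tests/, minus the .xml extension.
def discover_tests(root = 'tests')
  Dir.glob(File.join(root, 'xml', '**', '*.xml')).map do |path|
    Pathname.new(path).relative_path_from(Pathname.new(root)).sub_ext('').to_s
  end
end

# e.g. ["xml/entityID/bad_id_localhost", ...]
```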
If there is a .yaml file associated with the .xml file, it contains test options. For example, tests/xml/entityID/bad_id_localhost.yaml.
If there is no .yaml file, the test is expected to succeed.
If the .yaml file contains a key expected (option: expect), then its value is an array of expected statuses. Note that this implies ordering.
Each status is encoded as an object with name, status, and message keys. (Check the protocol identifiers for these keys.)
If there's no expected key, that's equivalent to expected: []; the test should therefore succeed. Same as omitting the .yaml file.
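For illustration, a hypothetical tests/xml/entityID/bad_id_localhost.yaml asserting one expected status might look like this (the status values are invented, and the name key follows the straw man; the implemented version reports component_id instead, per the differences noted below):

```yaml
expected:
  - name: entity_id
    status: error
    message: entityID must not point at localhost
```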
If the .yaml file contains a key validators then it provides a list of validators to run the test against:
If validators is an array, then each element is a validator name.
If validators is a string, that's equivalent to a single-element array containing that string.
If validators is absent, or there's no .yaml file, that's equivalent to [default]: just run the test against the default validator.
In other words, an absent YAML file is equivalent to:

```yaml
validators: default
expected: []
```
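Reading a test's options with those defaults could look something like this (a sketch; the exact key handling may differ in the implementation):

```ruby
require 'yaml'

# Sketch: load a test's options, applying the defaults described above.
# A missing file behaves exactly like an empty one.
def test_options(yaml_path)
  opts = File.exist?(yaml_path) ? YAML.safe_load(File.read(yaml_path)) || {} : {}
  {
    'validators' => Array(opts.fetch('validators', 'default')),
    'expected'   => opts.fetch('expected', []),
  }
end
```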
A test passes if the expected results match the actual results (see the sketch after this list):
The arrays are the same length.
Corresponding array elements have matching components.
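A minimal sketch of that comparison, assuming each status on both sides is a hash with the same (string) keys:

```ruby
# Sketch: a test passes when expected and actual line up pairwise, in order.
# Extra keys on the actual side are tolerated; every expected component
# must be present and equal.
def results_match?(expected, actual)
  expected.length == actual.length &&
    expected.zip(actual).all? do |exp, act|
      exp.all? { |key, value| act[key] == value }
    end
end
```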
If a test passes, there's no output unless we're in a verbose mode. If we have a verbose mode, the "pass" information would probably want to include the test name, validator name and endpoint used.
If a test fails, we output:
Expected results (in YAML),
Actual results (in YAML),
Test name, validator name and endpoint used.
This allows us to add a new test and generate the YAML we need to get it to pass, without doing that manually.
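As a sketch, the failure report could be little more than the following (helper and variable names are illustrative):

```ruby
require 'yaml'

# Sketch: on failure, dump everything needed to diagnose the test —
# or to paste into a new .yaml file to make the test pass.
def report_failure(test_name, validator, endpoint, expected, actual)
  puts "FAIL: #{test_name} (validator: #{validator}, endpoint: #{endpoint})"
  puts({ 'expected' => expected }.to_yaml)
  puts({ 'actual' => actual }.to_yaml)
end
```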
We need to be able to run the test driver on all tests (the default) or on a given test by name (as above: the hierarchical path to the .xml file, relative to tests/xml/ and without the .xml extension).
The XML test driver runs every available test (or just the single named test) against all four validator endpoints, for each of the validators specified in the individual test's options.
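Putting the pieces together, the driver loop would then be roughly as follows, reusing the helpers sketched above (all names here are placeholders; run_test stands in for whatever actually submits metadata to an endpoint):

```ruby
# Stand-ins for the four real validator endpoints.
ENDPOINTS = %w[endpoint1 endpoint2 endpoint3 endpoint4]

# Sketch: run each test against every endpoint, for each validator
# named in the test's options.
def run_all(test_names)
  test_names.each do |name|
    opts = test_options(File.join('tests', "#{name}.yaml"))
    ENDPOINTS.each do |endpoint|
      opts['validators'].each do |validator|
        actual = run_test(name, validator, endpoint) # hypothetical
        unless results_match?(opts['expected'], actual)
          report_failure(name, validator, endpoint, opts['expected'], actual)
        end
      end
    end
  end
end
```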
A variation of this has now been implemented. We can create new issues for subsequent work.
Differences:
Statuses turn out to have component_id instead of name.
All tests show the combination being run. There's no way at present to silence this. I think this will probably need to change when we have a real inventory of tests. We probably need quiet and verbose modes as well as a relatively compact normal mode.