Closed iay closed 4 months ago
/cc @philsmart
I think we can simplify this a bit and make it more useful by casting it in terms of a `skip` option which, if true, results in the test being skipped. Here's an example:
```yaml
---
expected:
  # global expected value
override:
  # skip entirely on Z
  - endpoint: Z
    skip: true
  # different results on X and Y
  - endpoint: [ X, Y ]
    expected:
      # expected results on X and Y
```
For each option used by the executor, it walks the `override` array in sequence looking for matches, and within each match looks for the option in question. If the option is not present in any matched override, the global value is taken (or, failing that, an undefined value which is defaulted on a per-option basis).
So the `skip` option will be `true` for endpoint Z, otherwise `false` (that option's default). The `expected` option will be the overridden value on X and Y, otherwise the global value, which defaults to `[]`.
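The resolution walk described above can be sketched in Python. This is an illustrative sketch only, not the project's actual code; the function names (`resolve_option`, `matches`) and the dict-based sidecar layout are assumptions:

```python
# Hypothetical sketch of the option-resolution walk: first matching
# override that carries the option wins, then the global value, then
# a per-option default.

DEFAULTS = {"skip": False, "expected": []}

def matches(override, endpoint):
    """An override matches if its 'endpoint' entry names this endpoint,
    either as a single value or as a list of values."""
    ep = override.get("endpoint")
    return endpoint == ep or (isinstance(ep, list) and endpoint in ep)

def resolve_option(sidecar, endpoint, option):
    """Walk the override array in sequence looking for matches; fall
    back to the global value, then the per-option default."""
    for override in sidecar.get("override", []):
        if matches(override, endpoint) and option in override:
            return override[option]
    if option in sidecar:
        return sidecar[option]
    return DEFAULTS.get(option)

# Example corresponding to the sidecar shown above.
sidecar = {
    "expected": ["global"],
    "override": [
        {"endpoint": "Z", "skip": True},
        {"endpoint": ["X", "Y"], "expected": ["xy"]},
    ],
}

print(resolve_option(sidecar, "Z", "skip"))       # True
print(resolve_option(sidecar, "X", "expected"))   # ["xy"]
print(resolve_option(sidecar, "W", "expected"))   # ["global"]
print(resolve_option(sidecar, "W", "skip"))       # False
```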
This is done. I need to land a change for #79 before it's going to be worth pulling down a new version, as otherwise when you rebuild you'll get failures from that test.
At the moment, every test is executed against every endpoint. However, there are situations where that makes it very difficult to add a test at all, either because it simply doesn't apply to a particular endpoint (e.g., relies on a library which is absent) or would give different results in different contexts.
The latter use case might be best supported through providing result overrides in the sidecar file (see #18).
To address the former use case, I propose adding conditional execution to tests, again by adding options in the sidecar file. I think it would make sense to make this extensible but for now a simple "run only on endpoints X and Y" or "run except on endpoints X and Y" would be enough:
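One possible shape for this in the sidecar file might look like the following. The key names (`only_on`, `except_on`) are purely illustrative, not a settled design:

```yaml
---
# run only on endpoints X and Y
only_on: [ X, Y ]

# ...or, alternatively: run everywhere except endpoints X and Y
except_on: [ X, Y ]
```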
Wrapping the conditions in a `conditions` object, rather than just leaving them "loose" at the top level, would allow for treating the conditions as a proper object, although I don't plan to do that in a first iteration.