See the previous issue for this at https://github.com/nod-ai/SHARK-TestSuite/issues/253
Config files for the ONNX operator tests use this schema: https://github.com/iree-org/iree-test-suites/blob/03f10e99d5f80696107038f3e8da8525aa31d50a/onnx_ops/conftest.py#L26-L53
(aside: that schema could be encoded in a file for validation/reference, rather than just included in a comment)
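For illustration, a minimal JSON Schema fragment covering the list fields described below might look like this (a sketch only, not a file that exists in the repository; the real schema likely covers additional fields such as compiler/runtime flags):

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "skip_compile_tests": { "type": "array", "items": { "type": "string" } },
    "skip_run_tests": { "type": "array", "items": { "type": "string" } },
    "expected_compile_failures": { "type": "array", "items": { "type": "string" } },
    "expected_run_failures": { "type": "array", "items": { "type": "string" } }
  }
}
```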
Right now test cases are included in one of these lists or not mentioned at all:

* `skip_compile_tests`
* `skip_run_tests`
* `expected_compile_failures`
* `expected_run_failures`
While this lets us add new tests without needing to update existing files, it doesn't make it clear how many tests are included and which are passing.
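For context, a trimmed sketch of the current format (only the four lists are shown; test names are illustrative):

```json
{
  "skip_compile_tests": [],
  "skip_run_tests": [],
  "expected_compile_failures": ["test_foo"],
  "expected_run_failures": ["test_bar"]
}
```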
Now that test results can be automatically reflected back into config files using https://github.com/iree-org/iree-test-suites/blob/main/onnx_ops/update_config_xfails.py, we could for example:

A) also list passing tests
B) list test statuses directly
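As a rough sketch (the field names `expected_successes` and `tests` are hypothetical, not settled), option A would keep the lists above and add one more for passing tests:

```json
{
  "expected_successes": ["test_add", "test_sub"],
  "expected_compile_failures": ["test_foo"],
  "expected_run_failures": ["test_bar"]
}
```

while option B would collapse the lists into a single map from test name to status:

```json
{
  "tests": {
    "test_add": "pass",
    "test_sub": "pass",
    "test_foo": "expected_compile_failure",
    "test_bar": "expected_run_failure"
  }
}
```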
I like option B, and I've started in a similar direction with https://github.com/iree-org/iree-test-suites/pull/23. That has a single test function per model that runs all stages (import, compile, run). Tests set their expected result using, for example, `@pytest.mark.xfail(raises=IreeRunException)` or `@pytest.mark.xfail(raises=IreeCompileException)`.
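A minimal sketch of what such a test could look like (the exception class names match the ones above; the helper function, its body, and the test/model names are hypothetical, not the actual PR code):

```python
import pytest


class IreeCompileException(RuntimeError):
    """Stand-in for the suite's compile-failure exception."""


class IreeRunException(RuntimeError):
    """Stand-in for the suite's run-failure exception."""


def run_all_stages(model_name: str) -> None:
    """Import, compile, and run one model, raising the matching exception on failure."""
    ...  # a real helper would invoke e.g. iree-import-onnx, iree-compile, iree-run-module


# A model known to fail during compilation is annotated like this:
@pytest.mark.xfail(raises=IreeCompileException)
def test_broken_compile_model():
    run_all_stages("broken_compile_model")


# A model that compiles but fails at runtime:
@pytest.mark.xfail(raises=IreeRunException)
def test_broken_runtime_model():
    run_all_stages("broken_runtime_model")
```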