Open ScottTodd opened 1 month ago
Forking this issue from https://github.com/nod-ai/SHARK-TestSuite/issues/288.
This workflow, https://github.com/iree-org/iree-test-suites/blob/main/.github/workflows/test_onnx_ops.yml, shares duplicated code with https://github.com/iree-org/iree/blob/main/.github/workflows/pkgci_test_onnx.yml.
Most of the duplication is boilerplate and keeping it synced across commits is a bit of a chore. See https://docs.github.com/en/actions/using-workflows/avoiding-duplication for the options available to us.
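Of the options in that doc, "reusable workflows" seems closest to what's sketched below. On the test-suite side that would mean adding a `workflow_call` trigger; this is only a sketch, and the input names, defaults, and the `--config-file` pytest flag are assumptions for illustration:

```yaml
# Hypothetical sketch: a workflow_call trigger in the test-suite repo's
# .github/workflows/test_onnx_ops.yml so downstream repos can reuse it.
on:
  workflow_call:
    inputs:
      pytest_flags:
        type: string
        required: false
        default: ""
      config_file:
        type: string
        required: true

jobs:
  test_onnx:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4.1.7
      # Flag names here are illustrative, not the suite's actual CLI.
      - name: Run onnx op tests
        run: pytest onnx_ops/ ${{ inputs.pytest_flags }} --config-file=${{ inputs.config_file }}
```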
I'm imagining something downstream like
```yaml
jobs:
  test_onnx:
    steps:
      - name: Check out external TestSuite repository
        uses: actions/checkout@v4.1.7
        with:
          repository: nod-ai/SHARK-TestSuite
          ref: v1
          path: SHARK-TestSuite
          submodules: false
          lfs: false
      - name: Run onnx tests
        uses: SHARK-TestSuite/.github/workflows/test_onnx.yml
        with:
          pytest_flags: ...
          config_file: ...
```
or
```yaml
test_onnx:
  uses: iree-org/test-suite@v1  # or nod-ai/iree-test-suite
  with:
    pytest_flags: ...
    config_file: ...
```
where the workflow would include running tests, checking for diffs in the config file and uploading the new file(s), etc.
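The config-diff check and upload could be steps along these lines; the `config/` path and artifact name are hypothetical, and `git diff --exit-code` is just one way to detect that the test run regenerated the files:

```yaml
      # Hypothetical steps: detect regenerated config files and upload them.
      - name: Check for config file diffs
        id: config_diff
        run: |
          # git diff --exit-code returns nonzero when the files changed.
          git diff --exit-code -- config/ || echo "config_changed=true" >> "$GITHUB_OUTPUT"
      - name: Upload updated config files
        if: steps.config_diff.outputs.config_changed == 'true'
        uses: actions/upload-artifact@v4
        with:
          name: updated-config-files
          path: config/
```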
New thoughts since filing that:

For local and CI usage, an entry point script (could be `pytest`) that runs all test suites here would be helpful.
Some things that change from run to run or machine to machine: