Open sagudev opened 5 months ago
I have yet to really go through each of the points in the OP, but: Partial runs are actually the motivation behind https://github.com/ErichDonGubler/moz-webgpu-cts/issues/25.
Thinking things through more, @sagudev, you might be interested in the workflow that we're undertaking at Mozilla. We recently added the `implementation-status: backlog` property to all test cases (see the upstream docs on metadata), and have incrementally removed it as we've determined tests are worth blocking on in CI. We now use both tier 2 and tier 3 of Firefox CI.
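For reference, `implementation-status` is set per-test in WPT metadata files. A minimal sketch (the test path here is hypothetical, not one of our actual entries):

```
[cts.https.html?q=webgpu:api,operation,basic:*]
  implementation-status: backlog
```

Removing that line is what moves a test back into the set that CI blocks on.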
I'm currently calling the migration of a test from a less stable tier to a more stable tier (by removing `implementation-status: backlog`) a "promotion". We're experimenting with promoting tests according to heuristics like "promote permanently `PASS`ing tests" (already implemented) and "promote tests that aren't observed to `FAIL` or `CRASH`" (to be implemented in #109), with more to come. The workflow then becomes:
1. Collect `wptreport.json` files from CI runs.
2. Run `update-expected` as desired for `backlog`ged tests, and create a commit.
3. Run `update-backlog` to tentatively promote tests. Commit and submit to CI as an experiment.

We consider using `implementation-status: backlog` to be valuable because:
- `wptrunner` has CLI support for filtering tests by implementation status, so the lift to adjust our CI was light.
- It complements `expected`; sometimes we want to model that a test should pass, but not block CI on it yet. This came up recently with bug 1897131.
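The workflow above can be sketched as shell commands. Caveat: the `update-expected` and `update-backlog` subcommand names come from this discussion, but the exact arguments, paths, and commit messages below are hypothetical, not the tool's confirmed CLI; `--skip-implementation-status` and `--log-wptreport` are the wptrunner flags I believe apply here.

```shell
# Sketch only; arguments and paths are illustrative assumptions.

# 1. Run the stable CI tier without backlogged tests, collecting a report.
./wpt run --skip-implementation-status backlog \
    --log-wptreport report.json firefox

# 2. Update expectations for backlogged tests from the collected report,
#    then commit the metadata changes.
moz-webgpu-cts update-expected report.json
git commit -am "Update expectations for backlogged tests"

# 3. Tentatively promote tests, then commit and submit to CI as an experiment.
moz-webgpu-cts update-backlog
git commit -am "Promote tests out of backlog"
```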
Basic support for servo has landed in #92, but there are still some things we need before servo can move off #80:
Also, when developing new webgpu features in servo, I found it useful to have https://github.com/ErichDonGubler/moz-webgpu-cts/pull/80/commits/773f118522461d45bfd8dd5183fd8a3712c8965e to only set new good expectations, or https://github.com/ErichDonGubler/moz-webgpu-cts/pull/80/commits/039200b6100133c7fadab40cfef04771960b4a9f to set only those that are reported (if we do a partial run of the WPT tests).