ErichDonGubler / moz-webgpu-cts

An extremely fast (but opinionated) tool for working with WPT metadata while developing an implementation of WebGPU in a web browser.

Errors on parsing valid metadata #79

Open sagudev opened 4 months ago

sagudev commented 4 months ago

Example expectation:

[canvas_complex_rgba8unorm_store.https.html]
  expected: FAIL

gives

× found ' ' expected test section header, or indentation at the proper level
   ╭─[/servo/tests/wpt/webgpu/meta/webgpu/webgpu/web_platform/reftests/canvas_complex_rgba8unorm_store.https.html.ini:1:1]
 1 │ [canvas_complex_rgba8unorm_store.https.html]
 2 │   expected: FAIL
   · ─
   ╰────

ErichDonGubler commented 4 months ago

Hi there! Happy to see others using this tool. What version/commit are you reproducing this from?

sagudev commented 4 months ago

I am using https://github.com/sagudev/moz-webgpu-cts/tree/servo (based off main) on servo.

sagudev commented 4 months ago

I think it's happening because the parser doesn't accept FAIL as an expectation for tests, only for subtests. Relevant code: https://github.com/ErichDonGubler/moz-webgpu-cts/blob/a82d347bb86f3dfb801b1d56fb22422f476715f4/moz-webgpu-cts/src/metadata.rs#L978-L984

So we need to add FAIL to TestOutcome.
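
Roughly, the change might look like this (a sketch only; the names and variants here are illustrative, and the real enum in metadata.rs may differ):

// Illustrative only; the actual TestOutcome in metadata.rs may differ.
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub enum TestOutcome {
    Ok,
    Pass,
    Error,
    Timeout,
    Crash,
    Skip,
    Fail, // newly accepted at the test level, matching subtest expectations
}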

sagudev commented 4 months ago

https://github.com/ErichDonGubler/moz-webgpu-cts/pull/80/commits/e90940f21efb0748346c2e87c9d3954417392711 now fixes the metadata parser. The only thing missing is the analyzer support for tests_with_fails.

sagudev commented 4 months ago

Still fails on:

[canvas_composite_alpha_bgra8unorm_opaque_draw.https.html]
  expected: [PASS, FAIL, CRASH]

so we will probably need to handle such cases the same way as subtests.

There's also a failure on disabled keys (since any string is a valid value):

[cts.https.html?q=webgpu:api,operation,adapter,requestDevice:features,known:*]
  disabled: reasons for being disabled
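
Roughly, the distinction might be modeled like this (hypothetical names, building on the TestOutcome sketch above; the real parser in metadata.rs is structured differently):

// Hypothetical sketch: `expected` draws from a closed set of outcomes,
// while `disabled` accepts an arbitrary human-readable reason.
pub enum PropertyValue {
    Expected(Vec<TestOutcome>), // e.g. [PASS, FAIL, CRASH]
    Disabled(String),           // e.g. "reasons for being disabled"
}

pub fn parse_disabled_value(raw: &str) -> PropertyValue {
    // Accept any string; no closed-set validation like `expected` has.
    PropertyValue::Disabled(raw.trim().to_owned())
}
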
jimblandy commented 4 months ago

@sagudev Just FYI, Erich's on vacation until next Wednesday, so you probably won't get a response here until then.

sagudev commented 4 months ago

> @sagudev Just FYI, Erich's on vacation until next Wednesday, so you probably won't get a response here until then.

Thanks for letting me know.

ErichDonGubler commented 4 months ago

Thanks for the assist, @jimblandy! 🙂

ErichDonGubler commented 4 months ago

Sigh, the lack of a good diagnostic is definitely https://github.com/ErichDonGubler/moz-webgpu-cts/issues/30. Sorry about that!

ErichDonGubler commented 4 months ago

@sagudev: RE: analyzer stuff: Does Servo actually use that tool? I had not expected it to be used by anybody but Mozilla. 😅

ErichDonGubler commented 4 months ago

@sagudev: RE: adding FAIL to the test outcome data model: This is an interesting weirdness in WPT. My understanding is that expected outcomes in tests are a strict superset of the ones that can be discovered in subtests (CC @jgraham for clarity). I think it's clear that we need to add it. PRs welcome! I don't think I'm going to be able to prioritize it for a while, but I should have bandwidth for review. The code you linked to (https://github.com/ErichDonGubler/moz-webgpu-cts/commit/e90940f21efb0748346c2e87c9d3954417392711) LGTM (minus a nit or two I'd present in review), so I bet we can start with that.

ErichDonGubler commented 4 months ago

@sagudev: Can you please clarify what you mean by the following (original link)?

> so we will probably need to handle such cases the same way as subtests.

sagudev commented 4 months ago

> Does Servo actually use that tool? I had not expected it to be used by anybody but Mozilla.

Currently no, but I am very pleased with the experiments I did with it (it's so much faster than the ./mach update-wpt flow we have), although I needed wptreports, which we don't normally produce in CI. I am currently investigating some rare CRASHes before I can return to my PR here.

sagudev commented 4 months ago

What I discovered is that all valid values for subtests are also valid for tests that have no subtests. You can take a look at the PR to see how I hacked around this.

jgraham commented 4 months ago

I think that's not quite true; you can't ever have NOTRUN as a top-level test status (for tests without subtests the test is always at least started); see https://searchfox.org/mozilla-central/source/testing/mozbase/mozlog/mozlog/logtypes.py#192-222 for details.
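
One way to encode that constraint is with separate enums, so NOTRUN simply cannot appear at the test level (illustrative only; mozlog's actual status lists are in the logtypes.py linked above):

// Illustrative: separate types make the invalid state unrepresentable.
pub enum TestOutcome {
    Ok, Pass, Fail, Error, Timeout, Crash, Skip,
    // no NotRun variant: a test without subtests is always at least started
}

pub enum SubtestOutcome {
    Pass, Fail, Error, Timeout, NotRun,
}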
