Open domenic opened 7 years ago
There are some normative statements about default UA rules in CSS specs as well. Typically new stuff that isn't stable enough yet to ask HTML to add to its stylesheet, or stuff that goes into SVG's style sheet rather than HTML's.
The CSS Working Group just discussed testing user agent stylesheets.
<dbaron> dbaron: There are a lot of known issues with what's in the UA level vs. the preshint level
@dbaron HTML's rendering section differentiates UA styles and preshints (presentational hints). I'm not sure they all match implementations, and it's a bit annoying to test, since I believe you need a user stylesheet to tell the difference.
I'd like to ask the WPT/CSS community's opinion on how to write tests for this sort of thing. I have two ideas so far:
1. Reftests. E.g. take a <span> (or maybe better, a <custom-undefined-element>?) and style it according to the UA stylesheet. Then compare it to an unstyled instance of the element-under-test.
2. getComputedStyle() tests. I guess the idea here would be to do a similar setup, then do a deep comparison of the getComputedStyle() results for the manually-styled element vs. the element-under-test.
I am leaning toward the second option here.
Note: I realized that computedStyleMap() tests are better for this purpose than getComputedStyle() tests.
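Either way, the core of such a test is a deep comparison of two property→value records. A minimal sketch of what that helper could look like (the function name is made up; in the actual test the two records would be built from getComputedStyle() or from computedStyleMap() entries serialized to strings, for the manually-styled reference element and the element-under-test):

```javascript
// Deep-compare two records of CSS property -> computed value and
// report every property whose values differ. Plain objects stand in
// for the style objects here so the comparison shape is clear.
function diffComputedStyles(reference, actual) {
  const differing = [];
  const properties = new Set([
    ...Object.keys(reference),
    ...Object.keys(actual),
  ]);
  for (const property of properties) {
    if (reference[property] !== actual[property]) {
      differing.push({
        property,
        expected: reference[property],
        got: actual[property],
      });
    }
  }
  return differing;
}
```

In a testharness.js test, each entry in the returned array could then feed an assert_equals() call so failures name the exact property that diverged.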
I've attempted this several times in the past, but hadn't found a way that would scale usefully. But now I might be on to something. Check this out:
The CSS can be taken verbatim from the spec (or imported to wpt like IDL files are).
Then the @namespace rules need to be changed so that instead of applying to the HTML namespace, they apply to a bogus namespace (urn:not-html).
The test includes the HTML elements we want to test. A script clones those elements into the bogus namespace, so the CSS only applies to the clones, and any CSS property not covered by the rules will have its initial value.
The test could then compare all of the CSS properties that we're interested in comparing.
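A minimal sketch of the namespace step, under the assumptions above (the function name is made up; urn:not-html is the bogus namespace from the steps above). The @namespace rewrite itself is plain string processing; the cloning step, shown in comments, would use createElementNS() in the actual test:

```javascript
// Retarget the spec's @namespace rule so the imported UA-stylesheet
// rules stop matching real HTML elements and instead match clones
// created in a bogus namespace. The clones start from the initial
// value for every property, so only the imported rules style them.
const HTML_NS = "http://www.w3.org/1999/xhtml";
const BOGUS_NS = "urn:not-html";

function retargetNamespace(cssText) {
  // Handle both the url() and the string form of @namespace.
  return cssText
    .replaceAll(`url(${HTML_NS})`, `url(${BOGUS_NS})`)
    .replaceAll(`"${HTML_NS}"`, `"${BOGUS_NS}"`);
}

// In the test itself (sketch):
//   const clone = document.createElementNS(BOGUS_NS, element.localName);
//   document.body.append(clone);
//   // ...then compare the computed styles of element and clone.
```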
Problems:
The HTML spec uses stuff like :matches() that doesn't seem to be supported cross-browser. This could be solved by post-processing the CSS, by changing the spec to not use stuff that isn't cross-browser, or by having browsers implement the missing stuff.

That's a pretty neat trick! To be clear, https://github.com/web-platform-tests/wpt/commit/70b4ec46dff7cd719ede27b1c445d263e8a2d70e only tests a subset of properties, but you'd be proposing testing them all, right? Cf. https://github.com/web-platform-tests/wpt/blob/b8cdc9bf2f140359cc60c30241675461e6a0a71a/std-toast/styles.html#L14-L38 and https://github.com/web-platform-tests/wpt/blob/b8cdc9bf2f140359cc60c30241675461e6a0a71a/std-toast/resources/helpers.js#L73-L85
Post-processing seems reasonable, or just manually fixing the tests up as necessary.
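As one concrete post-processing step: Selectors Level 4 renamed :matches() to :is(), so a naive rewrite of the imported stylesheet could look like this (a sketch only; a robust version would operate on parsed rules rather than raw text, to avoid rewriting occurrences inside strings or comments):

```javascript
// Naively rewrite :matches( to its standardized equivalent :is( so
// the spec stylesheet parses in browsers that never shipped
// :matches(). Plain text substitution; see the caveat above.
function rewriteMatches(cssText) {
  return cssText.replaceAll(":matches(", ":is(");
}
```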
Yes, the test I wrote now is a small subset, but I think the approach can scale to all or almost all of the CSS rules in the HTML Rendering section, and all HTML elements (across multiple test files, to not end up with too many tests in one file).
Inspired by https://github.com/whatwg/html/issues/2561#issuecomment-295783518.
The HTML Standard contains a supposedly-normative user agent stylesheet. I've also seen non-normative appendices in various CSS specs; not sure what that's about.
In theory this should be testable by using getComputedStyle() on various elements.
In practice I know Blink's UA stylesheet doesn't match the spec super-great. I can't imagine others do either (maybe Servo?). In a large part this is because some of the stuff the spec specifies in CSS is implemented in non-CSS ways. (See one attempt to align in https://github.com/whatwg/html/pull/2298; I should probably get back to that...)
Is this an area worth investing in tests for, and trying to align browsers?