LeaVerou opened this issue 5 months ago
A few comments on the specifics above:

- The constraints around `calc()` or `calc-size()` aren't specific to implementations; they're about the logic required by existing specifications (whether written, like flex and grid and multicol, or not-really-written, like tables) having branches on value types, and web content depending on that logic.
- `:focus-within` and `:target-within` are substantively different from `:has(:focus)` and `:has(:target)` in that they have different (and probably more desirable) behavior when crossing shadow DOM boundaries. (I'm also still not convinced `:has()` was a good idea; I don't think the performance concerns about it were "solved", many of them were instead ignored.)
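For illustration, a minimal sketch of that boundary difference, based on my reading of the selectors and scoping specs, and assuming a hypothetical `x-widget` custom element whose shadow tree contains the focused `<input>`:

```css
/* Assume this (hypothetical) structure, with the <input> focused:
   <x-widget>
     #shadow-root
       <input>
   </x-widget>
*/

/* Matches: :focus-within is defined over flat-tree descendants,
   so focus inside the shadow tree still counts. */
x-widget:focus-within { outline: 2px solid blue; }

/* Does not match via the input: :has()'s argument is matched
   against the host's own (light) tree, which cannot see into
   the shadow tree. */
x-widget:has(:focus) { outline: 2px solid red; }
```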
Also a few more general thoughts:
I think teachability of performance characteristics matters. While engines can and sometimes do optimize things, features of the platform do have real performance characteristics -- they require work that takes time. I think designing a platform on the assumption that authors never need to understand or be aware of its performance characteristics is a bad idea. Such a design is likely to lead to slower pages for end users.
I also think that exposing things in a way that reflects how the underlying platform works is (in general, certainly not in every specific case) a good idea. I don't think we should treat browser engines as deep magic that only a small group of people are able to understand. I think it helps with understanding how things are going to work when they're put together, understanding whether things can be put together, and understanding what new features can be made to fit in to the existing system.
And even more generally, I think the issues discussed here are to some degree specific to things that are not programming languages (for, I suppose, a very specific definition of a programming language that I'm confident is not universally agreed on). In a programming language, there is generally a concept of code execution where a piece of code is executed at a clearly defined time or times. Composition of pieces of code (such as variables or function calls) is managed by the author of the code, subject to well-defined constraints such as those of a type system. Using the result of one function call as the input for another is allowed, and the author of the code is responsible for the performance characteristics of any such use when iteration or recursion are involved. That's not how CSS works; the values specified in CSS are used in very complex ways that are defined across many specifications and that already need to handle many property and value interactions. A CSS declaration is not a piece of code that is executed at a specific time.
I think there are arguments on both sides of the tradeoff of exposing how things work underneath (and helping users of CSS understand it) versus exposing things in a simplified way that covers up what happens underneath but is consistent with other features. How we should make that decision varies case by case, depending on things like how permanent the underlying characteristics are and how useful it is to expose them. I don't think it makes sense to document a principle to always fall on one side of this tradeoff. (I think perhaps you could even construe the extensible web manifesto as calling for always falling on the other side. I don't think that's the right answer either, though I think it makes good arguments that we should bias in favor of exposing primitives when it's not problematic to do so.)
I strongly disagree with this as a principle. I think every single example you've given here, while sometimes showing slightly awkward trade-offs, was ultimately decided correctly, and trying to instead reuse existing syntax in the ways you're suggesting would have been a very bad move. In general I agree with @dbaron here, in that this is something that needs to be decided on a case-by-case basis, and actually should usually lean in the opposite direction by default.
This most recently came up in https://github.com/w3ctag/design-reviews/issues/955 but it’s something that comes up a lot, especially in the CSS WG.
There is a widespread belief that when introducing a new feature, if you cannot support the full syntax for an existing concept, it's better to introduce new syntax that makes the restrictions clear, rather than reuse existing syntax, some forms of which would simply be invalid.
Examples:
- `calc-size()`: a function to allow intrinsic size keywords (`auto`, `fit-content`, etc.) to be used in calculations. The reasoning for not doing it in `calc()` is that implementations cannot support more than one distinct keyword, as they follow different code paths depending on the keyword used.
- `:has()`: we realized we could do limited forms of it for specific pseudo-classes like `:focus` or `:target`. Instead of supporting `:has(:focus)` and `:has(:target)`, the WG opted for a completely separate syntax: `:focus-within` and `:target-within`. We were able to drop the latter since it had no implementations by the time we realized `:has()` was feasible, but we will need to support `:focus-within` forever. Some argue that there is still value in it, as it can be a faster code path, but if that's the case, it's exposing implementation issues as UI warts. Implementations can simply short-circuit `:has(:focus)` rather than needing a whole separate pseudo-class for this.
- Shadow DOM CSS is especially guilty of this (see the sketch after this list):
  - `:host()` being a functional pseudo-class instead of allowing authors to simply concatenate selectors with `:host` like they can do in every other selector scenario. E.g. they have to write `:host([size])` instead of simply `:host[size]` to target a host element that has a `size` attribute.
  - `:host-context()`: a whole new functional pseudo-class to query the host element's ancestors and siblings. E.g. authors need to write `:host-context(.foo)` instead of the (far more idiomatic) `.foo :host`.
  - `::slotted()` being introduced as a pseudo-element rather than a combinator, because we could not support the entire selector syntax if it were a combinator. As a result, it suffers from several ergonomics issues that the web components community has been repeatedly vocal about.

I'm sure there are a lot more.
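To make the shadow DOM contrast concrete, here is a minimal sketch; the "composed" forms are the hypothetical syntax argued for above, not valid CSS today:

```css
/* What authors must write today, inside a shadow tree's stylesheet: */
:host([size]) { padding: 1em; }       /* host that has a `size` attribute */
:host-context(.foo) { color: red; }   /* host with a `.foo` ancestor */

/* The hypothetical composed forms this issue argues for
   (these do NOT work today; shown only for contrast): */
:host[size] { padding: 1em; }
.foo :host { color: red; }
```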
There are two reasons I think this is an antipattern.
Usability
A language’s usability correlates strongly with having few primitives that can be combined in many different ways, rather than introducing new primitives for any new combination. Once authors learn about the primitives, they can guess the combinations, but new primitives need to be learned separately.
Furthermore, they will try the combinations anyway, and be surprised when they don't work. This could be an argument for using new syntax (since reusing existing syntax with validity constraints means some forms of the syntax won't work), but the invalid forms left over by reuse are one step further out in terms of the probability that authors will hit them. E.g. in the `calc-size()` example above, it's much more likely that an author will try to combine `auto` with `calc()` than to combine multiple keywords with `calc()`.
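A concrete sketch of that, assuming the `calc-size()` grammar from css-values-5 (where the `size` keyword refers back to the first argument):

```css
/* What an author is likely to try first (invalid today): */
.panel { height: calc(auto + 10px); }

/* What the separate function requires instead: */
.panel { height: calc-size(auto, size + 10px); }
```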
Evolution
Often, limitations that appear intractable at the time will be loosened or removed later on. If we've introduced new syntax to communicate the limitations, we're stuck with it, and have to support it forever. If we've reused existing syntax that simply disallows some combinations, it's trivial to gradually expand the range of things allowed. This also allows for a more gradual expansion, rather than the all-or-nothing of introducing new syntax that is designed around the current limitations.
Existing established primitives also work better with new features. To use one of the examples above, shadow DOM CSS is a mess when it comes to CSS Nesting, because CSS Nesting is designed around regular selectors, not selectors whose context is in a parenthetical pseudo-class.
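For instance, a sketch of my own, assuming standard CSS Nesting semantics (where `&` desugars to `:is()` of the parent selector) and the featureless-host rule from CSS Scoping:

```css
/* Ordinary selectors compose naturally under CSS Nesting: */
.card {
  &.large { padding: 2em; }   /* equivalent to .card.large */
}

/* Inside a shadow tree, the "same" pattern fails: the host is
   featureless outside of :host()'s parenthesized argument, so
   the nested form below can never match. */
:host {
  &.large { padding: 2em; }   /* :is(:host).large -- never matches */
}

/* The test has to be restated inside the functional form instead: */
:host(.large) { padding: 2em; }
```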
Are there any examples of this beyond CSS?