w3c/csswg-drafts: CSS Working Group Editor Drafts (https://drafts.csswg.org/)

[css-cascade-6] Should the scope proximity calculation be impacted by nesting scopes? #10795

Open mirisuzanne opened 3 weeks ago

mirisuzanne commented 3 weeks ago

Background:

The published definition of 'scope proximity' states that:

If two declarations both have elements selected by scoped descendant relationships applying weak scoping proximity, then the declaration with the fewest generational hops between the ancestor/descendant element pair wins.

If multiple such pairs are represented, their weak scoping proximity weights are compared from innermost scoping relationship to outermost scoping relationship (with any missing pairs weighted as infinity).

However, in the Editor's Draft, the second paragraph was removed and the first adjusted, so that each scoped selector has a single scope root and a single proximity number.

In our publishing discussion last week, @mdubet asked to reconsider this.

How it might work:

To find a 'proximity', we need both a 'subject' element and a ':scope' element; then we count the 'steps' between the two.
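As a minimal sketch of that counting (the selector and the markup in the comment are hypothetical):

@scope (main) {
  p { color: green; }
}

/* Given <main> > <section> > <p>: the subject is the <p>,
   the ':scope' element is the <main>, and the proximity is 2
   (two generational hops). */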

Nested @scope rules are allowed. Each scope rule's <scope-start> selector is 'scoped' by the parent scope rule. If we want scopes to accumulate with nesting, we have to determine which subjects we are comparing to which roots. Given this example:

@scope (a) {
  @scope (b) { 
    c { /* … */ }
  }
}

I see two options (though I believe they might be functionally the same??). The scope proximity weight for c is one of:

- [steps from c to the b root, steps from the b root to the a root]
- [steps from c to the b root, steps from c to the a root]

In either case, I believe the proposal is to compare proximities from innermost to outermost.
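For example, here is a hedged sketch of how that comparison could play out (the selectors, markup, and distances are hypothetical, not from the thread):

@scope (aside) {
  @scope (div) {
    p { color: red; }
  }
}
@scope (main) {
  @scope (section) {
    p { color: blue; }
  }
}

/* Given <main> > <aside> > <section> > <div> > <p>:
   red has proximities [p→div, div→aside] = [1, 2];
   blue has proximities [p→section, section→main] = [2, 2].
   Comparing innermost first: 1 < 2, so red would win. */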

But why?

I think this would be a reasonable approach. At least, it makes some sense to me that things might work this way. But I can't think of an actual use case where I would rely on this behavior. I'm not opposed, but I'm also not sure how useful or complex it is.

mirisuzanne commented 2 weeks ago

Thinking through it a bit more, and discussing with @argyleink, I don't really have any reason not to do this. The alternative is falling back on source order, which isn't better than multi-step proximity.

And I believe there's no difference between the two approaches above. The distance between two roots will always be equivalent to the additional distance between a subject and an ancestor root. So the approach seems straightforward.
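As a quick worked check of that equivalence against the example above (the intermediate markup is hypothetical):

/* Given <a> > <x> > <b> > <y> > <c>:
   option 1 gives [c→b, b→a] = [2, 2];
   option 2 gives [c→b, c→a] = [2, 4].
   Since c→a = c→b + b→a whenever the roots nest, the innermost
   components either decide identically or tie, and a tie is then
   broken the same way by both options. */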

So unless there's pushback from other implementors (@andruud made the initial change here?), I'm going to propose we resolve on @mdubet's proposal here. Marking this as agenda+ to try to get that resolution.

andruud commented 2 weeks ago

@mirisuzanne In other words, this would introduce a dynamic number of cascade criteria (for the first time)? A bit like specificity, but instead of (A,B,C), it's a variable number of components.
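As a hedged sketch of that concern, under one possible reading of the published text (the selectors and the alignment of missing pairs are assumptions):

@scope (a) {
  c { color: red; }      /* one pair: [c→a] */
}
@scope (a) {
  @scope (b) {
    c { color: blue; }   /* two pairs: [c→b, c→a] */
  }
}

/* The weight tuples have different lengths; if red's missing
   innermost pair is weighted as infinity, blue wins the innermost
   comparison regardless of the actual distances involved. */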

But why? [...]

Last time this came up, we concluded that: 1) it adds complexity (both for implementations and for authors' mental model), and 2) it's not useful. Your answer to this question suggests that nothing has changed. Therefore, I do oppose this change, as it seems to be (at best) only about theoretical purity at the expense of other things.

I'm also not sure how [...] complex it is

We'd ideally investigate that a little before making any moves spec-wise. @scope also shipped a long time ago in Blink, so I would need to be able to prove that we can even ship such a change without breakage. Otherwise, we might end up with subtly different cascade behaviors forever, which is worse than just aligning on the current spec.

I'm going to propose we resolve on @mdubet's proposal here

At a minimum, we should first answer the "But why?" with an actual answer, and explain why the more complex behavior is useful after all.