w3c / csswg-drafts

CSS Working Group Editor Drafts
https://drafts.csswg.org/

[css-nesting] Concern about combinatorial explosion #2881

Open upsuper opened 6 years ago

upsuper commented 6 years ago

The most intuitive way to implement nesting rules is probably to just expand them internally into their complete form, just like preprocessors do nowadays.

But this has a problem: the number of selectors to match can grow exponentially as the nesting gets deeper. This is less of a concern when nesting is handled by preprocessors, because authors see the exponential result in the generated file. If the generated CSS is hundreds of megabytes or even gigabytes, they know there is a problem. But with native nesting rules, they can hand browsers a CSS file of just a handful of kilobytes which expands to millions of rules, and that would be problematic.

Consider something like

.a1, .a2 {
&.b1, &.b2 {
&.c1, &.c2 {
&.d1, &.d2 {
&.e1, &.e2 {
&.f1, &.f2 {
&.g1, &.g2 {
&.h1, &.h2 {
&.i1, &.i2 {
&.j1, &.j2 {
&.k1, &.k2 {
&.l1, &.l2 {
&.m1, &.m2 {
&.n1, &.n2 {
&.o1, &.o2 {
&.p1, &.p2 {
&.q1, &.q2 {
&.r1, &.r2 {
&.s1, &.s2 {
&.t1, &.t2 {
&.u1, &.u2 {
&.v1, &.v2 {
&.w1, &.w2 {
&.x1, &.x2 {
&.y1, &.y2 {
&.z1, &.z2 {

It's only 300+ bytes, but it can be expanded into 67M selectors. That's probably not good.
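For concreteness, here is the arithmetic behind that figure as a trivial JavaScript sketch — 26 levels, each a two-selector list, multiply out to 2^26 expanded selectors:

const levels = 26;   // .a through .z
const perLevel = 2;  // e.g. {.a1, .a2} at each level
console.log(perLevel ** levels); // 67108864 — the "67M selectors" above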

Potential solutions include:

  1. restrict the maximum nesting level (e.g. allow at most 3 levels)
  2. allow only a single complex selector, rather than a selector list, in nested rules

tabatkins commented 6 years ago

Allowing selector lists is necessary; the combinatorial explosion is, in large part, the point of nesting.

I'm fine with allowing a max, tho 3 is a little low. Perhaps 5 or 6?

upsuper commented 6 years ago

I have a feeling that shortening selectors is also an important benefit of nesting, so I don't completely buy that combinatorial explosion is the point, but I'm okay with that argument.

Do we have data on how many levels people use nowadays with preprocessors? I assume two levels (one top level and one nested level) should cover the majority of cases, and three levels should cover almost all the rest. Do people use 5 or 6 levels a lot? With 6 levels, you need just 10 selectors at each level to get 1M, which still feels problematic.

jonjohnjohnson commented 6 years ago

Even disallowing more than 6 levels won't stop the possible explosion using large :matches() lists at each of the allowed 6 levels?

upsuper commented 6 years ago

I don't think anyone wants to expand :matches() at all, and not expanding it has reasonable matching complexity as well, especially if we don't have to handle specificity. (The hard part of :matches() in implementation is invalidation, not matching.)

Nested rules are different. On the one hand, while a1, a2 { & b1, & b2 { } } is equivalent to :matches(a1, a2) :matches(b1, b2), a b { & c d { } } means just a b c d and nothing else, unlike :matches(a b) :matches(c d). On the other hand, for nested rules, the parent selector can appear anywhere in the child selector; for example, you can say a b { c d & { } }, which means c d a b (the syntax is not going to be exactly this way, but that's the idea). This means rewriting nested rules into a single complex selector without repeating anything is basically impossible, so you would need to expand them anyway.
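As a rough illustration of the expansion described above — a naive, string-based sketch of what a preprocessor-style implementation does, not how any real engine represents selectors — each parent selector is substituted for every & in the child list, taking the cross product of the two lists:

// Substitute each parent selector for every "&" in each child selector.
function expand(parents, children) {
  return children.flatMap(child =>
    parents.map(parent => child.replace(/&/g, parent))
  );
}

expand(['a b'], ['& c d', 'c d &']);
// => ['a b c d', 'c d a b']
expand(['.a1', '.a2'], ['&.b1', '&.b2']);
// => ['.a1.b1', '.a2.b1', '.a1.b2', '.a2.b2'] — the rule list doubles per level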

upsuper commented 6 years ago

(Specificity handling is also hard for :matches() in implementation I guess.)

tabatkins commented 6 years ago

> Nested rules are different.

No, they're not, you're just desugaring wrong. ^_^ When desugaring, you replace the & with a :matches() containing the parent selector.

> (Specificity handling is also hard for :matches() in implementation I guess.)

No, we resolved to use the simpler version of :matches() specificity - it's the specificity of the most specific argument, regardless of matching.

upsuper commented 6 years ago

> No, they're not, you're just desugaring wrong. ^_^ When desugaring, you replace the & with a :matches() containing the parent selector.

So it's different from what preprocessors do currently? I don't think preprocessors are able to expand c d :matches(a b) to a pre-:matches() selector, are they?

Anyway, expanding them to :matches() would only make them worse than :matches() itself. Still, I don't believe you can expand any nested selector into a single complex selector. Consider, for example, a b { & c d { } e f & { } } — what selector would you expand it to?

> No, we resolved to use the simpler version of :matches() specificity - it's the specificity of the most specific argument, regardless of matching.

Oh, okay, that's great. Then :matches() is probably not very hard to implement.

tabatkins commented 6 years ago

> I don't think preprocessors are able to expand c d :matches(a b) to a pre-:matches() selector, are they?

They certainly can, but it explodes combinatorially:

c d a b
c da b
c a d b
ca d b
a c d b

(Here da means a single compound — one element matching both d and a.)

In cases like this (like Sass's @extend), they use some heuristics to avoid having to fully expand the possibilities, while making it highly likely that what they do expand to is useful, but that's not strictly necessary.

> Consider, for example, a b { & c d { } e f & { } } — what selector would you expand it to?

Just run the desugaring twice, man: e f :matches(:matches(a b) c d).

upsuper commented 6 years ago

> Just run the desugaring twice, man: e f :matches(:matches(a b) c d).

That's different, no? In your expanded form, it requires all of a, b, c, d, e, f to appear in the path somehow, while in the original form, only a and b must appear, and then either c and d, or e and f, need to also appear.

tabatkins commented 6 years ago

No, all of them definitely need to appear, in exactly the relationship I described. What makes you think you can skip any of c/d/e/f? Imagine what elements are matched at each level.

upsuper commented 6 years ago

Are you sure? It's a b { & c d, e f & { } }. Shouldn't it be expanded to :matches(a b) c d, e f :matches(a b) according to your rule? Why do all of c/d/e/f need to appear?

Loirooriol commented 6 years ago

From my understanding,

a b {
  & c d { }
  e f & { }
}

should expand to

:matches(a b) c d { }
e f :matches(a b) { }

Sure, the parent part is repeated but there is no combinatorial explosion.

The problem indeed seems to arise when & is used multiple times in an inner selector:

a1, a2 {
  & b1, & b2 {
    & c1, & c2 {}
  }
}

expands to

:matches(:matches(a1, a2) b1, :matches(a1, a2) b2) c1,
:matches(:matches(a1, a2) b1, :matches(a1, a2) b2) c2 {}

but then I think implementations should just not do this, and reuse the parent part without actually duplicating it.

upsuper commented 6 years ago

> but then I think implementations should just not do this, and reuse the parent part without actually duplicating it.

I thought it was very tricky to implement nesting without actually expanding the selectors. But after thinking about it further, I think there is a reasonably simple algorithm to implement this without expanding, so I'm going to close this issue.

I'm still a bit concerned about the implication of promoting the use of :matches(), but that's probably a separate issue.

upsuper commented 6 years ago

After thinking a bit more, I think I was wrong in the previous comment, and I still cannot figure out a universal way to implement nesting rules efficiently without exponentially expanding the selectors.

If we can resolve to use the one-element-at-a-time semantics in #2895, and also to require balanced nesting selectors in #2896, and maybe to disallow :not() from containing &, there may be a decent implementation approach, but I'm not completely sure.

upsuper commented 6 years ago

I guess anything in the latter half of the previous comment doesn't really matter. The only reasonable way to implement it seems to be just running matching exponentially... or alternatively, using exponential memory and doing the matching linearly.

tabatkins commented 6 years ago

Oh! I misread your rule, I didn't realize they were two rules nested in the single parent rule.

(Tangent: that's an invalid rule, btw. Can't have a rule like e f & {}, needs to be @nest e f & {}.)

So that expands to the two rules :matches(a b) c d {} and e f :matches(a b) {}. Or, if you wanted to use the comma-separated one from your last comment, :matches(a b) c d, e f :matches(a b) {}.

What did you think was complicated about desugaring that?
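That single-level rule is mechanical enough to sketch in a few lines of JavaScript (a hypothetical string-based helper, for illustration only): wrap the parent selector list in :matches() once, then substitute it for each &:

// Wrap the parent list once, then substitute it for every "&".
function desugar(parentList, childList) {
  const wrapped = `:matches(${parentList.join(', ')})`;
  return childList.map(s => s.replace(/&/g, wrapped));
}

desugar(['a b'], ['& c d', 'e f &']);
// => [':matches(a b) c d', 'e f :matches(a b)']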

upsuper commented 6 years ago

I don't think it's complicated to desugar that. The problem is that the number of selectors after desugaring grows exponentially, and I cannot figure out a reasonable matching algorithm in which neither time nor memory is exponential.

tabatkins commented 6 years ago

It doesn't grow at all if you use :matches() internally, per the desugaring.

upsuper commented 6 years ago

Could you desugar the following selectors with :matches()?

a1, a2 {
  @nest & b, b & {
    @nest & c, c & {
      @nest & d, d & {
        @nest & e, e & {
          @nest & f, f & {

tabatkins commented 6 years ago

The innermost rule desugars down to:

:matches(:matches(:matches(:matches(:matches(a1, a2) b, b :matches(a1, a2)) c, c :matches(:matches(a1, a2) b, b :matches(a1, a2))) d, d :matches(:matches(:matches(a1, a2) b, b :matches(a1, a2)) c, c :matches(:matches(a1, a2) b, b :matches(a1, a2)))) e, e :matches(:matches(:matches(:matches(a1, a2) b, b :matches(a1, a2)) c, c :matches(:matches(a1, a2) b, b :matches(a1, a2))) d, d :matches(:matches(:matches(a1, a2) b, b :matches(a1, a2)) c, c :matches(:matches(a1, a2) b, b :matches(a1, a2))))) f, f :matches(:matches(:matches(:matches(:matches(a1, a2) b, b :matches(a1, a2)) c, c :matches(:matches(a1, a2) b, b :matches(a1, a2))) d, d :matches(:matches(:matches(a1, a2) b, b :matches(a1, a2)) c, c :matches(:matches(a1, a2) b, b :matches(a1, a2)))) e, e :matches(:matches(:matches(:matches(a1, a2) b, b :matches(a1, a2)) c, c :matches(:matches(a1, a2) b, b :matches(a1, a2))) d, d :matches(:matches(:matches(a1, a2) b, b :matches(a1, a2)) c, c :matches(:matches(a1, a2) b, b :matches(a1, a2)))))

The higher rules desugar to subsets of that.

upsuper commented 6 years ago

That's exponential in the length of the selectors written above, isn't it? That one has only 6 levels, so you can still write it out. If I wrote 26 levels like in the earlier comment, your desugared rules would probably get rejected by GitHub for being too long a comment :D
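The doubling is easy to see: in the & x, x & pattern above, each level's desugared selector embeds the previous level's desugared form twice, so its length roughly doubles per level. A back-of-envelope sketch (the constants are approximate):

let len = 'a1, a2'.length;
for (let depth = 1; depth <= 26; depth++) {
  // two embedded copies of the parent, plus wrapper/separator overhead
  len = 2 * len + ':matches(), xx'.length;
}
console.log(len); // on the order of 10^9 characters at 26 levels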

tabatkins commented 6 years ago

Sure. Very deep nesting of selectors is uncommon in the first place, tho (based on preprocessor experience), and deep nesting of selector lists is even less common. When things are deeply nested, authors commonly just do appending, like .foo { & .bar { & .baz {..., which is easy to handle.

That said, I'm still okay with a maximum nesting depth set to something reasonable, like 5 or 6.

pygy commented 4 years ago

Wouldn't a limit on the total length of the expanded selector list make more sense?

Not that nesting 6 levels deep is wise, but a hostile style sheet could still do some damage by using large lists of selectors at every level (oh, you've disabled JavaScript? How's that? [BAM, here comes a selector bomb that would have been unloaded synchronously by a JS script]).

tabatkins commented 4 years ago

The length of the selector list at a given level doesn't cause any explosion in the :matches()-based desugaring; the higher levels get repeated more, but doubling the length of the top level just approximately doubles the length of the result. It's the nesting level that causes non-linear growth.
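A rough model of that claim, assuming the :matches()-based desugaring with one & per child selector: a level with L selectors embeds the parent's desugared text L times, so sizes multiply across levels, while a longer top-level list only contributes linearly:

// listLens: selector-list length at each level (top first);
// selLen: average length of one written selector (assumed constant).
function desugaredLength(listLens, selLen = 4) {
  return listLens.reduce((parentLen, L) => L * (selLen + parentLen), 0);
}

desugaredLength([2, 2, 2, 2, 2, 2]);       // 504
desugaredLength([4, 2, 2, 2, 2, 2]);       // 760  — longer top-level list: ~linear
desugaredLength([2, 2, 2, 2, 2, 2, 2, 2]); // 2040 — two more levels: ~4x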

pygy commented 4 years ago

@tabatkins Cleanup edit, this went from a tingling feeling that something wasn't right, to a few false starts, to this which seems solid:

&, a &, & b, b & a, and variations thereof using ~, +, > and juxtaposition can still do some damage.

// Five levels, each a 16-selector list combining "&" with every combinator
// (compounding, descendant, ~, +, >).
const nested = [...Array(5)].map(
  () => ['&', ...['', ' ', '~', '+', '>'].map(
    sigil => 'a &,& b,b & a'.replace(/ /g, sigil)
  )].join(','),
)
nested.join('').length // 335

// Desugar top-down: each level's "&" becomes :is(<everything above it>).
nested.reduce(
  (r, v) => v.replace(/&/g, `:is(${r})`),
  'b, b b, b b b, b b b b, b b b b b, b b b b b b'
).length // 57,392,051

That's a sweet payout. Not sure if it can be made worse by the fact that combinators and nesting interact in non-trivial ways (i.e. not sure if b~&>a can be flattened with either b~&~a or b>&>a or if it adds to the non-linear expansion).

Contrast that with an iframe bomb that has a linear server cost.

argyleink commented 3 years ago

I could see capping at 6 deep being reasonable. I agree it's uncommon to see nesting deeper than that. It does seem like a limit should be present though, just like with custom properties and the billion-laughs attack they can produce without a limit.

fantasai commented 1 year ago

Agenda+ to discuss if we want to:

Loirooriol commented 1 year ago

Following the same reasoning as in https://github.com/w3c/csswg-drafts/issues/3462, I'm fine with

tabatkins commented 1 year ago

Going ahead and tagging this for f2f discussion, as it seems good to hash this out.

bramus commented 1 year ago

For completeness, WebKit imposes a limit of 128 nesting levels: https://github.com/WebKit/WebKit/commit/c9ac90f8cbfa0ae54a296168fe0c517826ff6757

tabatkins commented 1 year ago

Lol that is way higher than the minimum I'd add, so that's fine. (I was considering, like, 10 or something.)

Loirooriol commented 1 year ago

@sesse Correct me if I'm wrong, but my understanding of https://github.com/w3c/csswg-drafts/issues/8310#issuecomment-1383810771 is that Blink doesn't expand & so it's not affected by this problem. Couldn't other implementations use the same approach?

mdubet commented 1 year ago

WebKit expands & to :is(parent_selector_list), which grows to something like O(average_number_of_nesting_selectors_at_each_level^depth)

Also, we indeed have a maximum rule nesting level, but not specifically for style rule nesting.

sesse commented 1 year ago

@Loirooriol I'm on vacation, but Blink does indeed store & as just a pointer to the parent selector (no combinatorial explosion), and I do not (personally) know of any reason why WebKit could not do the same.
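A sketch of what that pointer-based representation could look like (the names are illustrative, not Blink's actual data structures): the & node references the parent rule's selector list instead of copying it, so memory stays linear in the source no matter how deep the nesting goes:

const parentRule = { selectors: ['.a1', '.a2'] };
const childRule = {
  selectors: [
    { rest: '.b1', nestingParent: parentRule }, // "&.b1" shares the parent list
    { rest: '.b2', nestingParent: parentRule }, // "&.b2" shares it too
  ],
};
// Matching recurses through nestingParent instead of expanding up front —
// which is where the exponential *time* concern raised earlier can still surface.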

Loirooriol commented 1 year ago

This testcase has 25 levels:

<!DOCTYPE html>
<script>
let t = performance.now();
</script>
<style>
& e1a, & e1b {
& e2a, & e2b {
& e3a, & e3b {
& e4a, & e4b {
& e5a, & e5b {
& e6a, & e6b {
& e7a, & e7b {
& e8a, & e8b {
& e9a, & e9b {
& e10a, & e10b {
& e11a, & e11b {
& e12a, & e12b {
& e13a, & e13b {
& e14a, & e14b {
& e15a, & e15b {
& e16a, & e16b {
& e17a, & e17b {
& e18a, & e18b {
& e19a, & e19b {
& e20a, & e20b {
& e21a, & e21b {
& e22a, & e22b {
& e23a, & e23b {
& e24a, & e24b {
& e25a, & e25b {
background: lime;
</style>
<e1a><e2a><e3a><e4a><e5a><e6a><e7a><e8a><e9a><e10a><e11a><e12a><e13a><e14a><e15a><e16a><e17a><e18a><e19a><e20a><e21a><e22a><e23a><e24a><e25a>
text
<script>
document.body.offsetLeft;
document.body.append(performance.now() - t + "ms");
</script>

Blink takes ~10 seconds, WebKit crashes after a long freeze. So Blink is better but it still seems affected by this problem.

emilio commented 1 year ago

Live test-case: http://crisal.io/tmp/lots-of-nesting.html

Firefox Nightly takes ~4s on my machine fwiw :P

Loirooriol commented 1 year ago

I was wrongly getting a few ms in Nightly; after moving let t = performance.now(); to the very top, I get around 6s.

tabatkins commented 1 year ago

Note that Chrome has the same timing if each rule is a single selector wrapped in :is() (that is, :is(& e1a, & e1b) {...}), so the perf issues are introduced by something other than the branching itself. (This also means we can't base a limit on a complexity budget depending on the length of the selector lists.)

tabatkins commented 1 year ago

And Chrome has different timing if you collapse away the nesting. That is, :is(a, b) { :is(c, d) {...}} and :is(a, b) :is(c, d) ... {...} show at least a 10x difference in timing (with the 25 :is() functions, I got 0.8s vs 13s on my laptop). So it's still slow, but there's something else also contributing to the blowup.

SebastianZ commented 1 year ago

Just a tangential side note on this: When UAs hit a selector/nesting limit, their DevTools should give the authors a hint noting that.

Sebastian

FremyCompany commented 1 year ago

> And Chrome has different timing if you collapse away the nesting. That is, :is(a, b) { :is(c, d) {...}} and :is(a, b) :is(c, d) ... {...} show at least a 10x difference in timing (with the 25 :is() functions, I got 0.8s vs 13s on my laptop). So it's still slow, but there's something else also contributing to the blowup.

That seems to indicate that there is a design choice in Chrome's implementation that is causing this to be an issue at levels lower than it could in theory. Maybe once that root cause is identified, it would be possible to agree on a very large limit and call it a day.

Also +1 that maybe a warning in the DevTools would go a long way in discouraging author abuse, but if the concern is malicious code injection (to trigger DoS), that doesn't solve the problem.

css-meeting-bot commented 1 year ago

The CSS Working Group just discussed [css-nesting] Concern about combinatorial explosion, and agreed to the following:

The full IRC log of that discussion

<bramus> TabAtkins: depending on your implementation strategy for nesting, and possibly for all of them, there can be an issue with deeply nested style rules
<bramus> … particularly if you have selector lists
<bramus> … oriol posted an example at the end
<bramus> … crashes WebKit and makes style resolution take 10s on Blink
<astearns> https://yarusome.github.io/box-shadow-proposal/
<matthieudubet> q+
<bramus> … we previously talked about a possible limit
<bramus> … in practice 3 is a reasonable limit. ppl dont go much further than that
<bramus> … sass did a survey on it too
<astearns> https://github.com/w3c/csswg-drafts/issues/2881#issuecomment-1642450622 for the latest test
<bramus> … we should spec some reasonable minimum to require and to let browsers have an implementation defined nesting depth
<bramus> … webkit has a limit of 128, but as you can see you can trigger problems much earlier
<bramus> … suggestion to support at least 10 levels, to err on the safe side of what ppl use
<bramus> … max can be higher than that
<fremy> q+
<plinss> q+
<Rossen_> ack matthieudubet
<bramus> matthieudubet: issue is not really with depth but with the length of the selector
<bramus> … depth limit is easier to explain
<bramus> … real issue here is the lengths of the context selector
<bramus> … if we put a limit, we can put a limit on the ??? context selector
<bramus> … not depth per se
<bramus> … suggesting to put a limit on the length of the selector list and the length of the context selector
<emilio> q+
<bramus> TabAtkins: and I think the limit would not be on a particular complex selector, but on the product of nested selector lists
<bramus> TabAtkins: example with 256 * 256
<bramus> … product of lengths of sel lists seems fine
<bramus> … whichever seems more reasonable to impls
<bramus> … we can express a limit that is higher than we expect to see
<bramus> matthieudubet: should we define how we would fail?
<bramus> … support until some depth?
<bramus> TabAtkins: sometimes we do and sometimes not
<bramus> … in this case might make sense to ignore rules that were nested too deeply
<fremy> q-
<fremy> matthieudubet addressed all my points, I think
<Rossen_> ack fantasai
<Zakim> fantasai, you wanted to react to matthieudubet
<bramus> fantasai: if we do a limit on depth, we might all want to have the same limit across vendors
<bramus> … we need to have a min depth that is sufficient for authors
<Rossen_> ack plinss
<bramus> plinss: +1 on limit consistency
<bramus> … concerned about when authors hit it: are they gonna know?
<bramus> … also concerned about the performance impact when we have a reasonable limit. limit can be ??? than we have right now
<bramus> … other issue - dont want to derail - but might make explosion problem worse so will keep for later
<bramus> Rossen_: please introduce
<fantasai> Losing a few rules that are just past the limit in some browsers but not others would be a subtle problem that could really break things
<bramus> plinss: when the parent rule has commas with multiple lists, the nested rule is prewrapped in :is() and that breaks the cascade
<fantasai> So we should have interop on nesting depth support if it's not something so far beyond what authors would use that no one will run into it
<bramus> astearns: there is an issue for that
<bramus> emilio: i dont understand
<bramus> TabAtkins: :is() doesnt reflect specificity
<Rossen_> q?
<bramus> plinss: I expect two rules that are nested to cascade with different specificity levels; with :is() you dont have that
<bramus> … explosion is worse than we think it is, if we fix that
<Rossen_> ack emilio
<bramus> emilio: im ok with a limit
<bramus> … i dont think a max should be specced
<bramus> Rossen_: that is what tab suggested
<bramus> … we did sth similar for units way back
<bramus> TabAtkins: but elika was saying we should spec a max
<bramus> fantasai: if the limit is sth low, then it is reasonable authors will hit that.
<bramus> … some browsers might then drop rules
<bramus> Rossen_: no, this is about a lower limit
<bramus> fantasai: if the upper limit is far enough – e.g. 100 – only some test cases or autogenerated stuff might hit it
<bramus> … this proposal is that min is 10, but that would mean max can also be 10
<bramus> … min should therefore be far enough to not cause interop prob
<bramus> emilio: we dont have limit of length of selectors.
<bramus> … it all depends on what you are nesting
<bramus> … single selectors nesting is probably fine
<fremy> q?
<fremy> q+
<florian> q+
<bramus> … if you nest lists of 100 selectors each, that might blow up soon
<bramus> fremy: nesting limit does not seem the right metric
<bramus> … correct midway is to see how many selectors you end up with
<bramus> … e.g. you can have up to 256 if it is expanded
<florian> q-
<Rossen_> ack fremy
<bramus> … better to limit total num of selectors
<bramus> TabAtkins: wont work
<bramus> … at least in Blink, if you take Oriol's example and wrap it in :is(), you still have a problem
<bramus> … it is combinatorial in some more ephemeral notion of complexity that I do not want to define
<bramus> fremy: that is different
<bramus> … if you put in :is() you only have complexity in running the code, not storing it
<bramus> TabAtkins: Blink's performance is the same with as without
<bramus> … reason we have 10s recompute here is based on sth more intrinsic about complexity of the selector, not the length of the list
<bramus> … an expanded selector here is still 125.
<bramus> … complexity here is not sth we can easily define in terms of any selector metric
<bramus> matthieudubet: if you count list inside of parent list, the time is linear too.
<Rossen_> q?
<bramus> … (missed)
<bramus> … limit I am suggesting should be related to length
<bramus> TabAtkins: webkit will delay for longer than 10s right now. chrome and firefox have similar timings
<bramus> … but also def of 'what things to expand' is already a number of things: :is(), :where(), :nth-child with of
<bramus> matthieudubet: yes, it is more complex than depth
<bramus> … depth limit of 10 might not be enough
<bramus> TabAtkins: 10 is more than enough for virtually all cases (based on the Sass survey)
<ntim_> q+
<bramus> … somebody might go over, but ppl tend to stick below that
<Rossen_> ack ntim_
<bramus> ntim_: i can see limit of 10 being too low, e.g. in build systems
<bramus> myles: what are the current limits?
<nicole> q+
<bramus> TabAtkins: none, there is no spec limit.
<bramus> florian: in implementations they run out of memory or crash
<bramus> emilio: yeah
<myles> q+
<Rossen_> ack nicole
<bramus> nsull: can we get data from sass (or some other party) about the amount of nesting authors do?
<bkardell_> +1
<bramus> miriam: sass does not track that
<bramus> … people nested really deep at the start, but best practice now is 3 but ppl dont always follow that
<bkardell_> surely we can grep github for scss stuff
<bramus> … web archive might have data on that
<bramus> nsull: interesting. what is average selector length?
<bkardell_> ah yeah, good call...
<bramus> Rossen_: if we don't have the data, we should explore that offline
<bkardell_> q+
<Rossen_> ack myles
<bramus> myles: seems like the limits are purely mechanical
<bramus> … why is this different than things like total number of nodes or nested iframes?
<bramus> TabAtkins: there is a reason why you would want to limit here
<bramus> … if qty is roughly linear with amount of stuff in the page it might be ok
<bramus> … but if its exponential we are concerned about it
<bramus> … we have a limit in variables for example
<bkardell_> q-
<bramus> … to prevent doubling in 30 stages before going OOM
<bramus> … this falls in similar case. you dont want ppl to be able to crash a page by writing a crazy selector
<bramus> myles: I understand this is easier here, than a bunch of nested iframes
<bramus> … reason other limits are not standard is bc ppl will not hit them
<bkardell_> q+
<bramus> … are you, tab, saying that reason that limit is needed here bc we expect ppl to make real pages and hit it?
<bramus> TabAtkins: potentially. 3rd party CSS could DOS your page, which is not ideal
<nicole> myles said there are limits on DOM size or number of nested iframes that aren't standardized because they are high enough authors don't hit them
<Rossen_> ack bkardell_
<bramus> bkardell_: 3rd party thing is a little bit tricky
<bramus> … bc 3p stuff could ruin the whole internet
<TabAtkins> i don't mean "a professional company giving 3rd party style", I mean "user-controlled"
<bramus> … i have not seen a ton of sass, but can say that in the stuff i have seen like 4-6 depths is what was there
<bramus> … am happy to help look up the actual numbers
<Rossen_> q?
<bramus> TabAtkins: we do want to have a min limit so that users have an expectation of what is usable.
<florian> q+
<bramus> … in case of browser limits get cut low
<fremy> q?
<bramus> … we rarely put a max limit
<fremy> q+ (proposal)
<bramus> … but proposed resolution is to add a min limit
<fremy> q- (proposal)
<fremy> q+
<bramus> … it seems like a moderately excessive depth to support
<bramus> … authors might run into a cutoff after that, and UAs should make sure pages remain responsive in that case
<bramus> Rossen_: we seem to be circling back
<bramus> florian: for the max limit, they can then choose some metric they want, not necessarily depth
<bramus> … are browsers likely to create a limit that authors might hit? If not, setting a minimum around 10 isn't going to constrain anyone
<bramus> … we can also say a limit is implementation defined
<bramus> TabAtkins: analogous situation is size of grids, where we knew that relatively large grids were larger than the limit we had
<bramus> … i am fine with saying it is impl defined
<astearns> implementation-defined and wait for bugs to come in?
<plinss> q+
<bramus> … authors must in that case not do stupid stuff
<Rossen_> ack florian
<bramus> … so i am happy with close no change
<bramus> … currently there is nothing in the spec about it
<Rossen_> ack fremy
<bramus> fremy: quick idea: in the example you can get to 7mo selectors. can we compute the length of the selectors like string length and compute the product of that and cut off at a limit?
<bramus> … to prevent exponential scenarios
<bramus> TabAtkins: I dont want to get into that complexity bag, and let implementations figure it out
<bramus> fremy: I am proposing to use the actual string length of a selector
<bramus> … without considering how complex it is
<bramus> … e.g. 1000 chars
<bramus> TabAtkins: problem case here can be constructed in a few 100 chars
<Rossen_> ack plinss
<bramus> plinss: if we allow implementers to choose a limit, then we must set a minimum and have some advice to authors about that
<bramus> … to prevent interop issues
<bramus> Rossen_: so back at the same proposal :)
<bramus> TabAtkins: 2 proposals:
<bramus> … - close no change
<bramus> … - or a depth specified min
<bramus> myles: and a third to pick 10?
<bramus> TabAtkins: that is option 2
<bramus> Rossen_: we seem to come back to option 2
<SebastianZ> I'd vote for option 2.
<argyle> nesting depth `clamp(0, 10, 🤷‍♀️)`
<bramus> fremy: based on Tab's remarks, option 2 makes no sense? with just adding :is() you can get the same complexity in a normal selector, so why limit depth in nesting?
<bramus> Rossen_: that is separate issue, no?
<bramus> fremy: it is not
<bramus> astearns: but maybe we should
<florian> q?
<bramus> TabAtkins: it is not just :is — :is, :nth-child, :where
<bramus> florian: given that this is the case, should we do nothing anywhere?
<bramus> fremy: that is my point. I'd rather go with option 1 than 2, as you can do it somewhere else
<bkardell_> q+
<fremy> +1 on Rossen_ proposal to add a recommendation for nesting depth
<bramus> Rossen_: (goes over options again) but happy to go with 1
<SebastianZ> q+
<TabAtkins> Interestingly, if you take Oriol's example, wrap each selector in an :is(), then collapse away the nesting to make it a single selector with the same behavior, we get substantially faster perf in Chrome
<bkardell_> q-
<bramus> plinss: i dont see point in author guidance about exceeding a max, when there is no min in spec
<TabAtkins> (~0.8s on my laptop, vs 13s for the nested version)
<florian> q+
<bramus> Rossen_: (missed)
<bramus> Rossen_: i appreciate the pushback, but dont see how any author guidance for that logic could turn into formal vendor reqs
<bramus> plinss: if we dont have min and give author guidance, that number is fiction
<fantasai> +1 plinss
<bramus> Rossen_: perhaps for some browser on some platform
<bramus> plinss: then spec it as min
<Rossen_> q?
<bramus> TabAtkins: even if we consider that, even a small depth might trigger the same problem. not certain if that is the case or not. there might not be any reasonable metric to use, but reasonable author behavior with some guideline will keep ppl in the bright space of good perf.
<bramus> … trying to lay down a strict limit might not give us the guarantee of good perf
<emilio> q+
<fremy> @ TabAtkins, if the :is is faster, then your original intuition about the slow down root cause might not be correct, so the issue might not be related to the number of selectors
<matthieudubet> q+
<bramus> … so “you should think about >” might be a good thing
<fremy> @ TabAtkins, I guess maybe we should investigate a bit more
<nicole> q+
<TabAtkins> right, i'm wondering if it's just something about the length of the nesting entirely
<Rossen_> ack SebastianZ
<bramus> SebastianZ: i think a min limit would be good, but
<bramus> … it is not limited to nesting but also selector complexity so maybe the selector spec should also mention that?
<Rossen_> Zakim, close queue
<Zakim> ok, Rossen_, the speaker queue is closed
<bramus> TabAtkins: yeah
<bramus> SebastianZ: if we can come up with a metric, it should be in the selector spec
<Rossen_> q?
<bramus> emilio: since this is related
<bramus> … same case with variables, and vendors have different limits
<bramus> … string based, token based, …
<bramus> … I don't think the limit is easy to define
<bramus> … perf is based on impl details
<bramus> … complexity depends on whether elements match too
<bramus> … i would want to leave this undefined until we have decent understanding of these performance characteristics
<plinss> +1 emilio
<emilio> ack emilio
<bramus> Rossen_: by undefined you mean explicitly undef?
<bramus> emilio: yes
<Rossen_> ack emilio
<Rossen_> ack florian
<nicole> +1 emilio
<bramus> florian: I agree, but also give authors some guidance as well, but not in terms of max levels, since that's the wrong metric. Rather: "be mindful of combinatorial explosions. For instance {this example} crashes browsers. Don't do this"
<fremy> +1
<bramus> matthieudubet: as tab said, it does not depend on depth but length of selector which is easy to define
<bramus> … those are the limits we are hitting: not depth, but length of selector
<bramus> TabAtkins: a quick test case shows it is not the case
<Rossen_> ack matthieudubet
<bramus> … e.g. 25 :is() is slow
<bramus> matthieudubet: i agree it might be too complex to put into a spec
<Rossen_> ack nicole
<bramus> nsull: agree with emilio and myles
<bramus> … we dont know how devs will use this
<bramus> … making up guidance now seems too premature
<bramus> … so I support option 1, and see what actually breaks now
<bramus> … and we might revisit guidance later
<bramus> myles: I was not stating, but asking for clarification
<astearns> ack fantasai
<bramus> Rossen_: seems like there is enough support for option 1 to keep it explicitly undefined
<bramus> … it is important that impls not crash though
<bramus> … can we resolve?
<TabAtkins> Also I misremembered, Variables does *not* impose an explicit limit, it just calls this out as an impl issue.
<bramus> plinss: small objection, as option 1 was 'close no change' and wait for data
<bramus> TabAtkins: I am fine with 'close no change' without prejudice
<bramus> … until we hit a problem in the future
<bramus> ntim_: we need to figure out how nesting is used in the wild
<bramus> Rossen_: so are you signing up?
<bramus> ntim_: no
<bramus> TabAtkins: bug trackers will find the data, and then we can file a new issue based on that
<florian> +1
<bramus> … until we have practical proof, we dont need to hold this open
<bramus> iank_: we had a limit for grid tracks, and we received a bunch of bugs for it. so ppl file them
<bkardell_> Can I ask for one clarification? Is option #1 also put no author guidance/information about preventing problems into the spec?
<bramus> ntim_: to prevent interop issues, we need the data
<bramus> TabAtkins: and the bug reports will surface that
<bkardell_> Rossen_: ^ can we clarify that?
<bramus> Rossen_: lets close here
<TabAtkins> this issue has no bearing on author guidance in the spec
<TabAtkins> I'll put some in.
<bramus> … do we want to resolve on 1: explicitly define that the limit is (pause) undefined
<bkardell_> if tab will put some explanation in, I can +1 on option 1
<bramus> Rossen_: objections?
<bramus> plinss: no
<bramus> Rossen_: explicitly undefined as in “no limit”
<TabAtkins> i'll put in some guidance similar to this https://drafts.csswg.org/css-variables/#long-variables
<bramus> plinss: we will have answer when research is done
<bramus> Rossen_: objections?
<bramus> florian: saying explicitly that there is no limit, you require ???
<bramus> TabAtkins: no, not 'no limit' but undefined
<bramus> … proposed resolution: nesting limit is UA defined
<TabAtkins> proposed resolution: limits on nesting is ua-defined
<bramus> RESOLVED: limits on nesting is ua-defined
mdubet commented 5 months ago

We already have this kind of "not obvious to the author" exponential expansion with var(). The css-variables spec warns about this; maybe the css-nesting spec could have a similar paragraph.

> To avoid this sort of attack, UAs must impose a UA-defined limit on the allowed length of the token stream that a var() function expands into. If a var() would expand into a longer token stream than this limit, it instead makes the property it’s expanding into invalid at computed-value time.

https://drafts.csswg.org/css-variables-2/#long-variables
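For comparison, a sketch of the var() growth that spec text guards against — each custom property references the previous one twice, so a stylesheet of roughly a kilobyte would, without the limit, expand into a token stream of billions of tokens:

// Build the classic doubling chain: --v1: var(--v0) var(--v0); and so on.
let css = ':root { --v0: lol; ';
for (let i = 1; i <= 30; i++) {
  css += `--v${i}: var(--v${i - 1}) var(--v${i - 1}); `;
}
css += '}';
console.log(css.length);  // the source is a bit over 1 KB
console.log(2 ** 30);     // but --v30 alone expands to over a billion copies of "lol"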