Closed plinss closed 1 year ago
I think requiring authors to type `&:is(div *)` is dramatically worse than `:is(div) &`, fwiw. ^_^ It's certainly not easier to explain the transform necessary there. It's also certainly a worse interim option than `:is(div) &` in terms of "weird cruft that'll get leftover in code during the interim", which Alan wanted to avoid in the first place.
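For concreteness, here is what the two spellings under comparison look like nested (a hypothetical `.card` example, not from the spec); both express "the nested element when it has a `div` ancestor":

```css
.card {
  /* spelling being criticized as harder to explain: */
  &:is(div *) { color: red; }

  /* spelling being defended: */
  :is(div) & { color: red; }
}
```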
And again, the only reason to avoid things like `.foo &`, which the current spec allows and which has zero cruft in our ideal future, is if we think that we're going to throw out the last year of discussions and start with some new approach. We don't have any reason to believe that's the case (and I'm strongly opposed to it).
The other reason to postpone `.foo &` for now is so we do not have to explain that you can use that simple form for `.foo` but not for `div`.
As I have said before, my preference would be not to ship anything yet. But if something must be shipped I think it is reasonable to explore what subset would work best.
I find it incomprehensible that `:is(div) ...` and `.foo &` should be disallowed (or postponed) on the grounds that we don't know whether infinite lookahead is possible or not, because it makes no difference either way.
If infinite lookahead is impossible, then `:is(div) ...` and `.foo &` are viable indicators of a selector.
If infinite lookahead is possible, then `:is(div) ...`, `.foo &`, and `div` would all be viable indicators of a selector.
What is gained by forcing authors to apply a prefix onto all nested selectors?
Given the speculation about my position it's probably worthwhile for me to be a bit more clear where I stand.
All of the work on nesting up until recently has been done under the presumption that adding lookahead to the parser couldn't be done. To this end there has been a lot of work trying to find a compromise syntax that serves everyone's needs. There has obviously been a lot of work and a lot of creativity put into this effort. I'm not belittling that in the least, but at the end of the day the outcome is that we've fairly well determined that there isn't a solution without lookahead that meets all the requirements. At best we have a compromise solution that I've heard described as "the least worst solution" (which I actually disagree with, but more on that later).
The one thing we haven't spent any energy on (until the last couple of weeks), is exploring the solution of 'simply' relaxing the lookahead restriction. I feel there's broad consensus that doing so does allow us to deliver a nesting syntax that is an optimal solution and makes everyone happy, especially authors who are used to SASS.
The lookahead restriction is over 25 years old at this point, and serves no purpose except parsing performance. I worked on Gecko's first CSS parser and frankly I wondered at the time whether that restriction was actually necessary. However, it also wasn't in the way, so there was no need to challenge the assumption. There were also very strong concerns about CSS as a whole being implementable at the time. No browser had yet shipped a conformant implementation of CSS1, and remember, we were working on ~100MHz 486 processors. The consensus within Netscape at the time was that CSS1 selector matching could never be made performant, so anything that impacted performance in CSS was treated with extreme suspicion. We obviously proved that consensus wrong. And since then we have also proved that other 'performance killing' features like `:has()` turned out to be doable after all.
I strongly believe that lookahead will prove viable, and the work done on #7961 has identified several optimizations, making it even more likely.
My threat of a Formal Objection stems from the current desire to adopt and ship the compromise solution, without having properly explored and ruled out the optimal solution. I feel this would be a mistake and is not an example of the WG using the proper process to find the ideal solution.
So yes, if we try to finalize on option 3 as it stands, before knowing if the lookahead solution is actually possible or not, I will file an FO.
Secondly, I have issues with option 3 as currently defined, specifically the feature where prefixes to nested rules are optional if the rule doesn't start with an identifier or function.
If the lookahead approach proves viable, these issues are moot.
I have two technical reasons to object to that feature:
1) It is an author foot-gun. The intent of the feature is to allow authors to copy/paste rules into a nested context without modification. However, authors will have to carefully review those rules for those needing modification, by either adding a prefix or adding an otherwise unnecessary pseudo-class (which by any definition is a hack). I believe this feature causes more harm than good for this reason alone. Backing up that opinion, we have poll data from authors stating that over 50% of them will not use this feature. This is not an author-friendly feature and violates the priority of constituencies.
2) It narrows the scope of possible future extensions to the CSS language. I'm well aware that some people think the narrowed scope is less of a problem than I do, but regardless, it is something that needs to be taken into account. I do not believe we have done so properly. Furthermore, given the limited utility of the feature, I feel the benefit is too small to justify the cost.
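To make the copy/paste concern in (1) concrete, here is a sketch, assuming option 3's restriction that nested selectors may not begin with an identifier (the `.card` selector is hypothetical):

```css
.card {
  div { color: red; }       /* dropped under option 3: starts with an identifier */
  & div { color: red; }     /* fixed by adding a prefix... */
  :is(div) { color: red; }  /* ...or by the pseudo-class workaround being called a hack */
}
```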
I also have a process reason to object. The path to getting us where we are has been to narrow down a broad set of possible solutions. I believe that each decision along that path was made in good faith and with good intentions, however, not every one of those decisions was made with a complete understanding of the big picture. Specifically, the language restrictions were not made clear to many of the WG participants until very late in the process, and were not part of any of the public polls. Also, many approaches were ruled out based on opinions, without having fully explored the alternatives, so many of these decisions were made with incomplete information.
Furthermore, the last two times we made major decisions about nesting, it wasn't on the agenda. Adding last-minute discussions is fine, but it's not a good way to make major decisions. People did not have time to properly review information, and we have no idea how many people weren't present because they simply didn't know we were going to discuss this. For example, I looked at the agendas and almost didn't attend either meeting.
To this end, should lookahead prove to not be viable, I feel it is necessary to revisit the alternative approaches, taking the larger picture into account, and see if one of them isn't a better solution overall. I have heard too many times, "we already ruled that out" as an excuse to close down discussions that should have been had.
So, yes, if lookahead proves to not be viable, and the WG resolves to ship the current syntax as-is, without re-opening discussions on alternatives, presenting all the possible options with a list of all of their costs and benefits, I will file an FO.
Given these explanations, I hope it's obvious that my threat of an FO isn't "stop energy" or me simply trying to bully the group into accepting my preferred solution because I dislike where we are. I genuinely feel that our normal process and work mode has broken down and we need to take a breath and reassess.
There are several paths forward that will not result in me filing an FO.
1) We simply wait a few weeks and get real-world data about the feasibility of adding lookahead. Should it prove viable, we're done (modulo the orthogonal issues which I hope we can resolve in the meanwhile). This is my preferred approach.
2) Should the timeframe to evaluate lookahead prove to take "too long" (with some justification of what that means), we can ship a limited version of nesting without the contentious feature. To this end there have been a few proposals (note that the last two of these are compatible with being morphed into other approaches by making the prefix optional):
a) Require a `&` somewhere in the selector. I feel this is a non-starter because it requires lookahead.
b) Require a `&` prefix on all nested rules and no other use of `&` within the rule. I'm not thrilled about this, but it wouldn't trigger an FO on my part. It can be fixed later if needed.
c) Require some other prefix, like `@nest` or simply `@`. This would be my preferred interim approach as the prefix can be made optional if lookahead ships later, and avoids the other issues if not. I really don't see `@ div` as any worse than `& div`.
3) Should lookahead prove non-viable, we reassess and make a proper consensus decision, reevaluating alternative approaches. If we do this, and everyone acts in good faith, and we still come back to the current version of option 3, I will not file an FO.
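Roughly, the interim forms in 2b and 2c would look like this (hypothetical `.card` example):

```css
/* 2b: a & prefix required on every nested rule */
.card {
  & div { color: red; }
}

/* 2c: an at-rule prefix such as @nest */
.card {
  @nest div { color: red; }
}
```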
So I'm adding an Agenda+ label and request we take up the following on a near-term call:
1) Rescind the resolution made on January 11. I argued against that resolution at the time and still believe it serves no purpose other than to act as fuel for bad-faith arguments and shenanigans like this.
2) Resolve to wait for real data about the viability of lookahead before making decisions to adopt a compromise syntax.
I also want to make clear that the TAG has begun a review of Nesting. The TAG has had discussions, but due to scheduling constraints have not yet had a quorum to make a consensus decision. However, all of the TAG members who have discussed this so far are in agreement that the best path forward is to explore lookahead. There also has been no dissent on the viewpoint that the current approach is an author foot-gun and some kind of mandatory prefix, or alternative scope for nested rules, would be a superior approach for authors.
So in addition to my FO, be aware that it's likely the TAG will not have a favorable review of the design as it stands.
Chrome is shipping nesting in version 112 in ±3 weeks. Are there any updates on this issue, the research in Gecko, ...
Chrome is shipping nesting in ±2 weeks. Any chance of getting an update on this?
@plinss given https://github.com/w3c/csswg-drafts/issues/7961#issuecomment-1489883575 and the upcoming breakout session on nesting next week, can we reduce the scope of this issue before the call? I’d like to hash out as much as we can here in the issue ahead of time.
I'll let @plinss make the final say, but as far as I can tell the objections raised in the original comment are totally resolved by #7961. (This is the "Option 1" listed in the original comment.)
I can't say #7961 'totally' resolves my objections. But it does remove many of my issues with option 3, and I hope that option 3 is now formally off the table.
The look-ahead approach, while much better than option 3, still has some issues, namely:
- Requires parser changes - these have bitten us in the past more often than not and must not be taken lightly.
- Still has some restrictions on future syntax, they are much smaller than option 3, but they exist.
- Has issues with error recovery. Since declaration recovery and rule recovery differ, unrecognized rules in declaration blocks will eat the following declaration. This will surprise authors.
- We still need to define feature detection somehow.
- The syntax, while SASS-like, doesn't have quite the same behavior.
While none of these are show-stoppers in and of themselves, we still have the alternative of using an at-rule (several of the proposals so far can even be supported in parallel). The advantages of that approach are:
The disadvantages are:
I still want the WG to take a step back and consider both approaches with fresh eyes, evaluating all the advantages and disadvantages of each. Provided we do that and come to a clear conclusion, I won't object to either path.
FWIW, my vote is still for an at-rule approach. It's not as pretty, but it's safe and easy to explain with no surprises down the road.
Also, regarding the lookahead approach, I just have to say: https://www.youtube.com/watch?v=xSTN3mHEAOA
@plinss while I have no love for the warts in option 3, I don’t think we are in a situation where we can take it off the table. I think it is much more likely that we have to accept the reality that it shipped (too hastily, before we had clear answers for feature detection among other issues, and I think this will need its own section on the CSS Mistakes wiki) and that the restart look-ahead plan merely allows us to improve the syntax in the near future.
You said above that
I'm also fine with shipping option 3 as-is, as soon as we know for a fact that look-ahead is viable, as an interim step until full lookahead can be implemented, because all the down-sides are temporary. (I personally don't think it would be worth shipping, but I wouldn't object to it.)
Are you now walking that back?
No, I'm not walking that back. The point of that remark was that if we found out that look-ahead was viable, but would take time to implement, then I'd be ok with option 3 as a short-term, interim step.
Now that we not only know that look-ahead is viable, but we already have an implementation of it, I see no reason for option 3 to remain in the spec. The fact that vendors have actually prematurely shipped a non-standard implementation is irrelevant. There's no reason to spec option 3 behavior, their existing implementations can remain non-standard.
So, yeah, I believe we can, and should take option 3 off the table and remove it from the spec now, one way or another. It can be replaced with either the look-ahead approach or an at-rule based approach pending the above mentioned discussion we need to have.
I agree, we should just switch the spec to the new strategy. I don't see any reason to keep Option 3 in there; it'll just be a speedbump in a few browsers' releases, like many other APIs that had some minor changes after their initial shipping.
The CSS Working Group just discussed Unknown.
@jensimmons on the call you expressed concern about the "just switch the spec to the new strategy" option, preferring to keep option 3 in level 1 and creating a new level for the lookahead version. I am wondering what we would gain from that.
As far as I understand, the additional syntax requirements in option 3 would still work with the look-ahead approach (they would just no longer be required). So an option 3 implementation would still conform, just not be complete. This seems good to me - ending up with a specification that addresses more concerns than the current draft spec does.
We do at times have to spec a reverse-engineered version of what is required for web compatibility, but this should be a last resort. I do not think we are at that point yet with this very new feature.
Could you go into more detail about your concerns here in the issue before next week’s breakout?
Has issues with error recovery. Since declaration recovery and rule recovery differ, unrecognized rules in declaration blocks will eat the following declaration. This will surprise authors.
This is not the case. Unrecognized rules will stop at the end of the rule (the `}`), so a following declaration (or rule) will be parsed as normal.
This is not the case. Unrecognized rules will stop at the end of the rule (the }), so a following declaration (or rule) will be parsed as normal.
That might be true in a browser that implements nesting, and manages to detect the rule as a rule and not fall back to treating it as a declaration.
Try this in a browser that doesn't support nesting:
```html
<style>
p {
  color: red;
  span { color: yellow; }
  color: green;
}
</style>
<p>This should be GREEN</p>
```
or this in a browser that does:
```html
<style>
p {
  color: red;
  custom-element:unknown { color: yellow; }
  color: green;
}
</style>
<p>This should be GREEN</p>
```
(and if that works, imagine `custom-element` conflicts with a future property name)
Try this in a browser that doesn't support nesting:
Yes, your stylesheet will be incredibly broken if you try and use Nesting in an older browser. Eating the following declaration is the least of your worries, large portions of your sheet simply won't parse.
or this in a browser that does:
Yeah that's fine, it'll parse the rule and stop. (In older browsers it'll eat the next declaration.)
If the element name was a valid property name, that's also fine. We'll try to parse it as a declaration, declare it invalid, then restart and parse as a rule, stopping at the correct place.
The only problem is if we somehow create a property whose name+value looks like a valid rule followed by a different declaration. This is trivial to avoid with a completely reasonable design restriction (if we ever allow a top-level {} in a property value, only allow it at the whole property value; if we're careful with the grammar we can allow stuff before it), but even without that restriction it's honestly hard to imagine a syntax we could ever design that would actually confuse the parser here.
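A sketch of the kind of collision being ruled out, using an entirely hypothetical property name (`toggle` is not a real property):

```css
.card {
  /* A hypothetical future value containing a top-level {} after other
     tokens: the prefix "toggle: checked" also reads as the selector
     "toggle:checked", so the whole thing could be mistaken for a nested
     rule. Restricting a top-level {} to being the entire property value
     is the design rule that avoids this shape ever existing. */
  toggle: checked { duration: 1s };
}
```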
The one case we have to be careful is with custom properties, since we can't control what they're doing, but that's also fine in practice since custom elements can't have dashed-ident names.
I documented all of this in https://github.com/w3c/csswg-drafts/issues/7961#issuecomment-1490495722. The strategy we're advocating here is very robust, actually.
Yes, your stylesheet will be incredibly broken if you try and use Nesting in an older browser
This is likely true if you blindly refactor an existing stylesheet to use nesting, or try to simply serve an existing SASS stylesheet. However, this 'all-or-nothing' attitude is not necessary and breaks with pretty much every other CSS feature ever added. I'm not saying this is an absolute deal-breaker, but it's not something we should YOLO and take lightly.
There are several situations (think small stylesheets in custom elements) where additional nested rules could absolutely be considered progressive enhancement. We are doing a disservice to users if we ignore those use cases.
The only problem is... / The one case we have to be careful is...
You're making my point for me. I accept that the risk is small, but it's not zero. If we use an at-rule, the risk actually is zero. Yes, we should factor the risk against the likelihood, but even a small percentage is still a lot of web pages as you well know. Furthermore, any risk of conflict is a risk that will narrow future language design choices. My point is that this is a factor and a decision the group should carefully consider, not just blow off because one person thinks it's no big deal.
This issue (like the others) is likely small enough we can consider it a paper-cut, but paper-cuts add up and eventually the bleeding becomes significant. I'm fine with the WG as a whole deciding to accept these issues, I'm not fine with being saddled with these issues without even a rational group discussion.
If we had no alternatives, yeah, we should just proceed with look-ahead, but we do have alternatives. Let's give them a fair trial and not just ignore them because some people slightly preferred something else way back when before we fully understood all the costs/benefits. That's fair, no?
This is likely true if you blindly refactor an existing stylesheet to use nesting, or try to simply serve an existing SASS stylesheet. However, this 'all-or-nothing' attitude is not necessary and breaks with pretty much every other CSS feature ever added. I'm not saying this is an absolute deal-breaker, but it's not something we should YOLO and take lightly.
We've done it several times, recently: @layer, @supports, etc. CSS's forward-compatible parsing helps us deal with individual properties upgrading their values over time, and independent at-rules being added to the language, but it fundamentally cannot deal with a new wrapping context. Every single time we add something like this, it means new stylesheets will be inherently broken in old browsers.
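For instance, a conditional group rule is such a wrapping context: in a browser predating `@supports`, everything inside the block is dropped wholesale, including rules that would otherwise parse fine:

```css
/* A pre-@supports browser ignores this entire block, not just the
   condition; the .card rule inside never applies there. */
@supports (display: grid) {
  .card { display: grid; }
}
```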
(Note tho that this has nothing to do with the precise syntax we choose. Any new syntax for style rules, whether we spell it as a plain rule, as a `@nest`, or something else, means all such rules will be ignored in older browsers, and your stylesheets will be broken. `.foo { color: red; @nest .bar {...} background: blue; }` is also broken in exactly the same way; `@nest .foo { color: red; .bar {...} background: blue; }` is broken even worse since the whole rule gets discarded. There is no reasonable syntax that doesn't significantly break your stylesheet in an older parser.)
Also, if an author does want to limit the theoretical damage, they can either (a) put their properties before the nested rules, as the spec recommends, or (b) put a `;` after their rules (the spec doesn't recommend this because it's completely unnecessary if you just put your properties first).
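The two mitigations look like this (hypothetical selectors; the trailing-semicolon behavior is as I understand declaration error recovery in older parsers):

```css
/* (a) properties before nested rules, per the spec's recommendation;
   an old browser drops only the nested rule and what follows it */
.foo {
  color: red;
  background: blue;
  .bar { color: green; }
}

/* (b) a semicolon after the nested rule ends the old browser's
   declaration error recovery, shielding the next declaration */
.foo {
  color: red;
  .bar { color: green; };
  background: blue;
}
```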
There are several situations (think small stylesheets in custom elements) where additional nested rules could absolutely be considered progressive enhancement. We are doing a disservice to users if we ignore those use cases.
I do not understand how this could be the case. Can you give an example of a stylesheet where an author might add nested styles that they are okay with not applying? Why would they write such a stylesheet rather than writing them as unnested rules? Assuming they have such a reason, why would they, specifically, write these with the unnested properties after the nested rules? (If they put the nested rules after, as the spec recommends and common practice follows, then only the nested rules are dropped; the properties in the top-level rule parse as normal.)
My point is that this is a factor and a decision the group should carefully consider, not just blow off because one person thinks it's no big deal.
We did consider it, a year ago. The entire WG discussed it, we resolved that Option 1 was fine with its design constraints, then later that we preferred Option 3's even looser syntax, with attendant slightly higher design constraints. Now we have an even better approach that is both more flexible for authors and less restrictive on future design. Assuming that our previous resolutions weren't an affirmative vote for heavier design restrictions, the previous resolutions hold here; this is a better result that would have satisfied the past WG as well, had we known it was possible at the time.
This WG's work cannot be indefinitely held up by endlessly relitigating existing resolutions without new data. Continuing to imply that this is me, personally, ramming things thru the WG, rather than the entire WG reaching consensus (with a threatened objection) is both incorrect and offensive.
`.foo { color: red; @nest .bar {...} background: blue; }`
That rule is not broken; old browsers see the `@nest` as a rule and accept the background property. Try it.
Also, if an author does want to limit the theoretical damage, they can either (a) put their properties before the nested rules, as the spec recommends
Seriously, how many authors actually read specs? There are all sorts of best practices listed in specs that hardly anyone uses. They mostly cargo-cult things they see elsewhere and there have been tons of common practices in the past that were exactly the wrong thing to do.
It's not about whether knowledgeable authors can limit the damage, it's about the rest of the authors that will be surprised by the unexpected behavior.
Can you give an example of a stylesheet where an author might add nested styles that they are okay with not applying?
```css
div[this=that].something:open {
  span { box-shadow: 5px 5px 5px #888; } /* This is decorative and I don't care if this applies or not, but I don't want to repeat the long parent selector because people said DRY and nesting is cool */
  background: black; /* I didn't read the spec and I like to put nested rules first because I find it easier to read */
  color: white; /* surprise! white on white text for several billion users on cheap devices that can't upgrade yet that I never considered or tested on */
  ...
}
```
both more flexible for authors and less restrictive on future design
The look-ahead approach is not more flexible than an at-rule, just slightly less typing, and an at-rule is less restrictive, see above.
This WG's work cannot be indefinitely held up by endlessly relitigating existing resolutions without new data
My point is that there is new data. Several of the points I've brought up recently about language design restrictions and error recovery came as a surprise to several members on recent calls, including those who participated in past decisions. That merits a re-evaluation. Several of the early decisions on syntax were mostly based on preferences and did not weigh all the consequences, because they weren't all well known by those expressing their preference.
Also, we can spend far more time debating whether we should be debating this, than simply laying out all the options, the plusses and minuses of each, and just making a call (and not considering inertia or 'shipped' implementations which were never signed off on by the WG). So let's just do that. I'm willing to live with the outcome of that, whatever it is, are you?
Continuing to imply that this is me, personally, ramming things thru the WG, ...
I'm not intending to imply that it's you personally ramming things through the WG, apologies if I came across that way. However several people have been trying to force resolutions on nesting without proper notice or time for discussion. Several decisions have been rushed and several times people were unwilling to revisit past decisions even after new information was brought forward. I'm not the only one who's observed 'go-fever'.
You have been expressing your personal opinions about the relative importance of issues, which is totally fine, but those don't necessarily represent WG consensus (neither do mine) so can't settle a disagreement on their own. I accept I wasn't part of every historical conversation about nesting, so if I'm interpreting something as a personal opinion where there is WG consensus, please feel free to point me to the minutes and resolutions.
@plinss after reading the recent developments in this thread, I'm a little confused about what you still see as an issue with the currently proposed syntax, apologies if I have missed something. When you were pushing back against option 3 and urging implementers to really research backtracking, I supported this and opened #7961 with a proposed algorithm to minimize perf impact, which served as a starting point, @tabatkins improved on it further, and eventually that discussion resulted in backtracking proving viable, thanks to the incredible efforts of @andruud et al, and we all rejoiced.
But at this point, I just …don't understand what the further pushback is for, or what you are proposing. IMO the current algorithm is better than we could have ever hoped, it's easy to understand for authors, concise to type, fully compatible with `@scope`, and as compatible with Sass as possible without introducing undesirable warts into the language. Since this is Agenda+, could you please summarize your concerns and your proposed solution to them? Thank you!
@plinss after reading the recent developments in this thread, I'm a little confused about what you still see as an issue with the currently proposed syntax, apologies if I have missed something.
Ok, let me try to be clear. I'm very grateful for all the work that went into the current look-ahead proposal. I believe it's a large improvement over our previous option 3, and I don't see any room for improvement at the moment. I'm prepared to live with it.
However, it still does have issues, see my list above. These are far less severe than option 3's issues, but remain issues.
We also have the option of an alternate approach, using an at-rule, in one (or even several) of the previously proposed forms. That approach eliminates all the issues with the look-ahead approach at the cost of a small amount of typing for authors.
All I'm asking for at this point is a sanity check. If we pull the trigger on the look-ahead approach, there are ramifications that we're going to have to live with for a very long time. It is a parser change at the end of the day, and as I have said previously, those always prove dangerous.
I want the WG to take a beat, look at all the options, consider all the advantages and disadvantages of each, and make a final consensus-driven call, rather than just move ahead due to inertia and exhaustion.
@plinss Thank you for clarifying. I seem to recall several times when the WG debated these @rule-based syntaxes, and we always resolved against them, even when the alternative was option 3, which is significantly worse than what is currently possible. Why do you think another discussion will resolve differently, now that the alternative is better? Is there new data that might contribute to a different decision?
I'm not asserting that the WG will resolve in favor of an at-rule based approach, or even seriously trying to bring about that outcome (though I do currently favor it slightly). I did propose the look-ahead approach after all, and fought hard to make it happen, so I am obviously fine with it.
My concern is that a non-at-rule based approach has long-term ramifications, there are language constraints and author foot-guns still (though much smaller ones than we had before). And I'm not convinced that all the WG members fully understood those during the previous decisions. As I said above, when I mentioned some of them in calls, there was surprise, so it's obviously new data to some that may have changed the prior outcomes, and is therefore cause to at least re-confirm the prior decisions in light of this new data.
I really just want affirmative consent from the WG, given the current understanding of all the costs, before we continue down the current path. When judging those costs, I feel it's essential to consider them in light of an alternative approach, because this isn't a case of 'look-ahead or nothing': we do have another option and nothing is stopping us from taking it except ourselves.
At the end of the day, if we do an at-rule based approach now, we can always add the look-ahead approach later and make the at-rule optional. (I'd also feel a lot more comfortable making that call later when things have settled down and emotions and stakes aren't running as high as they are now, making decisions under pressure tends to run towards bad outcomes.) Once look-ahead ships (for real, with WG consent), we can never walk it back if we later find the costs are too high and will be stuck with them for the foreseeable future.
As I said, once again, parser changes are dangerous, we need to treat them with care. While an at-rule solution won't make as many people happy today, it's a lot safer, won't potentially cause problems down the road, and doesn't preclude us taking the next step later.
FWIW, I also do not think an at-rule approach has any future. The fact that it was rejected in the working group several times is not a cause of this, but just a symptom of the real cause. The whole point of the nesting feature is to be easier to type and maintain than repetition, so adding an extra indentation level and a sizeable prefix will get strong opposition from authors, because it defeats the point of the feature. They will continue to use a preprocessor like SASS or LESS or SCSS to compile their code because it will offer a better alternative. If we do not provide authors with a solution at least as elegant as these engines, we might as well not provide anything.
The look-ahead approach, while much better than option 3, still has some issues, namely:
- Requires parser changes - these have bitten us in the past more often than not and must not be taken lightly.
I would argue that the parser changes proposed here are marginal, and will not affect current authors at all.
- Still has some restrictions on future syntax, they are much smaller than option 3, but they exist.
Not being able to add a top-level bracket in a CSS property value in a position other than the first. This is not a real restriction; we would not consider this at all. I would probably argue that we should never introduce top-level brackets in CSS property values; this is un-CSS and confusing. We should at the very least always wrap that in a function.
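For example (both the property and the function below are entirely hypothetical), the "wrap it in a function" shape avoids a bare top-level block in the value:

```css
.card {
  /* instead of a hypothetical bare block mid-value, e.g.
     thing: first { a: 1 }; ...a function keeps the value
     unambiguous to the parser: */
  thing: first block(a 1);
}
```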
- Has issues with error recovery. Since declaration recovery and rule recovery differ, unrecognized rules in declaration blocks will eat the following declaration. This will surprise authors.
Do you have an example?
- We still need to define feature detection somehow.
I agree, but this is orthogonal. There is currently no way to detect the support of an at-rule either.
- The syntax, while SASS-like, doesn't have quite the same behavior.
I am not that familiar with SASS, but can someone explain what those differences are?
The whole point of the nesting feature is to be easier to type and maintain than repetition, so adding an extra indentation level and a sizeable prefix will meet strong opposition from authors, because it defeats the point of the feature
The repetition we save is in the selector list, which can be huge; that doesn't change with an at-rule or any other required prefix. Adding an `&` wasn't seen as an unacceptable burden, so I don't see why 4 extra characters would make it so. Extra indentation isn't necessary (it may be desired in some cases) but hardly defeats the point of nesting; that's a bit hyperbolic. Also, `@nest` isn't all that large as far as at-rule names go; we could even make it shorter if 5 characters is an untenable burden, at the cost of readability. (I wish we could use a bare `@`, but older browsers don't treat that as an at-rule.)
There are also multiple forms an at-rule approach can take, and as I said, we can support several or all of them simultaneously.
For example:
1) `@nest [selector-list] { [declaration-block] }`, allowed inside declaration blocks. No extra indenting.

```css
div {
  background: blue;
  @nest span {
    background: green;
  }
  @nest b {
    background: yellow;
  }
}
```
2) `@nest [selector-list] { [list-of-nested-rules-only] }`, allowed at top level only. Adds potential indenting for primary rule declarations.

```css
@nest div {
  {
    background: blue;
  }
  span {
    background: green;
  }
  b {
    background: yellow;
  }
}
```
3) `@nest { [list-of-nested-rules-only] }`, allowed inside declaration blocks only. Adds potential indenting for nested rules.

```css
div {
  background: blue;
  @nest {
    span {
      background: green;
    }
    b {
      background: yellow;
    }
  }
}
```
All of the above provide author flexibility and avoid the mixed rules and declarations problem.
I would argue that the parser changes proposed here are marginal, and will not affect current authors at all.
The overall concern about parser changes isn't about impact to authors, it's about impact to the language. The real risk is always things we're not foreseeing now that will come around and bite us later when we try to add something new.
Do you have an example?
See above and several other conversations about how declaration error recovery and rule error recovery differ. By putting rules in a place where declarations are expected, we run the risk of using the wrong recovery mode, which can eat valid following declarations and lead to site breakage in ways that will surprise authors.
The primary benefit of an at-rule approach is that all browsers that ever supported CSS drop into rule recovery mode when they see an at-rule in a declaration block, so there are no side-effects or surprises to authors. It's been the extension mechanism from day 1, and we're proposing to not use it.
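To make the recovery difference concrete, here is a sketch of how a pre-nesting browser handles each form (the `@nest` syntax is the proposal under discussion, not shipped CSS):

```css
/* Look-ahead syntax in a pre-nesting browser: the unknown construct is
   treated as an invalid declaration, and declaration error recovery skips
   ahead to the next top-level semicolon — which is the one AFTER
   "color: blue", so that valid declaration is silently eaten and the div
   stays red instead of turning blue. */
div {
  color: red;
  span:hover { color: green; }
  color: blue;
}

/* At-rule syntax in the same browser: seeing the at-keyword switches the
   parser into rule recovery, which discards exactly the at-rule and its
   {} block. "color: blue" survives and applies as expected. */
div {
  color: red;
  @nest span:hover { color: green; }
  color: blue;
}
```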
I agree, but this is orthogonal. There is currently no way to detect the support of an at-rule either.
It's orthogonal, but shipping nesting without feature detection makes the nesting feature useless for anyone who cares about supporting older browsers, especially during the transition phase as the feature rolls out and isn't universally available (we really don't want to encourage 'best viewed in X' again, do we?). Authors will either ship only nested stylesheets and break their sites for older browsers; ship both, making the user pay unnecessary download costs (violating one of the TAG's ethical principles); or just use a transpiler and ship un-nested stylesheets like they already do today, and we don't need to bother with the feature.
`@supports (@nest)` or `@supports at-rule(nest)` would be trivial to add and obvious to authors. We currently don't have a good answer for how to feature-detect the look-ahead version. So this is a factor to consider.
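For concreteness, a sketch of what that could look like (hypothetical syntax — the `at-rule()` form is not specified at the time of this discussion):

```css
/* Hypothetical feature detection for an at-rule-based nesting syntax.
   Older browsers that don't understand the condition skip the whole block. */
@supports at-rule(nest) {
  main {
    color: black;
    @nest a {
      color: blue;
    }
  }
}
```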
Authors will either ship only nested stylesheets and break their sites for older browsers; ship both, making the user pay unnecessary download costs (violating one of the TAG's ethical principles); or just use a transpiler and ship un-nested stylesheets like they already do today, and we don't need to bother with the feature.
Even if you could detect support client-side with `@supports (@nest)`, it does not change this equilibrium. You either ship the code twice, or you don't ship either version at all. Or you ship the legacy one in an `@import`, which makes legacy browsers download more and have more latency. There is no way to win here.
But this is nothing new. JavaScript has had breaking syntactic changes over the years. And, yes, for a while, you have to rely on UA sniffing to decide whether to ship the (shorter) modern code or its desugared version. Authors have been successfully doing this for years now. There is little downside to getting this wrong, too: your site will work in both modes, so at worst you are missing out on saving some network bytes in a browser that supports nesting but that you weren't aware of.
At the current rate at which browsers ship, in less than 5 years, 99.9% of browsers will support nesting natively and you will no longer need the transpiler at all. Just like everyone feels free to use ES5 today.
With my chair hat off, I am not convinced that we have enough new information to reconsider a wide range of previous resolutions on nesting. The main new information we have is that look-ahead is viable, which to my mind only strengthens the option 1->3->look-ahead evolution. It was consistently preferred (flaws and all) over other proposals, and the look-ahead improvement is likely to increase that preference.
My impression is that we have discussed error recovery issues, restrictions on future syntax, differences between our nesting and SASS, and the perils of changing parsing. Aside from how long it took to get someone to investigate parsing changes, I think the current proposal has been adequately defended against those challenges. I am not aware of new information on any of these that would change my mind, at least.
I am concerned about adequate feature detection, and I am still unclear how authors are meant to use the current proposal in projects that support pre-nesting browsers. But I believe this is a problem for all of the alternatives I have seen.
All this to say I don’t yet see enough evidence in this issue that we should change course. That’s just my personal opinion, and of course I welcome attempts to change my mind.
With my chair hat back on, I am going to move the question of whether the current draft should be updated to the look-ahead-enabled syntax. Since both of those options do mix properties and selectors, that question is I think separable from this issue. I have added a comment to 7961 which seems like the right place for that resolution.
My impression is that we have discussed error recovery issues
If that's the consensus, then so be it, I'll stop complaining, but before we get there, let me point to evidence that error recovery at least hasn't been fully considered in past decisions. See in this thread alone: @FremyCompany asking for examples of the error recovery problem (indicating he's not intrinsically familiar with the issue), yet expressing a strong bias against at-rules; Tab (who I know fully understands the issue) and myself both still getting error recovery behavior wrong during conversations; several people expressing surprise during relatively recent calls about the error recovery and syntax restriction issues (well after at-rules were rejected).
My argument is that sufficient evidence exists that all the factors weren't taken into account during previous decisions to warrant at least a 5-minute overview of the relative costs and benefits of the two approaches and a simple sanity-check resolution, confirming we all know what we're buying into. Once again, I'm not advocating against adopting the look-ahead behavior, I just want us to do it with our eyes open and decisions made properly.
If the choice were simply a mandatory `@nest` prefix or not, with no other issues or consequences, I wouldn't even be raising this; of course we wouldn't require the prefix. But the choice is between look-ahead, which has other costs and issues, and `@nest`, which is slightly more verbose but has no other issues.
And, yes, for a while, you have to rely on UA sniffing to decide whether to ship the (shorter) modern code or its desugared version.
You're not seriously suggesting UA sniffing is the proper way to deploy this feature, are you? I'm somehow thinking that's not going to pass TAG review.
One could argue client hints would be appropriate, but then work needs to be done to specify and implement that.
And, yes, for a while, you have to rely on UA sniffing to decide whether to ship the (shorter) modern code or its desugared version.
I suspect that rather than UA sniffing, authors will just wait a bit until Nesting is supported in a reasonable percentage of browsers, then detect it client side and load an additional stylesheet only for those browsers that don't support Nesting. I believe the styling you get from a stylesheet that uses Nesting in an older browser is a proper subset of all the rules, so that would be a largely smooth progression, it would just involve 2x the download for older browsers, which is a commonly acceptable tradeoff once the set of nonsupporting browsers for a feature becomes small enough. In fact, build tools may even evolve to only ship the nested rules rewritten, so they could work as supplementary and not load the root rules twice (though this may have different results due to rules being out of order, but perhaps the tools would be smart enough to detect that).
I don't want to take this thread even further off topic, because the focus here isn't how authors will deal with the transition period, but there are too many inaccuracies here :)
https://github.com/w3c/csswg-drafts/issues/8249#issuecomment-1501062075
But this is nothing new. JavaScript has had breaking syntactic changes over the years. And, yes, for a while, you have to rely on UA sniffing to decide whether to ship the (shorter) modern code or its desugared version. Authors have been successfully doing this for years now.
This isn't exactly true. Very few people roll their own UA sniffing. Most ship a single bundle targeted at the oldest browsers they need to support.
Others rely on services like polyfill.io which abstract away the UA sniffing. UA sniffing is a very complex problem to solve because browsers lie, have bugs and because of recent privacy preserving changes.
https://github.com/w3c/csswg-drafts/issues/8249#issuecomment-1501062075
At the current rate at which browsers ship, in less than 5 years, 99.9% of browsers will support nesting natively and you will no longer need the transpiler at all. Just like everyone feels free to use ES5 today.
This is not true. Not even `flex` has 99.9% adoption, and that shipped 10 years ago.
https://github.com/w3c/csswg-drafts/issues/8249#issuecomment-1500997885
just use a transpiler and ship un-nested stylesheets like they already do today, and we don't need to bother with the feature.
Authors don't want to use tools for anything. If nesting never ships it will also never reach the point when tools become unneeded.
https://github.com/w3c/csswg-drafts/issues/8249#issuecomment-1501737869
I suspect that rather than UA sniffing, authors will just wait a bit until Nesting is supported in a reasonable percentage of browsers, then detect it client side and load an additional stylesheet only for those browsers that don't support Nesting. I believe the styling you get from a stylesheet that uses Nesting in an older browser is a proper subset of all the rules, so that would be a largely smooth progression, it would just involve 2x the download for older browsers, which is a commonly acceptable tradeoff once the set of nonsupporting browsers for a feature becomes small enough.
Authors typically do not like a change that will both slow down their project for end users and add to the complexity of their stack. Even when it only causes perf issues for users on older browsers.
How authors will deal with the transition period is not the issue imho. This is a solved problem.
Nesting is just syntactic sugar, so authors don't lose out on much by transpiling.
They will simply continue to transpile with existing tools and it will be rare for anyone to actually ship nested css. They will continue to do this for a long time because the benefits of transpiling will continue to outweigh the drawbacks.
This is not a bad thing. There simply already is a trivial way for authors to write standard/native css and support the older browser version that their project requires.
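For illustration, the transform such a tool performs is mechanical (hypothetical class names; a sketch of what a nesting desugarer would output, not any specific tool's behavior):

```css
/* Nested source an author writes: */
.card {
  color: black;
  & .title {
    font-weight: bold;
  }
}

/* Desugared output the transpiler ships, which works in every browser: */
.card {
  color: black;
}
.card .title {
  font-weight: bold;
}
```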
I feel we need a new thread to discuss how authors will handle the transition period, and what we can do to improve that experience.
But, some replies to @romainmenke above:
This is not true. Not even `flex` has 99.9% adoption and that shipped 10 years ago.
It has 99.23%, which is pretty close.
And authors do use Flex (and grid) without shipping alternatives. They don't generally need features to reach 99.9% support to use them without backup, anything over ~93% or so tends to be considered good enough for that in my experience.
(and let's not forget that even `font-size` only has 96% (!))
Authors don't want to use tools for anything. If nesting never ships it will also never reach the point when tools become unneeded.
I'm a bit confused at this. The rest of your comment seems to be making the point that authors will simply never use native Nesting and will just continue to preprocess until the end of time. Here you're saying authors don't want to use tools for anything. Will they, or won't they use tools to transpile Nesting?
Authors typically do not like a change that will both slow down their project for end users and add to the complexity of their stack. Even when it only causes perf issues for users on older browsers.
I disagree.
@LeaVerou I think the tone/intent of my comment got lost. I think we are saying similar things.
I feel we need a new thread to discuss how authors will handle the transition period, and what we can do to improve that experience.
Yes, that would be better than taking this further off topic :)
And authors do use Flex (and grid) without shipping alternatives. They don't generally need features to reach 99.9% support to use them without backup
The incorrect statement was that any feature would reach 99.9% in five years. If numbers are used to back things up, it is important that these are somewhat accurate.
I didn't state anything about what 99.9% or any other level of support means for CSS authors, only that 99.9% in five years will be extremely unlikely.
Authors don't want to use tools for anything. If nesting never ships it will also never reach the point when tools become unneeded.
I'm a bit confused at this. The rest of your comment seems to be making the point that authors will simply never use native Nesting and will just continue to preprocess until the end of time. Here you're saying authors don't want to use tools for anything. Will they, or won't they use tools to transpile Nesting?
This was in response to :
Authors will either ship only nested stylesheets and break their sites for older browsers; ship both, making the user pay unnecessary download costs (violating one of the TAG's ethical principles); or just use a transpiler and ship un-nested stylesheets like they already do today, and we don't need to bother with the feature.
Arguing that the tools required during the transition period make the entire feature redundant doesn't make sense, exactly for the reasons you listed.
Authors typically do not like a change that will both slow down their project for end users and add to the complexity of their stack. Even when it only causes perf issues for users on older browsers.
I should have elaborated a bit more here.
If we imagine that your proposal is supported by tooling, authors will have two choices:
Option 1 doesn't require CSS authors to make any other changes to their project. Simply pass the stylesheet to whatever tool can desugar nesting, and what comes out will work everywhere. After the transition period the transpiling tool can be disabled/uninstalled.
Option 2 requires CSS Authors to:
Option 1 is simpler, and given that nesting is purely syntactic sugar, there are few drawbacks to transpiling during the transition period.
Most transpiling tools make it trivial to skip certain transforms during active/local development, which gives the benefits of rapid prototyping as you mentioned.
If this were true, nobody would be using polyfills, which are regularly much slower than writing code without the new technology that is being polyfilled in the first place. With polyfills one typically exchanges speed in modern browsers and codebase simplicity for slowness in older browsers, and this is a tradeoff I've seen authors make over and over.
The choices I am weighing are:
When the second option is also more complex to set up, it doesn't make much sense to go down that road purely for the transition period.
Some authors might chose to do so, and that is fine.
My argument was not that other strategies are invalid, only that a simpler alternative already exists : transpile for everyone for as long as required for a given project.
When I mention `transpile` or `transpiler`, I am never talking about Sass, only about tools that aim to desugar standard CSS nesting.
Perhaps we can move the transition discussion to https://github.com/w3c/csswg-drafts/issues/8399? Or should it be a completely new issue?
The CSS Working Group just discussed [css-nesting] Problem with mixing properties and selectors
, and agreed to the following:
RESOLVED: Close no change
Summarizing the resolution and discussion:
There are several minor issues with a SASS-style syntax for nesting we have identified so far. They include
We had consensus in the breakout session that even considering all of these issues together, we still plan on pursuing the SASS-style syntax. Working on solutions to these issues (where possible) will happen in separate issues.
But if there is anyone that was not in the breakout session who finds this set of issues a compelling reason to change course and NOT pursue SASS-style syntax, please do speak up here. And when new issues arise, we should take a moment to add them to what remains in this list as a checkpoint to reconsider whether the mass of minor issues has become too large.
I wasn't in the session. I think these issues are concerning, and I prefer the so-called options 1 or 4 which don't seem to have these problems. But I won't object to the current thing.
BTW, about "Using SASS-like syntax, but not matching some SASS behavior", have the SASS developers said anything about this? They have requested CSS changes in the past when there was a clash, like renaming `@if` to `@when`.
BTW, about "Using SASS-like syntax, but not matching some SASS behavior", have the SASS developers said anything about this? They have requested CSS changes in the past when there was a clash, like renaming `@if` to `@when`.
@mirisuzanne was on the call and didn't raise concerns regarding this, as far as I can tell. Though I may be missing earlier comments in which she expressed any concerns.
Sebastian
Two clarifications:
1) The cascading behavior is an issue for either an at-rule or the SASS-style approach, so it shouldn't be part of the consideration here (I think). All of the other issues go away entirely with an at-rule approach.
2) The proposal on the table wasn't to "NOT pursue SASS-style syntax" anymore, it was to not ship SASS-style syntax now. We have the option to ship an at-rule syntax now, continue to refine the SASS-style approach over time, and then ship that when we feel it's ready. That effectively just makes the `@nest` prefix optional later (authors who prefer the explicit at-rule could still use it).
(And for those not on the call, I don't think any of the issues above individually are show-stoppers, it's just when you consider the complete list of the issues vs the advantages is when I start to have concerns. As with @Loirooriol above, it's not enough for me to object, but it was enough to go hmmm and ask the question.)
I wasn't in the call (got stuck in traffic) but I personally still think we are on the right track, and I'm not concerned about most of these issues.
However, I would also be fine with a syntax that mandates that declarations precede rules, as I think that's good authoring practice in general (and it would make the backtracking needed even more efficient, as it only needs to happen at most once per rule). We decided against an author-facing parser switch (i.e. syntax authors would have to use to invoke the other mode), but an automatic parser switch managed by the browser as it encounters syntax would be fine by me, and seems very easy to teach. It is also very compatible with allowing them to be intermixed in the future. Reading the minutes, @jensimmons brought up a concern that this would restrict use cases, but did not elaborate further (or wasn't minuted doing so):
jensimmons: I don't like the idea of requiring that declarations go before nested rules ... like plinss says, people jam CSS into things all over the place ... it will make things less useful
I was wondering what cases you had in mind?
Also, could someone explain this further?
Unintuitive cascading behavior when declarations come after rules
Is this that in cases like these:
```css
.foo {
  color: red;
  & {
    color: green;
  }
  color: blue;
}
```
You'd expect the color to be blue, but it's actually green?
@LeaVerou A discussion about "a syntax that mandates that declarations have to precede rules" needs to go in a new issue. This issue is only about whether/not we should stop moving forward with the current direction, and change course. Which we decided today on the call, no.
OK, here’s where I think we should move subtopics from this issue:
Issues with error-recovery -> https://github.com/w3c/csswg-drafts/issues/8349
Restrictions on future-syntax -> https://github.com/w3c/csswg-drafts/issues/8251
Using SASS-like syntax, but not matching some SASS behavior This has been discussed in a few issues like 2937 and 3748, where we have decided for the most part not to worry about this. If anyone is still concerned with the differences, please open a new issue.
Parsing changes are risky -> https://github.com/w3c/csswg-drafts/issues/7961
Unintuitive cascading behavior when declarations come after rules We have gone back and forth a few times on whether we should allow declarations after rules, and if we do how do we handle them (in the cascade and in CSSOM). I have not found a single issue that seems like an apt place to continue this discussion, so I think we should have a new issue on this.
Closing this issue, as it's been split into subtopics and generally resolved at this point.
(Hm, we don't have a great label for this, so I'm gonna mark it as invalid, as I think that's closest to "no longer contains a relevant issue needing to be addressed".)
As mentioned I'm opening a new issue to outline the objections I raised in the call today and to discuss paths forward.
My primary objection to the current syntax is that mixing properties and selectors without a clear distinction places unacceptable (to me) restrictions on the future design of the CSS language.
An implementation parsing a style declaration will need to presume anything that doesn't start with an identifier or function is a selector, which restricts our ability to ever extend property declarations with anything that doesn't start with an identifier or function. Furthermore this restricts our ability to ever extend the definition of an identifier or a function. As an example, if this were implemented first, we could never have added custom properties with the current syntax (which redefined identifiers).
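A sketch of that example: before custom properties existed, `--accent` did not match the identifier grammar, so a parser built on the "anything not starting with an identifier or function is a selector" rule would have left no room for them:

```css
div {
  color: red;
  /* "--accent" was not a valid identifier pre-custom-properties, so a
     look-ahead nesting parser shipped earlier would have been forced to
     try parsing this line as a selector, closing off the custom-property
     syntax we use today. */
  --accent: blue;
}
```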
Alternatively, we could limit selector combinators to the current set plus a limited extension path, like `/<name>/`; this would place restrictions on future selector syntax and potentially add more confusion as to the rule of when a `&` or the `:is()` hack is required. Not a fan of this. I see two paths forward (and welcome other suggestions):
1) We remove the lookahead restrictions on the parser and 'simply' adopt the SASS syntax. The lookahead restriction came about 25 years ago when there were real concerns that CSS would be too slow to ever implement in a browser and everything was focused on performance. I'd like to see some experimentation and real-world data to check that assumption and see if advancements in processor speed and RAM availability allow us to relax that.
2) We add something that clearly distinguishes selectors from properties within a style declaration.
Something like:
is fine by me, but I accept that this has been proposed in the past and rejected.
A compromise I'd be OK with would be treating a bare `@` inside a declaration block as the equivalent of `@nest` (and possibly allowing `@nest` to be optional for those wanting clarity). This is functionally equivalent to requiring the `&` (which many people in the polls preferred), but also handles cases where the `&` isn't the start of the selector, without adding lookahead. This leverages the fact that `@` is already CSS's universal escape hatch and clearly distinguishes properties from selectors, allowing unrestricted extensions of either in the future. It also minimizes verbosity, as the majority of nested selectors can simply start with an `@`, and it requires no other changes or special rules to learn.