Closed emilio closed 4 months ago
We forbid declarations after nested rules
It seems potentially problematic: some people may have some garbage followed by lots of declarations, then at some point we add a feature that parses the garbage as a nested rule, and all those declarations stop applying. Spooky action at a distance.
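A hypothetical sketch of that hazard (the at-rule name here is invented, not a real proposal):

```css
.sidebar {
  /* Today this at-rule is invalid and gets dropped on its own: */
  @future-thing (foo) {
    opacity: 0.5;
  }
  /* ...so these declarations still apply. If @future-thing later became
     a valid nested rule, a "no declarations after rules" restriction
     would silently disable everything below it in old stylesheets. */
  color: black;
  padding: 1em;
}
```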
One of the main characteristics of option 3 (later modified with lookahead) is that it allows freely mixing declarations and nested rules. So at this point I don't think it makes sense to restrict that. The proper way to have such restriction would have been choosing option 4 or similar (which BTW seemed better to me).
Right, I argued in previous discussions that forbidding decls after rules is fundamentally problematic. The basic question is "when has a rule occurred?", and this needs to not be dependent on whether a rule is valid or not, as that would mean different browser levels would interpret following properties differently.
So you need to define some notion of when we've found a rule, that allows for invalid rules, and do so in a way that doesn't unduly restrict our future syntax options.
For example, if we ever allow `{}` in a property, and you write it invalidly so it triggers rule parsing, would that kick the "now there's a rule" switch? Does that mean we actually have to forbid ever using `{}` in properties?
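For instance, with an imagined block-valued property (both the property name and the block syntax here are invented, purely for illustration):

```css
.box {
  /* Hypothetical future syntax: a property whose value is a block. */
  motion-plan: { duration: 1s; easing: ease-out; };
  /* A browser without that syntax sees an invalid declaration and could
     fall back to rule parsing; if "a rule was seen" then forbade further
     declarations, this next line would be thrown away too: */
  color: green;
}
```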
On the other hand, sticking with the current design that just allows them, and relying on people to not write unreadable stylesheets, suffers from none of these issues.
Not saying it's necessarily a good idea, but playing devil's advocate a bit: we already have that kind of concept for e.g. `@import` / `@namespace` / etc., don't we?
Imagine someone wrote this ten years ago:
```css
:has(body) {}
@import "something.css";
```
(Or something along those lines)
Their `@import` would stop working in browsers that support `:has()`, yet that has never been a concern (and I don't think it should be).
The switch for that is "A valid rule has been parsed", and I think we could do the same, so that random garbage doesn't cause that action at a distance.
I think these are pretty different situations in practice, tho. We add to the set of "things that can go before `@import`" super rarely -- in fact, we've only done it once, with `@layer`.
On the other hand, we add new things that would qualify for nesting fairly regularly -- we recently added `@scope`, and just resolved to add `@initial-name-tbd`. So the set of things that may or may not trigger "no more properties from here on" is regularly changed, presenting new hazards.
Ultimately, the question is still - what are we trying to protect authors from? This sort of interleaving has been allowed in Sass and related tools for many years, with the exact same "pretend all the properties are together at the top" behavior, and never caused notable issues.
> For example, if we ever allow `{}` in a property, and you write it invalidly so it triggers rule parsing, would that kick the "now there's a rule" switch? Does that mean we actually have to forbid ever using `{}` in properties?
If you mean the property value, and I interpret the algorithm correctly, then `{}` is already allowed by consuming a component value in step 5 of consuming a declaration. And it can't trigger rule parsing, because the consumed component value is appended to the declaration's value.
Sebastian
This is in reference to the resolved-on new parsing algo that tries to parse as a decl, then falls back to parsing as a rule if the result wasn't valid. That algo isn't in the Syntax spec quite yet.
> That algo isn't in the Syntax spec quite yet.
Should we expect more changes to come (in CSS Syntax 3, at least) after this commit?
`color: red; @media { foo: bar } color: green` does not seem to parse as I would expect against `<declaration-list>`, probably because there is no `;` after `foo: bar`. I think `}` should be provided as a stop token to consume a declaration.
Thanks for spotting that! I hadn't gotten around to updating my own parser to the new text to test it, so I missed that I was mishandling nested constructs.
I believe it's all good now; the three construct consumption algorithms (at-rule, qualified rule, declaration) now all take a "nested" bool, which triggers them to stop early when encountering a top-level unmatched `}`. This is passed by "parse a block's contents" (which is renamed from "parse a declaration list", since it definitely returns more than just declarations).
`<declaration-list>` is defined with consume a list of declarations, and the procedure to parse the input of `CSSStyleDeclaration.cssText` is defined with parse a list of declarations. Both procedures are removed.
I do not mind waiting for an update to your parser to validate your updates. This would prevent me from polluting this issue with implementation details.
That said, `<declaration-list>` could be renamed to `<statement-list>` and could be parsed with consume a list of statements. This would allow using `<declaration-list>` for when a list of declarations (strict) is required (e.g. keyframe rule, `@font`, etc.). But I do not know if it would be backward compatible to apply it to existing rules, and your intent may even be to preserve this flexibility to accept declarations/rules within any rule, for future extensibility.
I've done the parser update, but had to stop at end of day before I could get my testing framework updated to the new data structures. Later today I'll have it working. ^_^
> But I do not know if it would be backward compatible to apply it to existing rules, and your intent may even be to preserve this flexibility to accept declarations/rules within any rule, for future extensibility.
Yes, all rules already handled at-rules in their blocks even if they only accepted declarations validly, so that's staying, and at that point there's not really any reason to continue having a parsing split. All blocks are parsed the exact same way now in Syntax, accepting all three kinds of constructs (at rules, qualified rules, declarations).
My plan on reshuffling the productions is just to define one generic production, and then probably some sub-productions that automatically imply certain restrictions on what's valid so you don't have to say it in prose. But they won't change the parsing behavior.
Unfortunately, it appears that Chrome shipped before we could decide on this, and now authors are already blogging about the "gotcha".
Hopefully it's not too late to change this. I think we should really try to avoid any rewriting that changes order of declarations.
Since we've already resolved that nested MQs will wrap their contents with an implicit `& {}`, perhaps we can do the same thing with any declarations after the first nested rule, as @emilio proposes in the OP?
I still feel very strongly that we should not change this, and the current behavior is the best. Again, this is the exact behavior that Sass (and I suspect other preprocessors) have had for a decade+ already, and it has not been a problem there. (Largely because people just don't write that code - they put their declarations first, then their nested rules.) I don't think we should try to add more behavior for something that has proven itself to not be a problem in practice.
MQs wrap all naked properties in an `& {}`; style rules obviously cannot do this, so the behavior would be inconsistent either way. As the spec is currently written, style rules and MQs are each internally consistent, with a single behavior for all naked properties. I think it would be a (probably minor) bad thing for style rules to have two behaviors, depending on the relative ordering of rules and declarations.
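A sketch of the asymmetry being described, per the current spec (the behavior noted in comments is my reading of the discussion above):

```css
.a {
  color: blue;
  @media (width > 0) {
    /* A bare declaration inside a nested MQ gets an implicit wrapper,
       i.e. it behaves as: & { color: red; } */
    color: red;
  }
  /* A bare declaration in a style rule is not wrapped; it is treated
     as if written with the other bare declarations at the top. */
  color: green;
}
```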
While I would like to see a world where the order would stay as written, I think I agree with Tab that this is the behavior implemented in all preprocessors: Less, Sass, and Stylus at least, some CSS-in-JS solutions (tested in styled-components; couldn't find a good playground to share), the `nested` PostCSS plugin (can be tested here), and Lightning CSS (but it polyfills the current spec).
Given this was done literally everywhere, I'd say it is not worth it to change this.
The CSS Working Group just discussed [css-cascade] [css-nesting] Figure out whether we're fine with "shifting up" bare declarations after rules.
I thought it was very odd that the WebKit poll seems to be favoring Option 1 (shifting up), so I posted a couple more polls:
which so far seem to be showing a very different picture with more than 3 out of 4 expecting no shifting. There are even people incorrectly explaining what happens (1 2)
For the minority that expects the current behavior, it often seems to be due to misconceptions around how specificity works (1 2 3 4 5). Others mentioned that while not shifting feels more natural, shifting has been beaten into them by existing implementations (e.g. 1 2 3 ).
There is also this older poll but it's phrased as a quiz, which influences the data, as people expect the result to be weird. Even so, it shows an even split between shifting and no shifting.
I think the current behavior is extremely unintuitive, and 10 years down the line we will regret opting for consistency with today’s preprocessors over predictable, natural behavior. In fact, we even have a TAG principle advising against this exact thing: Prioritize usability over compatibility with third party tools. I'd even vote for entirely disallowing declarations after rules over the current behavior, as it would give us more time to figure this out.
I’m quite concerned that wording seems to influence the result so much, as it indicates we don't yet have a good picture of author expectations. Perhaps we need more data here. Maybe an MDN short survey would help?
Agenda+ to resolve to temporarily disallow declarations after rules while we try to get better data here, so that we don't get stuck in a situation where we can't change the behavior due to web compat.
FWIW we (WebKit) were also very surprised by our poll results (it actually changed over the weekend, the first days were Option 2 winning 70/30).
Disallowing declarations after rules seems the "worst" solution, because it means we have to define the "a rule has been parsed" trigger, which could cause compatibility issues if we extend what correctly parses as a rule in the future. However, I'm not sure what the risk would be if we allowed declarations after rules, as now, and just followed the cascade so the last one wins?
Might be good to resurface/link the CSSOM aspects of this as I don't see any mention of those in this issue. If I recall correctly it was an issue for CSSOM if declarations and rules could be interleaved.
(maybe there is a clever way to resolve those?)
> Disallowing declarations after rules seems the "worst" solution, because it means we have to define the "a rule has been parsed" trigger, which could cause compatibility issues if we extend what correctly parses as a rule in the future.
Could you please elaborate on that? Not sure I follow what you mean by "a rule has been parsed trigger". FWIW I don't think disallowing declarations after rules is a good long-term solution; I'm only proposing it so that we don't back ourselves into a corner while we deliberate and time passes.
> However, I'm not sure what the risk would be if we allowed declarations after rules, as now, and just followed the cascade so the last one wins?
I think the argument against that is inconsistency with preprocessors (and perhaps the existing Nesting implementations? Though usage of these in the wild right now is effectively nil, and even smaller where this changes things). Not sure if there's any other counterargument.
> Might be good to resurface/link the CSSOM aspects of this as I don't see any mention of those in this issue. If I recall correctly it was an issue for CSSOM if declarations and rules could be interleaved. (maybe there is a clever way to resolve those?)
From what I remember, we resolved that by resolving that bare declarations can be wrapped with `& { ... }`, though I can't find that right now.
> Could you please elaborate on that? Not sure I follow what you mean by "a rule has been parsed trigger".
Yeah, I've elaborated on this in the past when Emilio suggested disallowing properties after rules.
So, this is a parsing switch. In theory parsing switches are doable; we explored a few of these earlier when working thru Nesting options. But the switch needs to be reliable -- the earlier discussion about `@nest;` being the switch worked, because we know exactly what to look for and don't expect that to change in the future.
But if the parsing switch is "a rule of some kind is seen", that's hard. The most obvious interpretation of that is "a valid rule of some kind is seen" - that's well-defined, but it means that authors can see unexpected differences in parsing behavior if the rule in question is supported in some browsers but not others, as older browsers will throw out the rule and continue allowing declarations, while newer browsers will see it and disallow declarations. (And consider: an invalid selector makes the style rule invalid, and we do new selectors all the time.)
So ideally we define invalid rules as also triggering this. But then we open up the full syntax space of what an "invalid rule" actually is. Is `foo bar:baz {...};` an invalid rule? Or is it a new property syntax? We can define something for this, but it means restricting our future evolution capabilities, to a larger extent than what we've already done for Nesting.
> FWIW we (WebKit) were also very surprised by our poll results (it actually changed over the weekend, the first days were Option 2 winning 70/30).
Have the results been restricted? I remember seeing opposite numbers yesterday.
I watched the results of the poll very carefully in the first couple days. It was consistently 37-39% for Option 1, and 61-63% for Option 2.
It's very telling when you hit enough votes (like 50 or 100) and the results stabilize. As more and more votes came in, the results did not change.
Sept 28 at 2:50pm ET.
Sept 29 at 12:50pm ET.
Then over the weekend, another 1800 votes came in, with an overwhelming preference for Option 1. Almost three times as many votes came in over the weekend? Long after we stopped promoting the article on social media? With a radically different result?
That's a bot.
So we decided to close the survey and post the last known results from before the traffic pattern became highly suspect.
You can see a similar preference for Option 2 in Lea's survey: https://mastodon.social/@leaverou@front-end.social/111177156433448874
I agree with Lea:
> I think the current behavior is extremely unintuitive, and 10 years down the line we will regret opting for consistency with today’s preprocessors over predictable, natural behavior.
A desire to match Sass is a terrible reason. (Especially if the only reason Sass & other tools made their choice is that they could not implement the more intuitive behavior.)
We should be designing the language for the future — for 20+ years from now, when the majority of developers have never used Sass, and those that did don't remember how it worked.
I've not heard any other reason put forth besides: this is how all the third-party tooling does it, and we don't think it's a big deal. I haven't heard anyone arguing that this design is good for the cascade, or makes sense, or is something future developers will easily learn.
I believe this will be a very confusing problem for developers trying to debug their code if we don't fix it.
I think I was initially for option 1, but gradually moved towards option 2.
Mainly, the deciding factors for me are "designing the language for the future" combined with the case being very rare and usually considered bad practice in itself. If it were something present in all current preprocessors but also very commonly used, it would be a different story; but this is an edge case, where preprocessors decided to interop for some reason.
I think it should be ok to handle this edge-case differently, but properly.
Well said @jensimmons. I would urge anyone who wants to weigh in on this to also read the responses to both of the polls I posted. They are even more illuminating than the (sweeping) quantitative data.
I do not believe it's correct for these two blocks to end up with different results:
```css
h1 {
  color: yellow;
  @media (width > 0) {
    color: red;
  }
  color: green;
}

h2 {
  color: yellow;
  @media (width > 0) {
    color: red;
  }
  & {
    color: green;
  }
}
```
(Try it in a browser with support for CSS Nesting: https://codepen.io/jensimmons/pen/KKbrJBp/5861d875920bb25695c12975bf627b75?editors=1100 )
The ampersand should be a clean substitute for the unnested selector, not something that changes the result.
I've used Less a lot, and have been using Sass for over 6 years now on a huge, strictly-Sass code base, and I've never stumbled on this behavior; I'm actually surprised by it. I guess everyone has been writing declarations before rules everywhere.
So IMHO this is also counterintuitive for most devs who use preprocessors, who expect the normal CSS behavior of "latter wins".
I think it's clear that the current behavior is quite bad because it breaks the principle of "last declaration (with same specificity) wins", I'd have objected against option 3 if I had realized this at the beginning. But at this point we are stuck with that syntax, and trying to address this problem within option 3 seems to cause even worse outcomes, so probably we will have to live with it. Could be added to the list of CSS mistakes.
> (Especially if the only reason Sass & other tools made their choice is that they could not implement the more intuitive behavior.)
They absolutely could have implemented either behavior; it's just outputting a separate rule rather than combining into one rule. They do exactly that if you do wrap the latter declaration in a `& {...}` rule, so both behaviors are clearly possible.
> I do not believe it's correct for these two blocks to end up with different results:
I think the consistency argument is reasonable in either direction. Before nesting, if you wrote:
```css
h1 {
  color: yellow;
  color: green;
}
```
you'd get a single 'color' declaration with the value green. One can reasonably argue, I think, that it's also consistent that adding an unrelated rule (the `@media` in your example) shouldn't change the behavior of these declarations. I suspect that might be why the preprocessors originally chose the behavior that they did. Adding an explicit `& {...}` wrapper around the latter declaration is a much stronger declaration of intent than just inserting an unrelated rule before it.
> We should be designing the language for the future — for 20+ years from now, when the majority of developers have never used Sass, and those that did don't remember how it worked. ... I believe this will be a very confusing problem for developers trying to debug their code if we don't fix it.
Do we have any evidence that this is actually confusing to users of Sass, Less, or any other preprocessor? So far all I've seen is people arguing that, now that it's been pointed out to them, it's kinda confusing; I haven't seen any evidence so far that this is actually confusing developers in the wild, despite over a decade of usage and millions of users.
(I'm not disputing that there might be such evidence, I simply haven't seen any.)
I have no strong opinion on which way we go for this. But the fact that the current spec is the behavior of essentially every preprocessor, and afaict there have been approximately zero complaints about it for over a decade of use (because, again afaict, nobody actually writes code like that in the first place), means that there's very little reason for us to care about what the behavior is either. As such I'd prefer no change, as compatibility with the wider ecosystem is a (relatively minor) benefit, but I won't object over the rest of the WG if the decision goes the other way.
@tabatkins
If preprocessor users haven’t really stumbled on this in the first place, compatibility with preprocessors is not a benefit, minor or otherwise. It's only a benefit when it's compatible with behaviors they have, actually, experienced. Adding something to CSS is a much wider deployment than adding it to a preprocessor, so “people haven't hit this problem before” should not be an excuse for weird behavior.
If you read through the thread and the various polls, there is a very strong signal from developers that the current behavior is confusing. Even worse, for the few who don't find it confusing, it's due to a broken mental model about the cascade: they thought that `@media` adds specificity. So I'm quite worried not just about the ergonomics of this, but also about what it teaches authors about the rest of CSS.
And it's not like there's an actual implementation reason for the confusing behavior, right? It seems we all agree (?) that changing it produces better ergonomics. So what's the argument for keeping it? Compatibility with Sass and co? We literally have a TAG principle about this exact thing: 2.12 Prioritize usability over compatibility with third-party tools.
An argument for keeping it could be performance, mostly. E.g., if you do something like:
```css
.foo {
  --bar: baz;
  @media (a) {
    --bar: something-else;
  }
  --baz: ...;
  @media (b) {
    --baz: something-else;
  }
  /* Repeat x100 etc */
}
```
If we generate a bunch of split rules for anything after an `@media` rule, that can cause useless overhead, which is also surprising.
Implementation-wise, I think it's a bit more complicated to implement "last declaration (after rules) wins" if we assume the most obvious way, wrapping the following declarations in `& { }`:

- the serialisation will generally be more verbose
- declarations could become wrapped in `& { }` while they are not currently (because some garbage becomes a valid rule) (I doubt it's an actual issue though?)
- `:is()` and `&` don't match pseudo-elements, so the wrapping changes behavior for pseudo-element selectors (and it's already a weirdness so maybe we don't care either). For example:

```html
<style>
.foo::before {
  background-color: blue;
  content: 'nonest';
}
.foo::before {
  & {
    background-color: green;
    content: 'nest';
  }
}
</style>
<div class=foo>Foo</div>
```
There might be a better way to implement all this though.
@emilio
> An argument for keeping it could be performance, mostly. E.g., if you do something like: [snip] If we generate a bunch of split rules for anything after an `@media` rule, that can cause useless overhead, which is also surprising.
Given that this doesn’t happen much, I don’t think that would be significant in practice. Also, I suspect when people write code that way (with a declaration, and then a MQ after it to set just that property), the MQs are not that varied, so a low-hanging optimization would be to merge them together when it doesn’t change the outcome. Meaning:
```css
.foo {
  grid-template-columns: auto 1fr auto;
  @media (width < 500px) {
    grid-template-columns: auto 1fr;
  }
  gap: .5em;
  @media (width < 500px) {
    gap: .3em;
  }
}
```
becomes:
```css
.foo {
  grid-template-columns: auto 1fr auto;
  gap: .5em;
  @media (width < 500px) {
    grid-template-columns: auto 1fr;
    gap: .3em;
  }
}
```
This is also consistent with our rule to serialize to the shortest equivalent syntax.
@mdubet
> the serialisation will generally be more verbose
Not if we go the other way and decide to serialize all `& {}`s after rules as bare declarations. Which would also be consistent with our rule to favor the shortest equivalent syntax on serialization. This would also be more likely to preserve author intent, since I bet most `& {}` rules would have come from wrapping bare declarations, rather than specified explicitly by the author.
I don't think we have a precedent for rewriting / merging rules, and I don't think we'd want to add such thing.
> I don't think we have a precedent for rewriting / merging rules, and I don't think we'd want to add such thing.
Wrapping declarations in `& {}` is also technically rewriting, so I think that ship may have sailed?
I wonder if we should consider how this behavior relates to similar future features, e.g. mixins, if we eventually go down that path. For example, an `@apply --x-mixin;` (or similar) rule could be treated as an extension of nesting -- a placeholder for `& { <output of --x-mixin> }`. With mixins it is more clearly important that authors have an easy way to override the output by providing additional declarations after the mixin. In fact, most Sass 'best practices' have encouraged putting mixins before declarations rather than after.
Clearly, nesting and mixins wouldn't need to behave the same (they don't in Sass), but it seems like a useful comparison to consider? Even if we don't use the same solution for both, mixins are likely to raise the same issue.
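To make the comparison concrete, here is a sketch using an invented `@apply` syntax (nothing here is specified; it only illustrates why declaration-after-rule ordering matters for mixins):

```css
/* Hypothetical mixin definition: */
@mixin --card {
  padding: 1em;
  border: 1px solid;
}

.note {
  @apply --card; /* imagined as expanding to: & { padding: 1em; border: 1px solid; } */
  /* Authors would clearly expect this to override the mixin's padding;
     whether it does depends on exactly the behavior debated in this issue. */
  padding: 2em;
}
```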
The CSS Working Group just discussed [css-cascade] [css-nesting] Figure out whether we're fine with "shifting up" bare declarations after rules, and agreed to the following:

RESOLVED: We will address this issue, and fix nesting to allow for bare declarations after nested rules without moving them above.
@leaverou
> If preprocessor users haven’t really stumbled on this in the first place, compatibility with preprocessors is not a benefit, minor or otherwise.
Right, but my point is that neither behavior is a benefit, according to over a decade of experience. This simply does not matter to authors, as far as we can tell. So the "benefit to authors" part of the PoC is approximately 0; at best it's a learnability benefit, but better learnability for a case that doesn't appear to ever happen in practice is worth extremely little. So impl benefits, even minor ones, can weigh sufficiently here to sway the outcome.
> a low-hanging optimization would be to merge them together when it doesn’t change the outcome.
We cannot do this. What things are raw decls and what are rules is observable in the OM. I absolutely do not want the OM to be dependent on "when it doesn't change the outcome".
> Wrapping declarations in `& {}` is also technically rewriting, so I think that ship may have sailed?
No, it's not at all rewriting, in the sense that you're suggesting. How we interpret things on the first pass is up to us; you're talking about a completely different thing, where something would be wrapped in a rule or not depending on the exact property names used across/between other rules. (Whether or not the first `@media` in your example used 'gap' would affect whether it "changes the outcome"!) That's an extremely non-local and non-obvious effect, and I'd object strongly to it.
> Not if we go the other way and decide to serialize all `& {}`s after rules as bare declarations. Which would also be consistent with our rule to favor the shortest equivalent syntax on serialization.
We can't do this either, because it would mean that two consecutive `& {...}` blocks would round-trip into a single one. And I don't think we want to specify that the serialization differs based on whether there's a single block or multiple blocks with a particular selector in a row.
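A sketch of that round-trip problem:

```css
/* Author writes two separate & blocks: */
.a {
  @media (width > 0) { color: red; }
  & { color: green; }
  & { border-color: blue; }
}

/* Serializing both as bare declarations: */
.a {
  @media (width > 0) { color: red; }
  color: green;
  border-color: blue;
}

/* Reparsing the serialized form can only produce a single group after
   the @media rule, so the original two-block OM structure is lost. */
```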
@miriamsuzanne
> With mixins it is more clearly important that authors have an easy way to override the output by providing additional declarations after the mixin. In fact, most Sass 'best practices' have encouraged putting mixins before declarations rather than after.
This is a fair argument, and does argue for a real author benefit in doing the rule-wrapping. Sass gets away with it because it's just doing direct substitution of the `@include` with its contents.
Note that Sass's behavior is inconsistent in this regard; for example:
```scss
@mixin reset-list {
  & {
    margin: 0;
    padding: 0;
    list-style: none;
  }
}

ul {
  @include reset-list;
  margin: 1em;
}

/* compiles to */

ul {
  margin: 1em;
}

ul {
  margin: 0;
  padding: 0;
  list-style: none;
}
```
So the "trailing" `margin: 1em` gets put before the mixin output in the cascade and will be overridden! (Defining the mixin with naked properties rather than a nested rule does do what you suggested, tho.)
The CSS Working Group just discussed [css-cascade] [css-nesting] Figure out whether we're fine with "shifting up" bare declarations after rules.
I was able to find a Sass issue desiring the no "shifting up" behavior from 2014. It is still open and tagged as "planned", though seemingly no work has been done on it since.
Why is this even a discussion? 🙄︀
Example 1, color used is green:
```css
p
{
  color: red;
  @media (width > 0) {color: yellow;}
  color: green;
}
```
Example 2, color used is green:
```css
p
{
  color: red;
  color: yellow;
}
@media (width > 0) {p {color: green;}}
```
All the rules are at the same "level" with a simple p selector and lack !important.
The longer this ticket stays forgotten, the harder it will be to change in the future.
We should at least forbid declarations after nested rules until a final decision is made. The current behavior is confusing and can be considered a mistake that will bother people forever if not fixed.
@vrubleg Not forgotten, I'm investigating the feasibility of this. I had high hopes that we could make this change, but the use-counter I've added so far shows that ~0.16% of pageloads already place bare declarations after nested rules. This is too much to make a potentially breaking change.
I'm now working on making a tighter use-counter, as spot checks of actual sites make me believe there often isn't an actual contest between the bare declarations and the nested rules (e.g. they use non-overlapping sets of properties).
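For example, a non-contested case of the kind described (my own illustration, not taken from the counter data):

```css
.widget {
  color: black;
  @media (width < 600px) {
    color: gray;
  }
  /* A bare declaration after a nested rule, but it sets a property the
     nested rule never touches -- shifting it up changes nothing observable. */
  padding: 1em;
}
```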
Since the existing behavior follows pre-existing preprocessor logic and authors are starting to get used to it, wouldn't the easiest solution just be to add "nesting depth" as an explicit cascading-order criterion between Specificity and Order of Appearance?
> Since the existing behavior follows pre-existing preprocessor logic and authors are starting to get used to it, wouldn't the easiest solution just be to add "nesting depth" as an explicit cascading-order criterion between Specificity and Order of Appearance?
The Cascade is already insanely complicated, and we've reached the point that most authors don't grok it fully. We should not be increasing its complexity further, and definitely not without good reason!
It seems to me that adding one more ordering criterion to the 6-7 existing ones adds about the same cognitive load as complicating one of the existing criteria with what is de facto a new implicit reordering rule, but an explicit step in the algorithm would probably be a bit clearer and easier to grok for many people 😳
Possibly true, but there's also the option of not shifting!
Just pointing out that pseudo-elements aren't the only problem, there is also #9806
```html
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<style>
/*<![CDATA[*/
@namespace url("http://example.com/foo");
@namespace svg url("http://www.w3.org/2000/svg");
svg|svg {
  background: red;
  & * { --foo: bar; }
  background: green;
}
/*]]>*/
</style>
<svg id="svg" width="100" viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg"></svg>
</html>
```
This is currently green in browsers. But if we convert it into
```css
svg|svg {
  background: red;
  & * { --foo: bar; }
  & { background: green; }
}
```
then it's red, because `&` is not a type selector, so it can only match in the default namespace, and our element is in the SVG namespace.
Not sure why this is tagged Agenda+, so this may or may not be relevant, but to follow up from January: improved use-counters are implemented in Chrome 123. Results are not in yet.
The CSS Working Group just discussed [css-cascade] [css-nesting] Figure out whether we're fine with "shifting up" bare declarations after rules.
It seems that all the proposals try to work around the fact that we can't mix declarations and rules because the OM `CSSStyleRule`/`CSSGroupingRule` doesn't allow this: so we either shift the bare declarations up to group them (current behaviour), or rewrite the following bare declarations to look like rules (and because we can't do that for the first group of declarations inside the rule, we try to find a switch point like first-valid-rule or `@nest`).
What about introducing a new `CSSBlockContent` (which contains a list of declarations and rules, intermixed)? It would be the content of style rules, at-media rules, at-container rules, at-scope rules... -- all nested group rules: https://drafts.csswg.org/css-nesting/#nested-group-rules
For backward compatibility, `CSSStyleRule` would derive from it but would also maintain the current `CSSStyleDeclaration` API behaviour -- overwriting similar declarations and ignoring interleaved rules; but the new behaviour would be visible through the `CSSBlockContent` API, which gives the list of rules/declarations in the actual cascade-respected order.
It introduces some complexity for implementation, but seems doable?
If you do:
My understanding is that per spec the div color and background would be red.
That seems rather confusing. There are various alternatives here:
Maybe something else?
cc @fantasai