WICG / layout-instability

A proposal for a Layout Instability specification
https://wicg.github.io/layout-instability/

layout instability spec needs to clarify whether (and which) unpainted nodes are considered or not #61

Closed dholbert closed 3 years ago

dholbert commented 4 years ago

Right now, the layout instability spec officially considers a moved element as contributing to instability if the element "generates one or more boxes", as defined here: https://wicg.github.io/layout-instability/#visual-representation

However, this may not be practical for implementations, and it seems to be inconsistent with how the feature is implemented in Chrome.

Consider for example this testcase, which has an element with visibility:hidden that gets shifted (via the insertion of another node before it): https://jsfiddle.net/yqr570ex/

In Chrome 84, this testcase does NOT currently trigger a reported layout shift, even though the visibility:hidden element definitely "generates a box". I've included a few other notable CSS classes that you can set on the shifted element's container (the element with id="giveMeAClass"). And when the shifted element's parent has...
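For readers without access to the fiddle, a minimal reconstruction of the visibility:hidden case might look like the following. This is my sketch, not the original fiddle: the markup, spacer size, and timing are guesses, though the id="giveMeAClass" container and the insert-a-node-before-it mechanism come from the description above.

```html
<!-- Hypothetical reconstruction of the testcase, not the original fiddle. -->
<div id="giveMeAClass">
  <div style="visibility: hidden">I generate a box but am never painted.</div>
</div>
<script>
  // Watch for layout-shift entries, including ones buffered before
  // the observer was registered.
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log("layout-shift score:", entry.value);
    }
  }).observe({ type: "layout-shift", buffered: true });

  // Shift the hidden element down by inserting a sibling before it.
  setTimeout(() => {
    const spacer = document.createElement("div");
    spacer.style.height = "100px";
    document.getElementById("giveMeAClass").prepend(spacer);
  }, 100);
</script>
```

The question in this issue is whether that shift of a never-painted box should produce an entry at all.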

I think both the spec & Chrome need to be updated to have a coherent story on which of these should be considered. The final one here (contain:paint and small nonzero size) makes sense, but the other three seem like they should be consistent & should be clearly defined by the spec; right now, the spec seems like it would call for them all to trigger layout shifts.

dholbert commented 4 years ago

CC @npm1 and @mattwoodrow

I suspect the current Chrome behavior on my testcase (e.g. the strange behavior-difference between visibility:hidden vs opacity:0) is just a result of Chrome's internal/not-intended-to-be-web-observable decisions about which sorts of boxes generate paintable things vs. which ones don't.

Ideally, I think the spec should allow UAs to exclude unpainted content from contributing to layout shift, because: (1) clearly it doesn't directly contribute to user-perceived layout shifts. (It may impact the layout of other content that is visible, but we can measure that visible content's layout-changes without needing to consider the unpainted content.) (2) it may impose some burden on UAs (and prevent them from implementing certain optimizations) if this spec were to require the UA to record & compare the positions & sizes of unpainted boxes.

Ideally, the spec should encode this allowance in such a way as to also allow for variations between browser rendering engines on what things they bother to consider painted vs. unpainted. It should also allow for the possibility of new CSS features (like contain:paint which is relatively recent) which can produce obviously-unpainted content that a rendering engine may want to exclude from the layout-shift calculation.

npm1 commented 4 years ago

Yea, painting invisible things is always tricky... In this case, I agree that one possible solution is to say that the above (plus potentially others) may be considered or ignored for the purposes of layout shift. Another possibility is that the above elements really shouldn't trigger layout shifts, and the fact that opacity:0 ones do would be a bug in the Chrome implementation (probably should add WPTs for this). But nonetheless due to the hairy nature of these kinds of elements we should clarify either way. Thanks for pointing out this problem and for the investigation into the Chrome behavior :)

skobes-chromium commented 4 years ago

Thanks for raising this. I think we should, at a minimum, update the spec to say that a node in a visibility:hidden subtree is not considered unstable. Probably we should also update the definition of visual representation to say that ancestor clips are applied (I think Chrome does this as part of its coordinate mapping).

For opacity:0 it is less clear that we can efficiently exclude it in all cases (e.g. compositing) and I would be ok with allowing the UA to decide.

dholbert commented 4 years ago

update the spec to say that a node in a visibility:hidden subtree is not considered unstable

Nit: for this hypothetical spec text, you'd probably want s/a visibility:hidden subtree/a node with visibility:hidden/

Subtrees aren't really relevant for visibility:hidden -- it's a decision that each node can make independently (it does inherit, but children can override the inherited value and show themselves). (But they are relevant for display:none, of course, since that nukes the subtree & the descendants have no say about that.)
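The distinction above can be sketched in code. This is a hypothetical model (node shape and function names are mine, not from any spec or engine): visibility inherits but each node can override it back to visible, while display:none removes the whole subtree with no way for descendants to opt back in.

```javascript
// Hypothetical node shape: { name, style: { visibility?, display? }, children: [] }.
// Used visibility inherits, but each node can independently override it;
// display:none removes the entire subtree, and descendants have no say.

function collectVisibleNodes(node, inheritedVisibility = "visible", out = []) {
  if (node.style.display === "none") return out; // whole subtree is gone
  const visibility = node.style.visibility ?? inheritedVisibility;
  if (visibility === "visible") out.push(node);
  for (const child of node.children) {
    collectVisibleNodes(child, visibility, out);
  }
  return out;
}

const tree = {
  name: "hidden-parent",
  style: { visibility: "hidden" },
  children: [
    // A child of a visibility:hidden node can show itself again...
    { name: "shown-child", style: { visibility: "visible" }, children: [] },
    // ...but nothing inside display:none can.
    {
      name: "none-parent",
      style: { display: "none" },
      children: [
        { name: "lost-child", style: { visibility: "visible" }, children: [] },
      ],
    },
  ],
};

console.log(collectVisibleNodes(tree).map((n) => n.name)); // ["shown-child"]
```

This is why "a node with visibility:hidden" is the right phrasing for the spec text: hiddenness has to be checked per node, not per subtree.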

I would be ok with allowing the UA to decide.

Cool - it would be great if there were some spec text (e.g. "For nodes that generate boxes that are not painted for whatever reason..."). Not sure what the best spec terminology is to use there, but I think that would be the rough qualifier for this category.

npm1 commented 4 years ago

Sorry for the delay replying here, but coming back to this, is the proposal to allow user agents to decide either way for content affected by visibility: hidden or opacity: 0? In this case, presumably we need normative text that states such an allowance. It seems to me that a user would be unable to notice these shifts of invisible content, so I feel like we should only have some leeway if implementations are unable to track these.

dholbert commented 4 years ago

Sure, I think I'd be OK with leaving it up to UA discretion for those visibility and opacity cases.

One other loop to close here, though -- Chrome does also disagree with the spec on the "contain:paint and zero size" scenario. (The spec considers movement inside of such an element as contributing to shift, but Chrome does not, last time I checked.) That should probably either be considered a Chrome bug (and tracked as such), or it needs a spec exception of some sort (whether general or specific).

dholbert commented 4 years ago

One other update: it seems Chrome's behavior has changed here, such that all the cases in my fiddle (discussed in the original comment here) now report a layout shift.

I'm testing Chrome 87.0.4259.3 (Official Build) dev (64-bit) on Ubuntu 20.04.

npm1 commented 4 years ago

Hmm interesting, per your previous comment it seems that the latest change is more in line with the spec, even if in practice it looks like the results are less desirable. I filed https://bugs.chromium.org/p/chromium/issues/detail?id=1127474 to discuss but it sounds like the current behavior is spec compliant?

dholbert commented 4 years ago

Yes, I think Chrome's implementation has become spec compliant on the cases that I'd brought up (since all of these cases "generate boxes", which means their movements should be considered as contributing to layout shift, and now Chrome does seem to consider them).

dholbert commented 4 years ago

(Still: I think I would prefer for the spec to allow for some flexibility on this - the approach that @mattwoodrow and I were envisioning would need to exclude some of these unpainted cases, at least. But perhaps this can fall under the umbrella of user agents "compromis[ing] precision in the interest of calculation efficiency".)

mattwoodrow commented 4 years ago

Are there any unexpected consequences of allowing different behaviour between UAs here?

For example, if an author optimizes their page using a UA that doesn't penalize invisible content, are there downsides for them still receiving a poor score in a different UA?

npm1 commented 4 years ago

(Still: I think I would prefer for the spec to allow for some flexibility on this - the approach that @mattwoodrow and I were envisioning would need to exclude some of these unpainted cases, at least. But perhaps this can fall under the umbrella of user agents "compromis[ing] precision in the interest of calculation efficiency".)

You mean flexibility on all of the testcases mentioned in the original issue, or which of them? The spec currently says all of them should trigger shifts. We could keep the existing text and add some text saying the user agent may choose to ignore shifts of content that they deem invisible? It's hand-wavy, so an alternative would be to decide which of those are important to make flexible and write something specific to those, WDYT?

Are there any unexpected consequences of allowing different behaviour between UAs here?

There wouldn't be downsides to the user experience given that they are invisible shifts. I'm more concerned about developers wasting time trying to fix invisible shifts or losing trust in the metric because it reports them.

dholbert commented 4 years ago

You mean flexibility on all of the testcases mentioned in the original issue, or which of them?

Some of them. In particular, I think visibility:hidden and opacity:0 both could be problematic for us to track, if we hook in at the point in the layout/paint pipeline that we were thinking about (namely, our display list).

We're not settled on a specific implementation approach or timeline yet, though, for what it's worth.

We could keep the existing text and add some text saying the user agent may choose to ignore shifts of content that they deem invisible?

I'm conflicted, but I think I like this.

(If this were a purely opt-in feature, I'd feel much more in favor of strictly defining the behavior, without regard for computational cost. But since it's not opt-in -- browsers must collect the first 150 samples for all pageloads per #52 -- that means we have to be careful about requiring any computations whose value is dubious.)

Are there any unexpected consequences of allowing different behaviour between UAs here? For example, if an author optimizes their page using a UA that doesn't penalize invisible content, are there downsides for them still receiving a poor score in a different UA?

This is a good point & is something that we'll definitely need to weigh whenever taking advantage of a "The UA may choose..." route.

Given that it seems this metric will impact search rankings, it's clear that web developers will largely have search ranking in mind, and they'll expect their browser to be at least as strict as the web-spidering-UA that's used for search ranking computations. Otherwise they might optimize to a threshold that they think is satisfactory (according to their local tools), only to find out later that they're still being penalized (according to the search engine that they're optimizing towards).

So for cases that web developers are likely to stumble across, I think we need to be very careful about diverging. But for more extreme edge cases where there's little user benefit to any particular behavior, I'd think we could diverge if it helps simplify the implementation.

MatsPalmgren commented 4 years ago

they'll expect their browser to be at least as strict as the web-spidering-UA that's used for search ranking computations

I expect Google will use a Blink-based UA to do that, which means that the Google Search business is favoring Chrome through its page ranking. It seems to me there is a feedback loop between Google Search and Chrome here that risks penalizing other browser vendors.

npm1 commented 4 years ago

Taking a step back, no one is trying to penalize any vendor. That would be a counterproductive strategy for everyone involved. The goal is to help developers identify and improve bad user experiences on their pages. The fact that Google Search will include CLS as one of its many signals is, yes, an added incentive for site owners, but not the goal or reason for why we want to encourage developers to measure and improve their pages. To that end, we should optimize the metric to ensure that it both (a) best reflects the perceived user experience and (b) does not create unnecessary work for developers, i.e. strive for consistency between how we measure and what we surface. If this all sounds obvious, then great... that’s why we’re working together in this forum to come up with an interoperable and sensible solution.

Back to the matter at hand... in the case where invisible content shifts and is counted as a penalty, we’re failing both of the above criteria. If the shift is not visible to the user, we shouldn’t penalize the site. So for all of the above cases where we agree we can implement and track, we should specify that the invisible content should not produce a shift. For the remainder, we should leave it up to the user agent instead of specifying that invisible content should produce a shift.

zeman commented 4 years ago

If it's useful, here's a very simple test case where opacity: 0 leads to a layout shift but the user experience is actually fine. I'd vote for elements with opacity: 0 to not cause layout shifts.

http://markzeman.com/tmp/opacity-0.html

tannerhodges commented 3 years ago

Chrome has updated layout shifts to ignore opacity: 0 (link to commit).

Looks like it will be released in Chrome 89.

npm1 commented 3 years ago

That's right, we have updated the implementation with https://chromium-review.googlesource.com/c/chromium/src/+/2591907 as well as https://chromium-review.googlesource.com/c/chromium/src/+/2611555. We need to update the spec to reflect these changes.

npm1 commented 3 years ago

So looking at the initial report, I think we've clarified that some of the invisible cases should not be reported - this would be subject to debate later on if another implementor finds it too costly to keep track of. So we decided on all except the contain:paint cases. However, I think those are already covered by the spec. In the nonzero-size case, the shift is reported despite not painting anything, since we do not compute any occlusion. In the zero-size case, the area of the impact region is 0, so the layout shift score is 0 and no entry is dispatched as a result. Therefore I think the spec here is clear enough now. Does that make sense to all? If so, then we can close this issue.
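Why the zero-size case falls out naturally can be sketched from the spec's score formula (layout shift score = impact fraction × distance fraction). The sketch below is mine, with hypothetical helper names and made-up numbers; it only illustrates why a zero-area impact region yields a zero score regardless of how far the content moved.

```javascript
// Hypothetical sketch of the layout shift score calculation:
// score = impact fraction * distance fraction, where the impact region
// is the union of a shifted node's old and new visual representations.

function impactFraction(impactRegionArea, viewportArea) {
  return impactRegionArea / viewportArea;
}

function distanceFraction(moveDistance, viewportMaxDimension) {
  return Math.min(moveDistance, viewportMaxDimension) / viewportMaxDimension;
}

function layoutShiftScore(impactRegionArea, moveDistance, viewport) {
  const area = viewport.width * viewport.height;
  const maxDim = Math.max(viewport.width, viewport.height);
  return (
    impactFraction(impactRegionArea, area) *
    distanceFraction(moveDistance, maxDim)
  );
}

const viewport = { width: 800, height: 600 };

// Zero-size contain:paint container: the impact region has zero area,
// so the score is 0 and no entry is dispatched.
console.log(layoutShiftScore(0, 100, viewport)); // 0

// Nonzero-size container: the score is positive even though the shifted
// content is never painted, because no occlusion is computed.
console.log(layoutShiftScore(800 * 100, 100, viewport) > 0); // true
```

So the zero-size case needs no special spec text, while the nonzero-size case is the one where unpainted content still contributes a score.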

skobes-chromium commented 3 years ago

In the nonzero size case, the shift is reported despite not painting anything

I think we may want to change this, as I mentioned on http://crbug.com/1099350. Should we leave this issue open for that, or file a new one?

npm1 commented 3 years ago

I suppose it depends on what you mean by empty. In the example, the div does contain content, it is just clipped by the contain:paint. Is that case going to be covered by the bug you listed?

dholbert commented 3 years ago

I think the newish opacity-related spec text here isn't quite correct.

This commit added some text that's conditioned on whether "...the computed value of the opacity property for N is not equal to 0", but I don't think that's the right condition. If you set opacity:0 on the root of some subtree, its descendants will still all have the default opacity:1 (or whatever opacity happens to be specified on them), yet they are nonetheless unpainted, by virtue of being part of a fully transparent subtree.

So when asking whether a particular node is painted, it's not sufficient to just reason about the node's own opacity value -- you have to check its ancestors as well. So I think the condition we care about here is more like: "...neither N nor any of its ancestors have a computed value of the opacity property that is equal to 0". (In practice, an implementation would achieve this by skipping an entire subtree from consideration, if the root of the subtree has opacity:0; there's no need to do any sort of walk-the-ancestors-of-every-node traversal, despite it being phrased that way.)

With that clarified, I think this issue could probably be closed, from my perspective. Thanks!

npm1 commented 3 years ago

Ah yea, I don't know why I thought that computed value would already account for inheritance. Will fix!