mozilla / standards-positions

https://mozilla.github.io/standards-positions/
Mozilla Public License 2.0

Largest Contentful Paint #191

Closed digitarald closed 2 years ago

digitarald commented 5 years ago

Request for Mozilla Position on an Emerging Web Specification

Other information

https://github.com/w3ctag/design-reviews/issues/378

RByers commented 4 years ago

Any thoughts on this yet? With Google's launch of web vitals, it would be great to better understand Mozilla's perspective on them.

RByers commented 4 years ago

Also see the resources on the Chromium speed metrics page for more context on how these metrics were developed. We're more than happy to share data, discuss any feedback, etc. I know this has been talked about a bunch at the WebPerf WG already, and @dbaron's TAG feedback is great.

/cc @npm1

bdekoz commented 4 years ago

Hey Rick, we are still evaluating the web vitals bits that were discussed in W3C web perf at the beginning of June, including Largest Contentful Paint and how that fits in with the others deemed vital by Chrome. We're hoping to get more mobile data before taking a position in the near future.

npm1 commented 4 years ago

Hi Benjamin, I don't think we filed an issue for Layout Instability, and there isn't one specific to FID (though there is one for Event Timing as a whole). @skobes is filing one for LI. Do we want to keep the conversation for FID in the Event Timing one, or should I file a separate one?

bdekoz commented 4 years ago

Keep it in Event Timing please

anniesullie commented 4 years ago

Hi Benjamin, you mentioned you're hoping to get more mobile data. Is that something we could help with? We did an analysis of over 4 million mobile sites on HttpArchive, showing that LCP correlates well with Speed Index but not much with other RUM metrics like FCP.

Please let me know if there is additional data we could collect that would help inform!

bdekoz commented 4 years ago

@npm1 to help us sort through web vitals, I made tracker issues for each metric after all.

FID: https://github.com/mozilla/standards-positions/issues/387
CLS: https://github.com/mozilla/standards-positions/issues/386

bdekoz commented 4 years ago

@anniesullie thanks for the HttpArchive link. Some of the internal analysis for LCP has been delayed due to recent events, and is not expected to be completed until the end of the month. I'll have more specific feedback then, but expect to recommend this as worth prototyping.

smaug---- commented 4 years ago

Given that some concepts around Event Timing and scroll handling are still unclear (spec issues filed), it is a bit hard to say how LCP should work.

sefeng211 commented 2 years ago

Are there still outstanding issues/concerns that are preventing us from making a decision? I think we are leaning towards a worth-prototyping position, as we consider that LCP correlates well with SpeedIndex.

I can make a PR if there are no objections. @smaug---- @bdekoz @Bas-moz

annevk commented 2 years ago

@achristensen07, hey, curious if WebKit has had the opportunity to discuss this API. And if so, what would be your perspective?

achristensen07 commented 2 years ago

I understand that people want to measure and improve how long it takes for users to see most of their webpage, and I think this is an admirable goal. I'm not convinced that we have arrived at the metric that people are looking for, though. The spec currently says "The LargestContentfulPaint API is based on heuristics. As such, it is error prone." I agree with that statement, and TPAC notes also say concerning things about the current heuristics. Google's including this in web vitals has certainly made people care more about it, but it has also turned it into an SEO game with websites doing strange things to convince Google that they have a fast site. LCP's relationship with lazy image loading is also problematic.
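For background on the API being discussed: pages read LCP candidates through a PerformanceObserver, and the spec dispatches a new `largest-contentful-paint` entry each time a larger paint is found, so the most recent entry is the current candidate. A minimal sketch (the helper and mock entries below are illustrative, not part of the spec):

```javascript
// Hypothetical helper (not part of the spec): because a new entry is
// dispatched each time a larger paint occurs, the most recent entry
// is the current LCP candidate.
function currentLcp(entries) {
  return entries.length ? entries[entries.length - 1] : null;
}

// In a browser, the entries come from a PerformanceObserver:
// new PerformanceObserver((list) => {
//   const lcp = currentLcp(list.getEntries());
//   console.log('LCP candidate at', lcp.startTime, 'ms');
// }).observe({ type: 'largest-contentful-paint', buffered: true });

// Illustrative mock entries (not real measurements):
const mockEntries = [
  { startTime: 120, size: 5000 },  // early paint: small text block
  { startTime: 480, size: 90000 }, // later, larger paint: hero image
];
console.log(currentLcp(mockEntries).startTime); // 480
```

The heuristic concern raised above is about how the browser decides which element counts as "largest contentful", not about the observer plumbing itself.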

RByers commented 2 years ago

> but it has also turned it into an SEO game with websites doing strange things to convince Google that they have a fast site.

While there's always some aspect of an arms race with SEO, from the time spent working with performance consultants and data I've seen, I personally believe that this is not significant at the moment. In practice LCP seems to correlate quite well with user experience, but I don't expect you to trust Google's opinion on this. Instead Chrome's LCP data is available publicly in the CrUX report, so we welcome independent analyses quantifying the extent of such issues in practice, as well as proposals for alternatives or improvements that do a better job.

Or is your argument just "measuring user-perceived page load performance perfectly is hard so browsers shouldn't even really try"?

achristensen07 commented 2 years ago

I didn't say we shouldn't even really try. I said "I think this is an admirable goal." I also said that there are some issues with our current attempt at reaching that goal. That was based on comments from several parties at TPAC.

RByers commented 2 years ago

> I said "I think this is an admirable goal."

Yes, thank you for that. Sorry for the snark.

> I also said that there are some issues with our current attempt at reaching that goal. That was based on comments from several parties at TPAC.

It is indeed imperfect and probably always will be to some degree. How would you determine where the bar is for "good enough" to be supportive of? Is there an analysis we could do, or a set of P1 known issues which should be addressed?

anniesullie commented 2 years ago

Thanks for the feedback, @achristensen07! Some specific questions about it:

> TPAC notes also say concerning things about the current heuristics

We'd love to work to address the concerns. We reviewed the notes from TPAC and filed issues 84, 85, and 86. Happy to follow up on discussion there; please file another issue if there's one we missed.

> LCP's relationship with lazy image loading is also problematic.

Can you clarify what you mean here? This was discussed briefly at TPAC, but our understanding is that the problem is how lazy loading can be misused: loading the main image on the page late delays the visual content from appearing, which would affect most visual page load metrics, LCP included.

achristensen07 commented 2 years ago

Rick, while I realize this is less useful for those who want to measure the entire internet without modification, I would be more in favor of implementing an API where the server gets to specify somehow what content it thinks is important to measure the timing of. That way, we would not need to have heuristics to guess what is in the background.

Annie, I thought I remembered someone saying that some people were turning off lazy loading of images to decrease their LCP time, but looking through the TPAC notes I think there are other ways to resolve this.

anniesullie commented 2 years ago

> while I realize this is less useful for those who want to measure the entire internet without modification, I would be more in favor of implementing an API where the server gets to specify somehow what content it thinks is important to measure the timing of. That way, we would not need to have heuristics to guess what is in the background.

We purposefully built largest contentful paint on the Element Timing API so that the server could specify which content it thinks is important to measure the timing of. We'd love to see that available to developers more broadly as well!
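For reference, Element Timing is the opt-in mechanism being described: the page annotates an element with the `elementtiming` attribute, and only annotated elements produce `element` performance entries. A minimal sketch (the attribute and entry type are from the spec; the helper and mock data are illustrative):

```javascript
// In markup, the page opts an element into Element Timing:
//   <img src="hero.jpg" elementtiming="hero-image">
//
// In a browser, a PerformanceObserver then reports "element" entries
// for annotated elements only (sketch):
// new PerformanceObserver((list) => {
//   for (const e of list.getEntries()) {
//     console.log(e.identifier, e.renderTime || e.loadTime);
//   }
// }).observe({ type: 'element', buffered: true });

// Illustrative contrast with LCP's heuristic: Element Timing reports
// only what the author annotated, never a browser-guessed element.
// (In real data, unannotated elements produce no entries at all; the
// empty-identifier entry below exists only to make the contrast visible.)
function annotatedTimings(entries) {
  return entries.filter((e) => e.identifier !== '');
}

const mock = [
  { identifier: 'hero-image', renderTime: 430 },
  { identifier: '', renderTime: 900 },
];
console.log(annotatedTimings(mock).map((e) => e.identifier)); // [ 'hero-image' ]
```

This is the trade-off the thread is weighing: Element Timing removes the heuristic by requiring author annotation, while LCP works on unmodified pages at the cost of guessing.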

What we see from the usage data of both APIs is that the Largest Contentful Paint API is appropriate for many more use cases than just measuring the entire internet. Even before Google Search announced its intention to use LCP as a ranking signal in May 2020, we saw that largest contentful paint was used on about 8% of page loads while Element Timing was used on about 0.2% of page loads. So while some performance-minded developers do find it useful to specify which content to measure, we believe the majority of users prefer to have a drop-in heuristic. I think this makes sense when you consider the popularity of lab heuristics like Speed Index.