GoogleChrome / lighthouse

Automated auditing, performance metrics, and best practices for the web.
https://developer.chrome.com/docs/lighthouse/overview/
Apache License 2.0

Possibly a bug, possibly just lack of info reported #12294

Closed: getify closed this issue 3 years ago

getify commented 3 years ago

I'm trying to audit a PWA I built for its Lighthouse scores, and I'm dismayed that one specific metric (LCP) is abnormally larger than the other metrics, which doesn't match reality as far as I can tell. When emulating mobile loading via Lighthouse in Chrome DevTools, my FCP is 1.5s and my TTI is 2.6s, but my LCP is 8.0s (and sometimes as much as 8.9s). This is really bringing down my score.

I've been trying to figure out what's causing Lighthouse to think I'm doing a "largest contentful paint" this long after the page has loaded, because I'm not. Everything is displayed and static by about 2.5 seconds, and nothing changes after that (unless the user interacts). No shifts, no other large loading artifacts, etc.

The report in Chrome DevTools doesn't tell me what part of the app it thinks repainted content that many seconds later, but I'd love to figure out what's triggering this. Either there's something I don't understand about the report mechanisms, or it's not reporting the detail it should to tell me the problem, OR this is just a straight-up bug in Lighthouse.

Can you help me figure out what to do next to track this down?

brendankenny commented 3 years ago

We have a couple of efforts underway to improve how the UX connects the performance metrics to the other parts of the report that help diagnose what's going on with each metric, which will hopefully make figuring out this sort of issue (or identifying Lighthouse bugs) easier in the future.

For now it's a somewhat manual process. Under Diagnostics there should be a "Largest Contentful Paint element" audit, which should tell you what element's paint caused the LCP event and include a screenshot to help identify it. Is the information there helpful?

getify commented 3 years ago

I found the LCP down in the diagnostics section... but unfortunately, the thing it says took nearly 9s to paint absolutely does not. I can visually watch as the test runs, and the element paints within those first few seconds and never changes or moves after.

At first I thought the element it was complaining about was a different element that I animate in (as a usage hint on touch devices) after 5-6 seconds; I imagined that was the element causing the LCP abnormalities. But nope, that one isn't what's triggering it.

The plugin is complaining about a static fixed-layout <h3> element that also has a background-image (the main app logo) in it.

So what next steps can I use to figure out why Lighthouse thinks this is painting (or re-painting?) so much later than it is?

connorjclark commented 3 years ago

Can you share the Lighthouse report? Even better, if you use the CLI with the -GA option (e.g. lighthouse <url> -GA) you could share the artifacts / traces too (found in latest-run/); if we can't see a public URL then this is the next best thing.

brendankenny commented 3 years ago

If the Largest Contentful Paint element result doesn't lead to an obvious late-painting element, usually the DevTools Performance panel is the next step, to identify whether something is changing on the page and thus changing the final LCP.

Or as @connorjclark says, if you can share the page URL or Lighthouse output, we may be able to track down the problem that way.

getify commented 3 years ago

Here's the public URL: https://flashmath.cards :)

adamraine commented 3 years ago

Everything is displayed and static by about 2.5 seconds, and nothing changes after that

Lighthouse uses simulated throttling by default, which loads the page normally and estimates what the metrics would be on a slower device after. This means the metric timings in the report will not match your direct observations of the page load. For more about this: https://github.com/GoogleChrome/lighthouse/blob/master/docs/throttling.md.

my LCP is 8.0s (and sometimes as much as 8.9s)

I cannot replicate an LCP this high on https://flashmath.cards/, but the LCP is still high (4-5s). The LCP appears to be targeting the correct element; Lighthouse takes the LCP timing from when this element reaches its final resting place after the sliding animation finishes.

@getify are you able to provide an example report, artifacts, and trace (see https://github.com/GoogleChrome/lighthouse/issues/12294#issuecomment-806141066 for how to collect artifacts / traces)?

getify commented 3 years ago

Lighthouse uses simulated throttling by default, which loads the page normally and estimates what the metrics would be on a slower device after.

Generally, I understand this sort of estimation/extrapolation from simulated throttling. However, the CSS animation runs for just 0.5 seconds from the DOM-ready event (or whenever the JS runs)... so getting 9s for LCP would mean it's estimating almost 8.5 seconds for the single index.html file, a CSS file, a couple of images, and about 10-15 JS files to load. The total initial payload is well under 100KB gzipped (yes, the SW proactively caches the rest of the resources after page load).

But notably I'm not seeing that sort of inflated timing on any of the other metrics, such as FCP or TTI (both at or under 2s), only LCP. That's what confuses me.

If I turn on "slow 3g" throttling in the Network panel and load the site from an empty cache, the index.html is fully loaded in 2.3 seconds, and the rest of the initial page (CSS, some images, and the JS) fully loads in about 8 more seconds (10-11 seconds total). But that's pretty much worst case. TBH, I don't think 11s worst-case slow-3g loading is that bad.

If I use "fast 3g" throttling (which seems like a fairer worst case), the full page is loaded in under 3 seconds -- nowhere near the 8-9 seconds I'm seeing from Lighthouse.

Is LCP always pinned to worst-case, or does it average worst and best, or something like that?

are you able to provide an example report

I don't have the CLI for Lighthouse, but I just exported this "original trace" profile (JSON) from running Lighthouse in Chrome DevTools... not sure if that's what you're hoping to see, but maybe it's useful?

https://gist.github.com/getify/f847af67bcfbfe01822aae08c93df35b

getify commented 3 years ago

Also, though this isn't really the place to debate the validity of such performance metrics, it sure seems like LCP is unfairly hurting my site's "performance profile". Sorry for this being a little "off topic"...

I think there's a big difference between delays in showing first content, content shifting around as more content loads, and content coming completely to rest. When a site like mine is acting like an app (indeed, it's intended to be installed as a PWA), it doesn't seem unreasonable (or harmful to UX) to have visual animations that intentionally welcome the user to the experience.

If I show content, but then I move it around for a little while (via CSS animations), that's NOT the same thing as jarring layout shifts from progressively loading content, which are more random and disconcerting. In my app's case, there's a very controlled and nice transition for how the page "loads" and shows you the first card. So I think it's unfair to count the 0.5s I spend in that CSS animation against my LCP score, just because it's still moving into its final location.

The whole card's content is fully formed before it slides in, and that sort of thing ought to be taken into account, IMO.

Imagine if my site's logo was animated, like a little spinning halo above a letter or something, but the logo just happened to be the largest element on the page... in that case, it seems like LCP would be infinitely long, since the halo never stops spinning. But just because something is moving or animated doesn't mean the user isn't able to see, consume, or interact with it.

And BTW, I'm not just obsessing over a hidden metric/score for academic reasons. I'm concerned that Google is now planning to use these scores for SEO ranking, so it matters if there's some aspect of the scoring algorithm that's unfortunately targeting content like mine.

Anyway... that's just my little OT mini-rant.

adamraine commented 3 years ago

Can you provide a trace after running Lighthouse with applied throttling? You can do this by unchecking the "Simulated throttling" box in the Lighthouse settings before testing the page. Then export the trace the same way you did before.

You can also export the Lighthouse report by clicking the three dots in the upper right corner and clicking "Save as JSON".

getify commented 3 years ago

Here's the "Lighthouse Report" export, with "simulated throttling" turned off: https://gist.github.com/getify/1983789b7ac5501ccdab226f8e0a4e3d

The performance score was much worse.

connorjclark commented 3 years ago

@getify What do you see as LCP when you run the Performance panel?

For me (no throttling) I get <1s. I ran it a couple dozen times ... once I got an outlier h3 LCP of ~4s.

@adamraine is consistently seeing the h3 LCP of ~4s.

We noticed that the LCP element is created with JS - can you share details about that? Or deploy a source map to help us debug?

(BTW, you can enable element creation stacks in the Experiments section of the Settings panel. It's the second option. Close and reopen DevTools, refresh the page, then select an element and open the "Stack Trace" subpanel. If there is no stack trace, that means the element existed in the original HTML.)

connorjclark commented 3 years ago

[screenshot: Elements panel showing the three card div elements]

From what I can tell, the third div here is the "animating in" element (easier to see if you increase the transition to 5s). I logged the LCP candidates and only ever see one LCP candidate, and it ends up on the second card. I'm guessing it's only created when the animation ends and the third card is marked hidden.

It's unexpected to me that there is only one LCP candidate at the end; I'd expect the "next" card to produce a candidate as it animates in. But regardless, that card is eventually marked hidden, which would invalidate the candidate and push out the LCP time.

adamraine commented 3 years ago

So I think there are a couple of things happening.

@anniesullie can the LCP event only fire when an element is first painted?

getify commented 3 years ago

I think you've correctly reconstructed how the animation and element creation work, but let me clarify in case there are missing details.

On initial page load (and any time a new "card" should be shown), the content to be displayed in a card is pulled from an inline template (a <script> tag in the index.html file), rendered (using Mustache.js), and injected into the DOM via innerHTML.

When a new card is to be slid in, here's what happens:

  1. The "current" card's content is copied to the "behind" card (so that it can show up whenever the new "current" card is flipped horizontally back and forth). This is done simply by setting .innerHTML.
  2. The new card's content is rendered from its template, and injected into the "next" card element (as noted above), which is hidden off screen (display:none and transform: translate(..)).
  3. That "next" card element is shown and animated down into place, exactly in the same spot as the "current" card element behind it.
  4. At the moment the sliding-in animation finishes, its content is copied from the "next" card to the "current" card behind it, and the "next" card element is re-hidden off-screen to re-use the next time.

This movement of DOM content means that I only ever have three card elements, but it creates the effect (through animation) of an infinitely growing deck/stack of cards, without having to create new card elements and discard/GC the old ones that are no longer needed.
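
In rough sketch form, the swap works something like this (the selectors, cardTemplate, and function name here are illustrative stand-ins, not my actual code):

// the three persistent card elements (selectors are stand-ins)
const behindCard = document.querySelector('.card.behind');
const currentCard = document.querySelector('.card.current');
const nextCard = document.querySelector('.card.next');

function slideInNewCard(cardData) {
  // 1. preserve the current card's content in the "behind" card
  behindCard.innerHTML = currentCard.innerHTML;
  // 2. render the new content into the hidden, off-screen "next" card
  nextCard.innerHTML = Mustache.render(cardTemplate, cardData);
  // 3. reveal it and animate it down into place over the current card
  nextCard.classList.add('slide-in');
  // 4. when the animation finishes, copy the content back and re-hide "next"
  nextCard.addEventListener('transitionend', () => {
    currentCard.innerHTML = nextCard.innerHTML;
    nextCard.classList.remove('slide-in'); // hidden off-screen again, for reuse
  }, { once: true });
}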

getify commented 3 years ago

The LCP event does not fire for the <h3> element before the animation starts because it is not in the viewport.

I know that CLS isn't affected by CSS translates, but I was assuming that the <h3> element in question becomes an LCP candidate as soon as it's visible, even if it's not in its final spot. In fairness, since the card slides downward, the <h3> isn't even visible until almost the end of the animation. I would have expected the lower elements in the card, like the "Challenge >>" or "Practice >>" buttons, which appear first (a couple hundred milliseconds before the <h3>), to have been the initial LCP candidates, and then switched to the bigger <h3> when it appears.

In any case, I think at worst this means my LCP score is penalized the full 0.5s of the animation, as opposed to the element registering at the beginning of the animation and not being updated simply because it moves to a new location.

Could I add some "element timing" attribute to the <h3> to force it to be counted as an LCP earlier? Would that reduce this particular part of the penalty?
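
For reference, the attribute I mean is from the Element Timing API; a minimal sketch, with a made-up identifier value:

// markup sketch: <h3 elementtiming="app-logo"> ... </h3>
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('element timing:', entry.identifier, entry.renderTime);
  }
}).observe({ type: 'element', buffered: true });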


I think the bigger issue I was confused by is still why certain metrics were happening very quickly (1-2s) whereas LCP was being pushed out to 4x that time or more?

I would have expected the "calibration" issue (around throttling) mentioned earlier to affect all these metrics roughly proportionally, but LCP seemed to be an outlier. Even if we "solved" the 0.5s LCP addition, it seems my Lighthouse testing would still have reported 7.5s for LCP as opposed to ~2s or less for the others.

If this is purely a calibration issue, I'm happy to chalk it up mostly to that. My Chrome is a stock install on a Windows laptop; I haven't done anything to it in terms of CPU calibration -- that's just how Chrome defaulted on install.

It would be nice for the Lighthouse report UI to make the relationship between these different metrics, and the impact of simulated throttling on them, clearer. I appreciate the insight in this issue thread immensely, but I would never have guessed some of these details from what I was seeing.

connorjclark commented 3 years ago

There is no LCP candidate until the animation finishes.

// Log every LCP candidate as the browser reports it; buffered: true also
// replays candidates that fired before this observer was registered.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log('LCP candidate:', entry.startTime, entry);
  }
}).observe({type: 'largest-contentful-paint', buffered: true});

(increase the transition time so it's easier to see)

The only contentful element on the page is being animated in, and it starts out of the viewport. We think it's because an LCP candidate is only considered on first paint, and on first paint that element is out of the viewport. This may be a bug in the metric detection in Chrome, or it may be an unfortunate but necessary concession - @anniesullie may know more.

Regardless, even if Chrome detected some candidates as the animation occurred, the fact that that card gets hidden and a new one takes its place means, I think, that LCP will always be at least 0.5s later than you'd expect.

I think the bigger issue I was confused by is still why certain metrics were happening very quickly (1-2s) whereas LCP was being pushed out to 4x that time or more?

My guess is that since the LCP depends on JS running, the network+CPU throttling has a heavier impact on it than on FCP, which (for some reason) occurs earlier and without needing JS. (This is odd; it seems the card being animated in is considered a contentful paint, just not a possible LCP candidate.)


Ignoring metrics for a moment and thinking about how the DOM is being manipulated here...

Can you think of a way to persist the DOM for the card being animated in, instead of hot-swapping it with a new bit of DOM?

I just mean step 4:

At the moment the sliding-in animation finishes, its content is copied from the "next" card to the "current" card behind it, and the "next" card element is re-hidden off-screen to re-use the next time.

Swapping the DOM like that could result in poor performance on low-end devices (the browser is tossing out all the work it did to lay out and paint that element at the end of the animation -- could be janky?), so I'd recommend testing it if you happen to have an old-gen phone lying around. You could hook up a DevTools session to it, run a performance trace, and see if there is a large task around the time the animation ends.


Could I add some "element timing" attribute to the <h3> to force it to be counted as an LCP earlier? Would that reduce this particular part of the penalty?

Unfortunately no, that would make gaming the metric trivial.

For your own measurements, you could listen for the animation starting/ending and track that yourself; that may be useful if plugged into a RUM product.
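
Something along these lines (the selector and mark names are placeholders):

// Record when the card's slide-in transition starts and ends, for RUM.
const card = document.querySelector('.card.next'); // placeholder selector
card.addEventListener('transitionstart', () => {
  performance.mark('card-slide-start');
});
card.addEventListener('transitionend', () => {
  performance.mark('card-slide-end');
  performance.measure('card-slide', 'card-slide-start', 'card-slide-end');
  // report performance.getEntriesByName('card-slide') to your RUM endpoint
}, { once: true });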

anniesullie commented 3 years ago

@anniesullie can the LCP event only fire when an element is first painted?

I think so but adding @npm1 to clarify.

npm1 commented 3 years ago

Indeed, if you paint a text element outside of the viewport and then animate it in then it won't count as LCP because we'll record its first size as 0.
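
A contrived sketch of that situation (pair it with the candidate-logging snippet above; per the behavior just described, no LCP candidate should ever be logged for this heading):

// First paint happens outside the viewport, so the recorded size is 0.
const h1 = document.createElement('h1');
h1.textContent = 'Hello, largest contentful paint';
h1.style.cssText = 'transform: translateY(-200vh); transition: transform 0.5s';
document.body.append(h1);
// let the off-screen position paint first, then slide into the viewport
requestAnimationFrame(() => requestAnimationFrame(() => {
  h1.style.transform = 'translateY(0)';
}));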

getify commented 3 years ago

Can you think of a way to persist the DOM for the card being animated in, instead of hot-swapping it with a new bit of DOM?

Fairly easy to do, but I think it creates more GC of DOM elements that way, which as I understand it can cause bigger performance issues on low-end devices. I might be able to create a rotation of the three DOM elements (behind, current, next) by switching their classnames around, which would avoid any innerHTML reassignment churn, and would (in theory) play nicer with LCP.

However, the downside is I think this would make the CSS more complex, as it requires me to do atomic changes to these classnames or the animations/transitions might go haywire. I might have to create intermediate "reset" classes to disable these transitions while I re-align classes. Unclear if the outcome benefit is worth this kind of complexity cost (and the extra frames).

FWIW, the most complex content in any card in this app is still like a dozen or so DOM elements, so the swapping of that content via innerHTML assignment doesn't seem like it would be much additional CPU burden even for low-end devices.

getify commented 3 years ago

Related question: is there a way for me, using the DOM APIs, to "move" the content from one card to another (not via innerHTML reassignment) that would prevent the LCP from resetting? I only used innerHTML for simplicity, but if there's a more appropriate way to move the content from one parent element to a different one, I'd be fine taking that approach.

connorjclark commented 3 years ago

Related question: is there a way for me

Yes: just move it where you want it to go: https://stackoverflow.com/questions/7555442/move-an-element-to-another-parent-after-changing-its-id

I have no idea how this would affect LCP candidates ... that's a good question :) But this is definitely the standard way to move DOM elements.
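
For example (element names hypothetical), appendChild on a node that's already in the document moves it rather than copying it:

const currentCard = document.querySelector('.card.current'); // hypothetical
const nextCard = document.querySelector('.card.next');
currentCard.textContent = '';                    // drop the old content
while (nextCard.firstChild) {
  currentCard.appendChild(nextCard.firstChild);  // moves the node, no copy
}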

getify commented 3 years ago

Yeah, sorry, to clarify, I'm well aware of how to do it in the DOM... actually multiple different ways. I was asking if there was a way to do it that would address my LCP issue. :)

connorjclark commented 3 years ago

I used the code snippet shared above for logging LCP candidates on example.com, moved the LCP element around in DevTools, and no new candidates were created. Seems like it should just work. Just be sure to apply all the appropriate classes at the same time (in the same JS task); otherwise you may get an intermediate state where the node being moved is styled in such a way that the LCP element is no longer the largest.
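
i.e. something like this, where all three cards swap roles in one synchronous step (class names invented):

function rotateCards(behind, current, next) {
  // all three role changes happen in the same JS task, so the browser
  // never paints an inconsistent intermediate state
  behind.className = 'card next';     // recycled off-screen for the next deal
  current.className = 'card behind';
  next.className = 'card current';
}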

getify commented 3 years ago

Based on suggestions here, I just made the minor tweak to shift the content (from the slid-in "next" card to the "current" card) via appendChild(..) instead of innerHTML assignment. It turned out to be a change of only a few lines. It may not make any huge difference, but if low-end phones find it easier to move DOM elements from parent to parent than to re-create them from HTML, that's a tiny win. Thanks for those suggestions!

Unfortunately, when I re-ran Lighthouse, I didn't see any improvement/change in the reported LCP (this most recent run was 8.6s).

What do you see as LCP when you run the Performance panel?

If I'm interpreting this screenshot correctly, I believe the "Performance" panel is reporting about 1.2s for LCP, far below the 8.6s that Lighthouse reported for it.

[screenshot: Performance panel showing LCP at ~1.2s]

I have not done any of the suggested CPU tuning yet, but I will pursue that soon if we don't find another culprit/solution.

getify commented 3 years ago

Update... I just made a tweak to my app to not init the service worker (and thus not fire off its pre-caching of the rest of the site) until after the first card is fully shown (so... after the LCP should be finished).
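
Roughly like this (the selector and SW path are stand-ins for my actual code):

// Register the SW only once the welcome card's slide-in has finished,
// so its background pre-caching can't compete with the initial load.
if ('serviceWorker' in navigator) {
  const card = document.querySelector('.card.current'); // stand-in selector
  card.addEventListener('transitionend', () => {
    navigator.serviceWorker.register('/sw.js');         // stand-in path
  }, { once: true });
}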

That had a much larger impact on (improving) the LCP score than I would have expected. It went from 7.9-8.9s on average in my tests down to 3.1-3.4s. That's surprising.

I guess in some ways, this bug might now be resolved.


But in other ways, I still find it strange/confusing that the LCP is uniquely affected by the fact that, in throttled/slow internet, those other resources loading in the background via the SW might have taken much longer to complete. My app's LCP isn't actually dependent on that background loading, though clearly Lighthouse seems to assume so.

My app fires off the 0.5s animation around DOM-ready (or, at worst, onload), which happens once only the minimal set of resources (images, scripts, CSS) needed for the welcome card to display have loaded. None of what the SW does in the background is necessary for that page-load animation, so it seems unusual/unfair that the LCP computation assumes such a heavy impact.

As far as I can tell, on a real (slow/old) device, the animation would run long before any of that background SW loading was completed, so its real-world LCP behavior would not have seen such a huge negative impact as Lighthouse is applying.

Not clear to me what, if anything, Lighthouse could change here. But it's certainly a confusing signal to me as an app developer.

anniesullie commented 3 years ago

LCP just measures when the largest image/text block is painted. Neither the metric nor Lighthouse takes service workers into account explicitly. So maybe the service worker init really is blocking the paint under Lighthouse's throttled config?

getify commented 3 years ago

@anniesullie

I appreciate what you're explaining.

However, prior to my most recent change, the service worker was not being initialized until after DOM-ready had fired and after all my page's scripts had loaded. IOW, at the same moment I was ready to begin the 0.5s animation to show the welcome card, that's when I was also requesting the SW -- no earlier. In addition, SWs are asynchronous, so even after the SW itself asynchronously loaded, it then started asynchronously loading all the additional assets of the site in the background. I just find it hard to believe that any of that asynchronous, off-thread work could have affected the main-thread animations so severely that they slowed from ~3s to ~9s.

The only change I made was to defer that requesting of the SW until after the animation completes, delaying the start of all that background work by ~0.5s.

What I imagine was happening, before my change, is that Lighthouse was seeing all those background resources being requested by the SW (without caring that it was a SW that requested them), and since the network throttling is definitely sensitive to that much network activity, it was taking a long time to churn through them all in the background. Perhaps it assumes that if resources are still loading, then scripts can't run or animations can't fire in parallel? I dunno.

I'm trying to explain that in none of my testing, throttled or otherwise, did I ever see the animation actually delayed by that background activity in any meaningful way. Even on "slow 3g", the animations happened much earlier than the ~11s full page load.

It was only Lighthouse profiling that seemed sensitive to that background SW resource caching, and even then, only the LCP computation ended up affected. I don't know why that is, but it's the best I can ascertain from my available evidence.

anniesullie commented 3 years ago

Thanks @getify. Something is really strange here. Is it possible to get a trace from a lighthouse run with the service worker? I think it should have the info we need to understand what's happening (filmstrips, LCP events, service worker events, etc). I know you've put a lot of time into this already and made some changes to work around the issue, but it would be great if it were possible for us to get to the bottom of it!

getify commented 3 years ago

@anniesullie I had posted several traces earlier in this thread (from before I made that SW delay change). Did those have what was needed or not?

If I needed to re-create this scenario now, I think I would have to re-deploy an old snapshot of the code to the live server. I have all that code in version control, so I can, but I'm not particularly eager to disrupt the versions of files that live users are being served. I can (and do) run a localhost emulation of the site for dev purposes, but that environment won't give us the real picture of how these files get loaded from a remote server.

I guess if the trace exports/reports I posted earlier aren't sufficient to see what we need to diagnose this, I could deploy the site on that same server at a different test-domain, but... that's work I'd rather not do if we don't need to.

anniesullie commented 3 years ago

@getify sorry I missed that! I looked at the trace here: https://gist.github.com/getify/f847af67bcfbfe01822aae08c93df35b

The metrics calculation inside the trace reports LCP as 1.2 seconds:

[screenshot: trace metrics showing LCP at 1.2 seconds]

So I think your intuition is right; this looks like a difference in how Lighthouse processes the trace events compared to DevTools and other tools. Handing it back over to @adamraine and @connorjclark (my team is the one that outputs the LCP events to the trace).

adamraine commented 3 years ago

@anniesullie that trace represents an unthrottled page load before Lighthouse interpolates the throttled metrics using simulated throttling.

This report (https://github.com/GoogleChrome/lighthouse/issues/12294#issuecomment-811282633) is from a Lighthouse run with applied (DevTools) throttling and it still reports an LCP of ~8s. Based on these results, simulated throttling was accurately predicting the LCP for @getify's environment. The problem is that I can't replicate the ~8s LCP reported from this environment.

@getify, for us to truly diagnose what was causing your LCP to jump from ~3s to ~8s, we need a full DevTools trace from a Lighthouse run with applied throttling where the LCP was 8-9s. Rolling back your SW changes would be best, but if the high LCP can be reproduced in a test environment, that might still be helpful.

getify commented 3 years ago

Is this export (from this comment) not what you're asking for?

That's the first export I did upthread, and it was before I had made any code changes or disabled any throttling.

If that's not what you wanted, I'm not sure how to provide it; what I sent is what I thought was being asked for, so more specific steps/details would be helpful.

adamraine commented 3 years ago

That is a trace from a Lighthouse run with simulated throttling, and it was the correct profile to provide based on what we asked at the time. The process for retrieving the trace with applied throttling is the same; you just need to run Lighthouse with applied throttling:

  1. Open the Lighthouse panel
  2. Make sure "Simulated throttling" is unchecked in the Lighthouse settings
  3. Click "Generate report"
  4. Once the report is generated, click "View Trace" below the metrics
  5. Save the trace by clicking the "Save profile..." button

getify commented 3 years ago

What is the difference between "Applied Throttling" and "Simulated Throttling"?

The first export I did was with "simulated throttling" checked (the default). Then, per request, I unchecked that and did a second export. Your comment above says my second export had "applied throttling". So if the first export was with simulated throttling, and the second export (in your terms) had applied throttling, then what exactly is different about what you're asking for in this third export?

Sorry, not trying to be difficult; I just don't understand, and I want to make sure I do it exactly right if I'm going to go to the effort of re-deploying this older site version on a separate domain, etc.

adamraine commented 3 years ago

No problem. After reviewing my comment here https://github.com/GoogleChrome/lighthouse/issues/12294#issuecomment-811206260 I can see where your confusion is coming from. I should have clarified that the Lighthouse report and DevTools trace were different. To be clear:

Your second export was a Lighthouse report (in JSON format), not a DevTools trace. Sorry for the confusion.

What is the difference between "Applied Throttling" and "Simulated Throttling"?

Applied throttling slows down the page load itself, using "Fast 3G" network throttling and a 4x CPU slowdown. When you run Lighthouse with these settings, you will see the page load slower than normal, and the metrics (LCP, FCP, etc.) will match the DevTools trace.

Simulated throttling loads the page normally, then interpolates the throttled metrics after the page loads. This lets us produce results much quicker, but the metrics in the DevTools trace will not match the metrics reported by Lighthouse.

For more details on this topic, please check out https://github.com/GoogleChrome/lighthouse/blob/master/docs/throttling.md.
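
If you ever script this via the Node API, the same choice is exposed as a flag; a rough sketch (treat the exact option values as assumptions and check the doc above):

// Choosing the throttling mode programmatically (sketch).
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const chrome = await chromeLauncher.launch({chromeFlags: ['--headless']});
const result = await lighthouse('https://example.com', {
  port: chrome.port,
  throttlingMethod: 'devtools', // 'simulate' is the default
});
console.log(result.lhr.audits['largest-contentful-paint'].displayValue);
await chrome.kill();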

getify commented 3 years ago

OK, I deployed an older version of the site's code (previous to the SW change) to the same server, but under a different domain ("test.getify.com").

I ran the Lighthouse test (with "simulated throttling" turned off), and then exported the "original trace" here: https://gist.github.com/getify/599bd7fff06c88b487f94d8263586196

I was perplexed to see that for this attempt (as opposed to earlier in the thread when I tried this process), the timings were MUCH, MUCH worse -- so much so that a few of the audits timed out. I don't understand why the run previously completed fully (albeit in the 11-13s range), while now the same code on a different domain is timing out after 20-30s. The code is the same, the server is the same; the only difference is the domain (and both are on standard SSL certs).

In any case... I decided to just try loading the site (fresh, after clearing all storage/SW) with "Fast 3G" throttling turned on in the Network panel, to see whether it worked or failed. It loads fine, around the 4-5s mark. I exported that HAR here: https://gist.github.com/getify/93bd76e06828a6fb0efee5c24a2bb043

So I'm completely stumped as to why normally loading the site (with Network panel throttling) works fine and gives reasonable numbers, whereas letting Lighthouse do its own throttling produces such terribly worse performance.

getify commented 3 years ago

For additional context, here's what the CWV report on Google Search Console says for my site's observed LCP:

[screenshot: Core Web Vitals report in Google Search Console showing the site's observed LCP]