gatsbyjs / gatsby

Worse performance results with Lighthouse v6 (?) #24332

Closed kuworking closed 3 years ago

kuworking commented 4 years ago

Just wondering whether there is some information that could be of use here, since I've seen a significant worsening of Lighthouse results on my sites when comparing Lighthouse v5.6 vs the new v6.0 (https://lighthouse-metrics.com/)

In a complex site of mine, the performance score goes from ~90 to ~50. In a simple starter of mine, it drops from ~98 to ~80.

This doesn't happen in starters such as https://gatsby-starter-default-demo.netlify.app/ or https://gatsby-v2-perf.netlify.app/

But it does happen to www.gatsbyjs.org (from ~98 to ~73) or to https://theme-ui.com (from ~90 to ~75)

Since I spent some time achieving 98-100 scores with my code (which made me very happy), I feel I don't have a lot of room left for improvement (though I probably do), so I thought I'd ask here whether there's something going on

Thanks

shanekenney commented 4 years ago

It looks like Lighthouse 6 introduces some new metrics and removes some others from v5 so a change in score is certainly likely. This article explains what has changed:

https://web.dev/lighthouse-whats-new-6.0/

There's also a link at the end to a score calculator which is really useful for breaking down the score and understanding what factors are contributing the most.

https://googlechrome.github.io/lighthouse/scorecalc

I get the impression there's more focus on main-thread interactivity in v6, so if your site includes large JS bundles that's probably a significant factor.

kuworking commented 4 years ago

Yes @shanekenney, I'm aware, but I don't really know how to reduce it apart from removing parts of the site to see which parts are causing this

Do you also see the impact on the gatsbyjs and theme-ui sites? I'm curious and would love to know what optimizations they may be considering for their sites, or whether they have spotted a specific cause

juanferreras commented 4 years ago

This issue is a good place to discuss overall Lighthouse / PageSpeed Insights scores and the possible regressions we're seeing with v6.

@kuworking one thing worth noting is that lighthouse-metrics.com seems to use "Emulated Nexus 5X" for 5.6 and "Emulated Moto G4" for 6.0 which could also add some noise to the comparison.

This benchmark over 922 sites claims that in v5 the median performance score for a Gatsby site is 75. I'll try to do a quick comparison using hosted solutions to keep my local network from being yet another source of variability.

Currently (with Lighthouse v5.6 / PageSpeed Insights)

PSI runs on a Moto G4 on "Fast 3G". Source

Certain "flagship" sites built with Gatsby are not really performing great on PageSpeed Insights (which I assume is still using Lighthouse v5.6, and is subject to the usual run-to-run variability; running it 3x or 5x and averaging would give more reliable numbers).

However some other sites (and most starters) are performing very well on PageSpeed Insights:

The average TTI is noticeable. While v6 drops the overall weight of TTI from 33% to 15% and removes First CPU Idle, it adds TBT with a weight of 25%, which could explain a general regression of scores simply due to overall JS parsing and execution.
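To make the new weighting concrete, here is a small sketch of how a v6 performance score is assembled from the individual metric scores (not from the original comment; the weights are the published Lighthouse v6 weights, and the example per-metric scores are invented):

// Sketch: how Lighthouse v6 combines per-metric scores (0-1) into the overall
// performance score. Weights are the documented v6 weights; the example
// metric scores below are invented for illustration.
const weightsV6 = {
  firstContentfulPaint: 0.15,
  speedIndex: 0.15,
  largestContentfulPaint: 0.25,
  timeToInteractive: 0.15,
  totalBlockingTime: 0.25,
  cumulativeLayoutShift: 0.05,
}

const metricScores = {
  firstContentfulPaint: 0.9,
  speedIndex: 0.85,
  largestContentfulPaint: 0.7,
  timeToInteractive: 0.75,
  totalBlockingTime: 0.6, // a weak TBT drags the total down hard at 25% weight
  cumulativeLayoutShift: 1,
}

const performanceScore = Object.keys(weightsV6).reduce(
  (total, metric) => total + weightsV6[metric] * metricScores[metric],
  0
)

console.log(Math.round(performanceScore * 100)) // 75 for these example numbers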

Lighthouse v6 (with WebPageTest.org)

Here are the results, you can see the Lighthouse report by clicking on its number. I'm extracting the values from that report.

However, notice the regression on the following two sites:

Some of the open questions I have.

  1. Is the overall TTI (and TBT) explained by JS parsing + executing, or are there other factors harming interactivity?
  2. If so, could we be more aggressive (either in Gatsby by default, like the recent change enabling granular chunks, or behind an experimental flag) when building the chunks, so we only send what the first load needs (i.e. prevent app-[hash].js from carrying excess)? It could also simply be a matter of documenting ways to extend the webpack config with more guidance.
  3. Could patterns like module/nomodule help decrease chunk sizes? Recommending/documenting usage of @loadable/components (see the sketch after this list)? Partial rehydration?
  4. This may be a second step towards pushing high scores, but since FMP is no longer a factor, is LQIP on gatsby-image helping or harming when it comes to LCP? LCP of store.gatsby.org on the run above was 4.7s!
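On point 3, a minimal sketch of what component-level code splitting with @loadable/component might look like (the HeavyChart component and its path are hypothetical):

// Sketch of component-level code splitting with @loadable/component.
// "HeavyChart" and its path are hypothetical placeholders.
import React from 'react'
import loadable from '@loadable/component'

// The heavy component is only fetched when it actually renders,
// so it stays out of the shared app/commons chunks.
const LazyChart = loadable(() => import('../components/HeavyChart'), {
  fallback: <div>Loading chart…</div>,
})

const ReportsPage = () => (
  <main>
    <h1>Reports</h1>
    <LazyChart />
  </main>
)

export default ReportsPage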

(I'm using the links above just as examples – if anyone would like a certain link removed I can gladly edit the message)

me4502 commented 4 years ago

My site (https://matthewmiller.dev/) appears to have gotten better (~92 to ~95), but some of the new tests reveal a few things that could probably be improved.

The unused JavaScript audit, for example (first column is size, second column is the amount that's unused): [screenshot] I assume this is due to items required for other pages being included here, so using something like loadable-components could help a bit.

kuworking commented 4 years ago

I'm having a hard time understanding Largest Contentful Paint, in the sense that I'm getting very large values without knowing why, and I see a discrepancy between the value in the report (for example 7.4 s) and the LCP label that appears in the Performance > View Trace tab (~800 ms)

I can see that something similar seems to happen in the starter https://parmsang.github.io/gatsby-starter-ecommerce/

juanferreras commented 4 years ago

As an update – seems that PageSpeed Insights has soft launched the update to run Lighthouse v6 (it may not be in all regions yet).

[screenshot: gatsbyjs.org Lighthouse scores]

Link to test https://gatsbyjs.org/. Currently getting results varying from low 60s to mid 80s, mainly depending on the values of TBT and TTI.

daydream05 commented 4 years ago

@kuworking there might be an issue with Lighthouse v6 recognizing gatsby-image.

According to web.dev:

For image elements that have been resized from their intrinsic size, the size that gets reported is either the visible size or the intrinsic size, whichever is smaller.

In my case, I think Lighthouse isn't respecting the visible size.

[screenshot]

And here's the image in question

[screenshot]

It might be accounting for the intrinsic size, which is 3000 pixels, hence the 13s LCP for me.

Jimmydalecleveland commented 4 years ago

@daydream05 I had similar theories and findings as well, so I tested my pages without images and still had a crazy long LCP (10-12 s). I have a lot going on in my project, so it could be other variables as well, but I'm curious whether you've tested a page with text content and no images yet.

dougwithseismic commented 4 years ago

https://dougsilkstone.com/ recently dropped from 100 to ~79. It jumps up to ~90 when Google Tag Manager (and Google Analytics) are removed.

Will report back on more findings as I test things.

Edit: Hit 100 when removing the Typekit-loaded font from gatsby-plugin-web-font-loader (also using preload-fonts cache).

Jimmydalecleveland commented 4 years ago

GTM is affecting my project a fair amount overall, but it isn't that drastic a change when I remove it (5-10 points tops, on sub-50 scores after LH6). I still need to do more testing but just wanted to throw that out there.

daydream05 commented 4 years ago

@Jimmydalecleveland interesting! I also have another site where the screen is just text, and it's blaming the hero text as the main cause of LCP. And LCP only accounts for whatever is in view, which doesn't make sense. How can a piece of text be that big of a problem?

@dougwithseismic I also use Typekit and it's definitely one of the major culprits for lower Lighthouse scores. I wish there was a way to fix Typekit, since they don't support font-display.

I think Lighthouse v6 is really tough on JS frameworks because of how they changed the weighting of the scores (more focus on blocking time), and Gatsby sites have historically had low script evaluation/main-thread scores due to rehydration and other things.

daydream05 commented 4 years ago

@dougwithseismic how did you link the Typekit font without using the stylesheet?

t2ca commented 4 years ago

I am having a similar experience: with Lighthouse 5.7.1 my performance score was about 91, but Lighthouse 6 has dropped it dramatically to about 60.

ramojej commented 4 years ago

https://dougsilkstone.com/ recently dropped from 100 to ~79. It jumps up to ~90 when Google Tag Manager (and Google Analytics) are removed.

Will report back on more findings as I test things.

Edit: Hit 100 when removing the Typekit-loaded font from gatsby-plugin-web-font-loader (also using preload-fonts cache).

I don't even have these plugins installed, but my mobile score dropped from 90+ to 60-70.

Zellement commented 4 years ago

Same here. Massive drop from 90ish to 60ish on multiple sites.

dimadotspace commented 4 years ago

+1 drop of about 30+ points

michaeljwright commented 4 years ago

Is anyone addressing this? It seems like there is no point using Gatsby over Next if it doesn't deliver better scores out of the box.

kuworking commented 4 years ago

Is anyone addressing this? It seems like there is no point using Gatsby over Next if it doesn't deliver better scores out of the box.

Do you have any numbers from Next?

I am wondering whether these scores are the new normal for fast sites (that are not static, JS-free, and likely also image-free)

cbdp commented 4 years ago

Do you have any numbers from Next?

https://nextjs.org/ has a score of 85, with both Largest Contentful Paint (3.8 s) and First Contentful Paint (2.8 s) being the main offenders. It also has a bunch of unused JS. That's down from ~95 in Lighthouse 5.7.1. It's "only" a drop of around 10 points, whereas Gatsby sites seem to lose twice as many.

I'm quite new to this world, but I'm following this issue after my Gatsby site lost around 25 points when tested with Lighthouse 6.0.0 from npm. Interestingly, if I use PageSpeed Insights rather than npm Lighthouse, my site goes back to around ~99, whereas gatsbyjs.org gets ~70 on PageSpeed Insights but ~84 with npm Lighthouse. Something is probably being tweaked somewhere, I guess? All of them are getting 'Unused JS' warnings though.

kuworking commented 4 years ago

Is anyone addressing this? It seems like there is no point using Gatsby over Next if it doesn't deliver better scores out of the box.

Do you have any numbers from Next? I am wondering whether these scores are the new normal for fast sites (that are not static, JS-free, and likely also image-free)

A Next.js website -> https://masteringnextjs.com/ 77 mobile score. A lot of "Unused JS".

I see better scores with lighthouse-metrics https://lighthouse-metrics.com/one-time-tests/5edfbbb1cf858500080125f7

But I also don't see images there, and in my experience images seem to have a high (and legitimate IMO) impact

Yet gatsbyjs.org has no images either, and its score is (relatively) bad https://lighthouse-metrics.com/one-time-tests/5edfbc58cf858500080125ff (as compared with @cbdp's example)

Let's see what gatsby devs think about this

dougwithseismic commented 4 years ago

With a few tweaks, site is back to top scores.

It seems to me like a case of Google moving the goalposts to be more strict about performance, notably FCP.

Our sites aren't slow by any means; they're just being judged by different criteria. I'll help out on this one ✌️

Pyrax commented 4 years ago

Just wanted to drop this useful calculator for comparing results from v6 with v5: https://googlechrome.github.io/lighthouse/scorecalc/

Lighthouse scores generally vary a lot, even when using it through PageSpeed Insights. For example, for https://www.gatsbyjs.org/ I received everything from 64 to 88 mobile performance across 5 runs. Hence, for tracking down this issue the calculator might be useful to see the consequences of the different weights on the same run (note: as the metrics are a little different, some values like FMP must be assumed from former measurements).
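One way to smooth that variance out is to run Lighthouse from Node several times and average the result; a sketch, not from the original comment, assuming the lighthouse and chrome-launcher npm packages:

// Sketch: run Lighthouse a few times and average the performance score to
// reduce run-to-run noise. Assumes the lighthouse and chrome-launcher packages.
const lighthouse = require('lighthouse')
const chromeLauncher = require('chrome-launcher')

async function averagePerformanceScore(url, runs = 3) {
  const scores = []
  for (let i = 0; i < runs; i++) {
    const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] })
    const { lhr } = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ['performance'],
    })
    scores.push(lhr.categories.performance.score * 100)
    await chrome.kill()
  }
  return scores.reduce((a, b) => a + b, 0) / scores.length
}

averagePerformanceScore('https://www.gatsbyjs.org/', 3).then((score) =>
  console.log(`Average performance score: ${Math.round(score)}`)
)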

PS: Here is a comparison of two runs from PageSpeed Insights for gatsbyjs.org: Run 1: v6 = 67, v5 = 85. Run 2: v6 = 78, v5 = 87. The biggest impact is caused by the new metric Total Blocking Time, which scores below 70 in both runs and also carries a weight of 25%.

harrygreen commented 4 years ago

Yep, to add to @Pyrax: LCP (Largest Contentful Paint) and TBT weigh 25% each in Lighthouse v6. So we focussed our efforts on addressing those. We found:

LCP

TBT

DannyHinshaw commented 4 years ago

This recent Lighthouse update seems to have just screwed everyone's perf scores, including their own:

[screenshot]

The only Gatsby site of mine that hasn't really been obliterated is a site that's basically a single page and like 99% HTML. But even that one dropped about 5-10 points.

I'm seeing the inverse of most people though; that is, Lighthouse in the Chrome browser is still showing good scores for my site, but when run on PageSpeed Insights the perf score drops 20-30 points... maybe my Chrome Lighthouse version is behind? Chrome is up to date; I'm not sure how to check the built-in Lighthouse version...

dylanblokhuis commented 4 years ago

maybe my Chrome Lighthouse version is behind? Chrome is up to date; I'm not sure how to check the built-in Lighthouse version...

The Lighthouse version is shown at the bottom of the audit. [screenshot]

DannyHinshaw commented 4 years ago

@dylanblokhuis ah, yep there it is. I'm on 5.7.1, is v6 not yet shipped in Chrome?

cbdp commented 4 years ago

@dylanblokhuis ah, yep there it is. I'm on 5.7.1, is v6 not yet shipped in Chrome?

It is not. Not yet anyway. If you want the latest, you can install it from npm and then run lighthouse https://yoursite.com --view and you'll get your score in the same format as you're used to with Chrome audit :)

Undistraction commented 4 years ago

For anyone else who's taken a big hit in scores, #24866 might also be relevant. There has been a seemingly pretty significant change to how Gatsby is handling chunking. While the change definitely appears to make a lot of sense, for us at least it has resulted in code that was previously distributed across chunks being concentrated in the commons and app chunks, meaning a significantly bigger JS load/parse.

The most concerning thing here is that these metrics are going to start impacting Page Rank relatively soon.

I've stripped out all third-party requests (Tag Manager/Typekit/Pixel/Analytics/ReCaptcha) and that's only giving a relatively small score boost, so something else is at play.

Also, for anyone looking to run Lighthouse 6 locally, it is available now in Chrome Canary and slated to ship to Chrome in July some time.

nandorojo commented 4 years ago

First: I got in touch with a Google engineer who's working on web.dev and asked about this. Not sure if that will lead to any greater explanation, but they seem intent on helping. I'll follow up once I've managed to chat with them.


My performance scores went from 94-99 to 40-55. 😢

Largest Contentful Paint for my website mostly applies to pages with large images. For some reason, it's saying the images are taking something like 14 seconds to load.

If you open any of the minimal Gatsby starter sites, any pages with images seem to be in the 70s max.

Here are the first two starters I saw with many images:

ghost.gatsby.org:

[screenshot]

gcn.netlify.app:

[screenshot]

However, the Gatsby starter blog has 98 performance (granted, it's a super minimal page with just some text):

[screenshot]

gatsbyjs.com:

[screenshot]

Compare old scores to new scores in Chrome

You can still compare the old vs. new Lighthouse method scores without using the CLI. I find it useful to see what has changed.

View old Lighthouse tests

To view old Lighthouse scores, run the audit from Chrome DevTools instead of the Lighthouse extension in the browser toolbar.

[screenshot]

View new Lighthouse tests

Click the icon from your chrome extensions bar.

[screenshot]

My page changes

These are the two scores I have for the exact same page:

Old lighthouse (via Chrome dev tools)

[screenshot]

New lighthouse (via Chrome extension on the address bar)

[screenshot]

🤷‍♂️

kuworking commented 4 years ago

@nandorojo my impression with images is that the emulation uses a really slow connection, and there images do take a long time to render

Since removing images is not always an option, perhaps these 70s scores are the normal ones for this type of page

And delaying their loading so that the user can start interacting sooner doesn't seem to do the trick (in my case)

wardpeet commented 4 years ago

Hey, sorry for the late answer. I've worked on Lighthouse, so I'll try to explain as well as I can.

Chrome devs have published "Web Vitals", essential metrics for a healthy site. It contains many metrics, but the core ones are Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS). For tools like Lighthouse, FID is swapped with Total Blocking Time (TBT).

Lighthouse v6 also takes these metrics into account and has shifted its weighting accordingly. This doesn't mean Gatsby is slow; it might just mean that some different optimizations are necessary.

This is how things changed: [screenshot: Lighthouse metric changes]

If you want to know more about LCP you should check out https://www.youtube.com/watch?v=diAc65p15ag.
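As an aside (not part of the original comment), the same core metrics can be collected from real users with the web-vitals npm package; a minimal sketch assuming its v1-era getCLS/getFID/getLCP API:

// Sketch: report Core Web Vitals from real users with the web-vitals package
// (assuming its v1-era getCLS/getFID/getLCP API).
import { getCLS, getFID, getLCP } from 'web-vitals'

function sendToAnalytics(metric) {
  // metric contains { name, value, id, ... }; where you send it is up to you.
  navigator.sendBeacon('/analytics', JSON.stringify(metric))
}

getCLS(sendToAnalytics)
getFID(sendToAnalytics)
getLCP(sendToAnalytics)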

So let's talk about Gatsby. Gatsby itself is still pretty fast and we're improving it even more. We're creating new APIs so page builders like MDX, Contentful's rich text, ... can optimize the bundle as well. A lot can be done by optimizing your LCP. Make sure that when you use fonts & images, they aren't loaded lazily and are loaded as soon as possible. These assets should be loaded from the same origin as your site, not from a separate CDN.

Sadly, TBT is a hard problem to solve and is something React doesn't optimize for. If you want to drop TBT, you should check out Preact. Preact has the same API as React but a smaller JavaScript footprint. They do things differently, but React components are compatible. You can install it by running gatsby recipes preact.
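For reference, the gatsby recipes preact command automates the swap; a manual sketch of what it amounts to, assuming the gatsby-plugin-preact package:

// gatsby-config.js — sketch of swapping React for Preact by hand, assuming
// the gatsby-plugin-preact package (gatsby recipes preact sets this up too).
// First: npm install preact preact-render-to-string gatsby-plugin-preact
module.exports = {
  plugins: [
    `gatsby-plugin-preact`,
    // ...the rest of your plugins
  ],
}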

Something I noticed when profiling gatsbyjs.com & gatsbyjs.org is that we should load Google Analytics etc. a bit later than we do now, to make sure it doesn't become part of TBT.

If we look at .com, by postponing Analytics and GTM and making sure fonts load faster we can already see an improvement of +17. If we add Preact into the mix we see another +6. [screenshot: .com metrics]

We can do the same for .org: we start at a score of 63, and with some optimization of LCP and TBT we can get to 75. [screenshot: .org metrics]

I'm not sure what we should do with this issue. I feel we can close it as there is not much else we can do here. What do you all think?

Jimmydalecleveland commented 4 years ago

@wardpeet Ty for the extra insight.

We have been digging into this matter a lot on a big Gatsby project we have that uses Contentful and will be used across multiple sites for us (Gatsby themes are awesome). I'll share a few findings in case they are helpful to anyone else looking at this.

  1. We have a situation that might not be super common, but I have seen it enough to believe it isn't that unique either, where we had to use useStaticQuery to grab images coming from Contentful and .find one by its identifier. We always knew this was wrong, but we were not noticeably punished for it until the site grew to 300+ images and LH6 came along and smacked us.

The reason for this is that the images are part of Rich Text embeds, and we cannot query for them at the page-query level (it's essentially a JSON field that Contentful has packages to parse). Using the webpack bundle analyzer, we noticed a massive JSON file (about 720 KB) and tracked it down to the data from that query, which webpack grouped into a template we use for most pages. This meant that every user visiting our site downloaded it as part of the chunk for the whole page template, regardless of whether the page used any images.

Big whoopsie on our part, but if anyone else is doing large static queries (which you of course cannot pass parameters to in order to shrink the size), make sure you watch out for those situations and keep an eye on your bundle chunks (see the sketch at the end of this comment).

  2. We had some success just today by using the loading prop for Gatsby Image on images that are above the fold (hero images for us). We've been trying to improve Largest Contentful Paint and this has yielded good results in some initial tests. There is an important part to this that I almost missed: if you set loading="eager" for your topmost image, you might want to set fadeIn={false} as well for that image, because the transition between the base64 placeholder and the fully loaded image takes time, which delays LCP.

Here is the props documentation I'm referring to and the note about fadeIn is at the bottom: https://www.gatsbyjs.org/packages/gatsby-image/#gatsby-image-props

I'd like to share screenshots but I don't know if I'm allowed to, sorry. If you use Chrome DevTools and look at the Performance panel, you are given nice little tags under the "Timings" row for FP, FCP, FMP and LCP. When we switched to critically loading the first image, we not only saw a ~8-10 point performance score increase, but you can also see the LCP tag fire immediately after FMP instead of a second or so later in my case.

Hope that helps anyone troubleshooting this, and thanks to everyone who has responded so far.
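To make point 1 concrete, here is a hypothetical sketch (invented names, not our actual code) of the pattern described: a static query that pulls every Contentful asset into the template chunk just so one can be picked out with .find:

// Anti-pattern sketch (hypothetical names): every Contentful asset ends up in
// this template's data, even on pages that render no images at all.
import { useStaticQuery, graphql } from 'gatsby'

const useRichTextImage = (contentfulId) => {
  const data = useStaticQuery(graphql`
    query {
      allContentfulAsset {
        nodes {
          contentful_id
          fluid(maxWidth: 800) {
            aspectRatio
            src
            srcSet
            sizes
          }
        }
      }
    }
  `)

  // Static queries cannot take variables, so the filtering happens in JS
  // after *all* assets have already been shipped to the browser.
  return data.allContentfulAsset.nodes.find(
    (node) => node.contentful_id === contentfulId
  )
}

export default useRichTextImage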

treyles commented 4 years ago

Something I noticed when profiling gatsbyjs.com & gatsbyjs.org is that we should load Google Analytics etc. a bit later than we do now, to make sure it doesn't become part of TBT.

@wardpeet how are you postponing analytics and GTM?

Undistraction commented 4 years ago

@wardpeet thanks for your reply. It is useful. Perhaps the best output from this issue would be some documentation outlining how to optimise for each of the metrics in the new Lighthouse. I am confident that our site feels fast to users and that Gatsby itself is doing a great job of optimising the site for real users. However if Google's web vitals are going to start informing page rank, getting a good lighthouse score is going to become mission-critical for most sites.

@Jimmydalecleveland we had a similar problem where we needed to load all the items of a resource so we could use data from within markdown to configure a filter (i.e. we couldn't filter using GraphQL), and optimised by using different fragments (a much smaller subset of fields) when loading a full resource vs when loading all resources for filtering. This greatly reduced our JSON and therefore our bundle size.

@treyles you need to be careful delaying the load of Analytics, as it can mean your stats are incomplete. For example, it can mean your reported bounce rate is not accurate. There are some scripts that marketing would not allow us to delay (Pixel, Analytics, Hotjar and therefore Tag Manager), but others, e.g. Intercom, are fine and are a worthwhile optimisation. In terms of how to delay these scripts: the scripts supplied by third parties usually load async, but this alone is not enough. What you will probably need to do is replace these scripts with your own: listen for window.load, then trigger the download. You need to be careful though, as some scripts rely on window.load to initialise, and if you've used it to load the script, it will not fire again, so you need to initialise them manually. For example with Intercom we:
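A generic sketch of the window.load approach described above (the script URL and init call are placeholders, not any particular vendor's actual snippet):

// Sketch: inject a third-party script only after window.load, then initialise
// it manually. The URL and the init() call are placeholders.
const loadThirdPartyScript = () => {
  const script = document.createElement('script')
  script.src = 'https://widget.example.com/sdk.js' // placeholder URL
  script.async = true
  script.onload = () => {
    // The SDK may normally initialise itself on window.load; since that has
    // already fired, call its init entry point manually here instead.
    if (window.exampleWidget) {
      window.exampleWidget.init()
    }
  }
  document.body.appendChild(script)
}

if (document.readyState === 'complete') {
  // window.load has already fired by the time this code runs.
  loadThirdPartyScript()
} else {
  window.addEventListener('load', loadThirdPartyScript)
}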

daydream05 commented 4 years ago

@wardpeet thanks for the very useful insight!

Regarding this solution:

Make sure that when you use fonts & images, they aren't loaded lazily and are loaded as soon as possible. These assets should be loaded from the same origin as your site, not from a separate CDN.

Wouldn't this go against how gatsby-image works? Also, most CMSs handle the image transformation on the server and host the images on their own CDN (which is a good thing, imo). But if we host them on our own site, wouldn't this be counterproductive as well?

Adding to what @Undistraction said, Gatsby is fast, but if it's not fast in Google's eyes then it becomes problematic, especially as they're including this in the page-ranking update next year.

@Jimmydalecleveland I found a way to work with gatsby-image inside Contentful's rich text without that query hack! Here's the gist. The code was copy-pasted from gatsby-source-contentful. Basically you can generate the Contentful fluid or fixed props outside of GraphQL, which is perfect for Contentful's rich text.

I also created a pull request so we can access the APIs directly from gatsby-source-contentful.

t2ca commented 4 years ago

Something just doesn't add up for me. I built a very simple website with about one image per page. I'm using SVGs for images without gatsby-image, and I also tried removing Google Analytics, which didn't make much difference; my performance score was about 50-60.

Something that is really puzzling for me is that only the home page (index.js) is getting a very low score, while other pages like the services page or the contact page are getting a score of ~80. I built this site fairly consistently, so there is not a tremendous difference between pages, and yet for some reason the home page has a score of ~50 while the services page has a score of ~80.

Like I mentioned earlier, with Lighthouse v5 the score was ~90; it just makes no sense at all that a simple site like this would now have a low score of ~50.

KyleAMathews commented 4 years ago

Btw, have any of you tried setting the above-the-fold image as eager? This disables lazy loading and might increase the score. The blur or svg loading effects might be confusing Lighthouse (which if that's the case is a flaw in their algorithm).

Jimmydalecleveland commented 4 years ago

@KyleAMathews We have, and it made a significant increase in performance score and first paints. It is what I outlined as point 2 in my lengthy comment above. Cancelling the fadeIn is what finally made LH happy.

Edit: I, likely ignorantly, feel like the focus on LCP is not the correct approach to take universally with regard to images. Obviously anecdotal, but I feel that a website feels much faster when all the content is loaded and the images fade in afterwards, unless the image is crucial to the content.

One common example would be a Medium article. Sure, you could say that is a design flaw, but most Medium articles (and many other blogs) start with a big ol' image at the top that is just for mood creation or scenery and I don't care if it lazy loads in.

nandorojo commented 4 years ago

Btw, have any of you tried setting the above-the-fold image as eager? This disables lazy loading and might increase the score. The blur or svg loading effects might be confusing Lighthouse (which if that's the case is a flaw in their algorithm).

I’ll try this now.

nandorojo commented 4 years ago

I think I made some good progress here. I got my scores up from 57 to 84 with very basic changes. My LCP went from 12s to 2s.

That said, it is inconsistent. Since making the changes I'll describe below, my score varies from 69 - 84. There's clearly some random variance to the performance scores.

TLDR

First, like @KyleAMathews and @Jimmydalecleveland suggested, I tried setting loading="eager" and fadeIn={false} on my gatsby image components that were above the fold.

Next, I got rid of base64 from my queries.

These made a huge difference.

The good


My query looks like this:


localFile {
  childImageSharp {
    fluid(maxWidth: 800, quality: 100) {
      ...GatsbyImageSharpFluid_withWebp_noBase64
    }
  }
}

And my gatsby-image looks like this:

<Image 
  fluid={localFile.childImageSharp.fluid}
  fadeIn={false} 
  loading="eager"
/>

The less good

My UX on my website now looks much worse. The base64 + fade in provided a great UX. Now, it's a bit choppy. I guess that's a trade-off we have to consider now?

Before & after eager & fadeIn={false}

Here are some side-by-side comparisons of the exact same pages. The only difference is that on the right, the images have loading="eager" and fadeIn={false}.

1. Home page

[screenshot]

LCP down 49%. Performance score up 6 points.


2. Product Page

[screenshot]

LCP down 46%. Performance score up 7 points.

What's weird about this example above: the screenshots on the left have the default gatsby-image behavior (they do fade in, and they don't have eager on.) And yet, even though the performance score is lower, the small screenshots at the bottom make it look like it's loading in faster than the image to the right.

Maybe it's within the margin of error for how they judge performance, or maybe it's a bug on their end related to the fade in effect, as @KyleAMathews mentioned.


After setting _noBase64 in image fragments

Here are the same screens as in the example above. They all have the loading="eager" and fadeIn={false} props on Gatsby Image. Also, the image fragments in the GraphQL are GatsbyImageSharpFluid_withWebp_noBase64.

It's a bit inexplicable, but I'm running a lighthouse test on the exact same page over and over, and got 84, 75, and 69.

Kinda weird, but in any case, it brought my score up.

[screenshot]

I think the Lighthouse algorithm was feeling unusually generous here lol ^


[screenshots]

t2ca commented 4 years ago

After further investigation, I had discovered that lighthouse was complaining about a specific element that was impacting the LCP score.

All I did was simply move this element which is just a paragraph and my score jumped above 80. Go figure. Not exactly sure why moving a paragraph increased my score from ~50 to ~80.

[screenshot]

Jimmydalecleveland commented 4 years ago

@nandorojo Thanks for the thorough write-up. We haven't tried removing base64 completely, but it would be a bummer if we had to. We also only put eager loading on the first image of the page, so if you aren't already doing that, it's worth a try if you can control it.

michaeljwright commented 4 years ago

After further investigation, I had discovered that lighthouse was complaining about a specific element that was impacting the LCP score.

All I did was simply move this element which is just a paragraph and my score jumped above 80. Go figure. Not exactly sure why moving a paragraph increased my score from ~50 to ~80.

[screenshot]

@t2ca This is what I got (albeit mine was a header tag). But where did you move it to?

t2ca commented 4 years ago

@t2ca This is what I got (albeit mine was a header tag). But where did you move it to?

@michaeljwright The first thing I did was delete the paragraph and check the Lighthouse score. After I removed the paragraph, my score increased by about 20 points. I repeated the test many times just to make sure. I also put the paragraph back and did further tests, and my score was lower once again.

Finally, I decided just to move the paragraph. I'm using framer-motion inside a div, and I just moved the paragraph outside of that div. This gives me the same result as when I deleted the paragraph.

daydream05 commented 4 years ago

@t2ca I think LCP penalizes any animations in our hero pages which is a bummer.

Here's my LCP scores where paragraph tag is the LCP

With animation:

[screenshot]

Without animation:

[screenshot]

t2ca commented 4 years ago

@t2ca I think LCP penalizes any animations in our hero pages which is a bummer.

Here's my LCP scores where paragraph tag is the LCP

With animation:

[screenshot]

Without animation:

[screenshot]

@daydream05 Thank you for confirming!

wardpeet commented 4 years ago

@daydream05

Wouldn't this go against how gatsby-image works? Also, most CMSs handle the image transformation on the server and host the images on their own CDN (which is a good thing, imo). But if we host them on our own site, wouldn't this be counterproductive as well?

No, because gatsby-image works with local images too; there's no need to host them on a different CDN. It all comes down to optimizing your first render (what's in the viewport). Hosting images on a different domain/CDN means you have to open a new connection (DNS resolution, TLS handshake, ...), which can take up to 300ms on a slow device, and then you still have to download the image.
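If a third-party image CDN can't be avoided, one common mitigation for that connection-setup cost is a preconnect hint; a sketch using react-helmet with a placeholder CDN origin (not something suggested in the original comment):

// Sketch: warm up the connection to an unavoidable third-party image CDN so
// the DNS/TLS round trips overlap with other work. The origin is a placeholder.
import React from 'react'
import { Helmet } from 'react-helmet'

const Head = () => (
  <Helmet>
    <link rel="preconnect" href="https://images.example-cdn.com" />
    {/* dns-prefetch as a fallback for browsers without preconnect support */}
    <link rel="dns-prefetch" href="https://images.example-cdn.com" />
  </Helmet>
)

export default Head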

Adding to what @Undistraction said, Gatsby is fast, but if it's not fast in Google's eyes then it becomes problematic, especially as they're including this in the page-ranking update next year.

We'll be optimizing Gatsby even more to make sure our users can get 100's for free.

@t2ca I think LCP penalizes any animations in our hero pages which is a bummer.

That's expected because your screen never stops painting. Normally LCP should ignore CSS animations, but it depends on how you do the animations.

@t2ca

If you can show us the site, we can help figure out how to improve it, but the fix is probably setting the image to eager.

@nandorojo

Awesome writeup! Any chance you can give us links to those lighthouse reports?

DannyHinshaw commented 4 years ago

That's expected because your screen never stops painting...

@wardpeet would you mind expanding on this please?

kuworking commented 4 years ago

@DannyHinshaw I received this explanation from the Lighthouse team: "What I think is going on is that LCP does care about images being fully loaded, and the time that's reported is when the image is completely loaded, not when it is first visible. This time can be different due to progressive images and iterative paints."

And then this link, perhaps of help https://web.dev/lcp/#when-is-largest-contentful-paint-reported

dimadotspace commented 4 years ago

In the meantime, what you can also try is changing your Largest Contentful Paint (LCP) element from an image to text (if you have the luxury), preloading/prefetching fonts, and lazy loading the images. In my case that meant reducing the size of the hero image on mobile, which boosted our score back into the upper 90s while the issue is being discussed.

import semiBoldFont from 'static/fonts/SemiBold-Web.woff2';
...
<Helmet>
  {/* fonts need rel="preload" with as="font" and crossOrigin for the early fetch to be reused */}
  <link rel="preload" href={semiBoldFont} as="font" type="font/woff2" crossOrigin="anonymous" />
</Helmet>

https://lighthouse-dot-webdotdevsite.appspot.com//lh/html?url=https%3A%2F%2Fwhatsmypayment.com%2F https://developer.mozilla.org/en-US/docs/Web/HTML/Preloading_content