checkly / public-roadmap

Checkly public roadmap. All planned features, updates and tweaks.
https://checklyhq.com

Access to the complete Lighthouse metrics #79

Open dschmidtadv opened 4 years ago

dschmidtadv commented 4 years ago

💡 For general support requests and bug reports, please go to checklyhq.com/support

Is your feature request related to a problem? Please describe. We are not able to access Lighthouse report details from the API when executing browser checks.

Describe the solution you'd like We would like to be able to access Lighthouse report details so we can fail tests when performance degrades.

Describe alternatives you've considered Some data is available through performance.getEntriesByName(); we are using this as a workaround.
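That workaround can be sketched roughly like this (the helper name and entry shape are illustrative, not a Checkly API; in a real browser check the entries would be collected in the page context, e.g. via page.evaluate()):

```javascript
// Hypothetical helper for the workaround: pull timing details out of the
// entries that performance.getEntriesByName() returns for a given URL.
// The entry shape mirrors PerformanceResourceTiming; values are in ms.
function resourceTiming(entries, name) {
  const entry = entries.find((e) => e.name === name);
  if (!entry) return null;
  return {
    // Time spent waiting for the first response byte.
    ttfb: entry.responseStart - entry.requestStart,
    // Total time from request start to the end of the response.
    duration: entry.duration,
  };
}

// Illustrative entries, as a browser check might collect them.
const entries = [
  { name: 'https://example.com/app.js', requestStart: 10, responseStart: 90, duration: 150 },
];
console.log(resourceTiming(entries, 'https://example.com/app.js'));
// → { ttfb: 80, duration: 150 }
```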

tnolet commented 4 years ago

@dschmidtadv thanks for reporting this. We will look into whether we can enable this. Our main concern is how we can make this a nice experience for the user. Any extra insight into your use case would be very valuable.

coderkind commented 3 years ago

This would be a really valuable addition (I've noticed it being unavailable from the API recently), but I guess the issue is how verbose the JSON response might be if it included that information.

You're able to console.log accessibility info at the moment, which is seemingly unavailable from the API otherwise, e.g.

const snapshot = await page.accessibility.snapshot();
console.log(JSON.stringify(snapshot, null, 2));

StanLindsey commented 3 years ago

Ah man I'd love this.

Though one option is just surfacing the information for us to use in our Puppeteer scripts. A first-party solution that integrates with dashboards/screenshots etc. would be incredible.

E.g. the status pages feature v2 could include rolling averages for first paint etc. Pew pew.
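The rolling-average idea is simple to sketch (window size and names here are made up for illustration, not a Checkly feature):

```javascript
// Sketch of a status-page rolling average: average the last N first-paint
// samples so one noisy run doesn't swing the displayed number.
function rollingAverage(samples, windowSize) {
  const window = samples.slice(-windowSize);
  return window.reduce((sum, v) => sum + v, 0) / window.length;
}

// First-paint timings (ms) from the five most recent check runs.
const firstPaintMs = [820, 790, 1400, 810, 830];
console.log(rollingAverage(firstPaintMs, 3)); // averages the last three runs
```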

tnolet commented 3 years ago

@StanLindsey @coderkind I hear you and we are looking into this! Our biggest concern with offering this service is the problem of "variability". See https://developers.google.com/web/tools/lighthouse/variability

This means we have to first find an infrastructure solution that gives dependable, stable metrics without making it ridiculously expensive: this is a problem with many services that run Lighthouse in Lambda or equivalent FaaS infrastructure.

coderkind commented 3 years ago

> This means we have to first find an infrastructure solution that gives dependable, stable metrics without making it ridiculously expensive

@tnolet does the infrastructure need to be more dependable and stable than the one that currently allows you to use Puppeteer/Playwright to take screenshots? I appreciate there's variability between running Lighthouse tests (even run off a local machine).

Regarding cost; is there a top-level of functionality from Lighthouse you might expose (kinda like how you're just allowing Chromium in Playwright right now)? I see options in the npm docs to limit certain checks, e.g.

--only-audits
--only-categories
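For the programmatic (npm package) equivalent, those CLI flags map onto a flags object. A hypothetical sketch of restricting a run to performance audits only:

```javascript
// Flags one might pass to a programmatic lighthouse(url, flags) run to
// mirror --only-categories on the CLI (sketch only; an actual run requires
// the `lighthouse` npm package and a running Chrome debugging session).
const lighthouseFlags = {
  onlyCategories: ['performance'], // skip accessibility, SEO, best-practices, PWA
  output: 'json',
};

// e.g. const { lhr } = await lighthouse('https://example.com', { port, ...lighthouseFlags });
```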
tnolet commented 3 years ago

> This means we have to first find an infrastructure solution that gives dependable, stable metrics without making it ridiculously expensive
>
> @tnolet does the infrastructure need to be more dependable and stable than the one that currently allows you to use Puppeteer/Playwright to take screenshots? I appreciate there's variability between running Lighthouse tests (even run off a local machine).
>
> Regarding cost; is there a top-level of functionality from Lighthouse you might expose (kinda like how you're just allowing Chromium in Playwright right now)? I see options in the npm docs to limit certain checks, e.g.
>
> --only-audits
> --only-categories

@coderkind those are great suggestions and we are considering all options. The workloads are pretty different, though: Lighthouse puts a strong emphasis on performance, whereas our current checks emphasize functionality.

tnolet commented 3 years ago

@dschmidtadv @coderkind @StanLindsey we are taking more and more steps in the direction of supporting performance metrics. I'm sure you will already be somewhat satisfied with some features we are rolling out soon, but I would love to get your thoughts on "next steps" and how we can do better here. Would it be OK if I contacted you for a short chat about this?

ZainVirani commented 2 years ago

@tnolet do you have any updates on the status of this work? I can see on https://www.checklyhq.com/docs/browser-checks/tracing-web-vitals/ that, for example, TTI (time to interactive) is missing. Potentially, TBT lacks context without a TTI measurement also.

Additionally, CLS is listed as one of the 5 metrics offered; however, it is also listed under a section describing what cannot be measured. Let me know if I missed something here.

tnolet commented 2 years ago

@ZainVirani

1) we don't have the full Lighthouse tests available because we aren't currently set up to reliably and consistently get the full range of results. Lighthouse is very CPU and memory intensive and not recommended to run on the typical infrastructure we use right now.

2) we only measure TBT — which is an indicator for TTI — because TTI will always require user interaction, something we cannot 100% rely on to be part of your scripts. This is the reason TTI is more useful in a RUM situation (like Vercel provides) where actual users are interacting with your page. For a synthetic solution like ours, TBT is the more dependable metric.

3) The section on CLS just notes that in some cases we cannot detect CLS. This is due to the nature of CLS being a measure over time. However, in the vast majority of cases we can detect it. https://www.checklyhq.com/docs/browser-checks/tracing-web-vitals/#why-are-some-web-vitals-not-reported
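For what it's worth, points 2 and 3 follow from how the metrics are defined. A simplified sketch (TBT per its standard 50 ms threshold; CLS shown here as a plain sum, whereas the current definition groups shifts into session windows):

```javascript
// TBT sums the portion of each long task beyond 50 ms, so it can be computed
// from a trace without any user interaction -- which is why it suits a
// synthetic check where TTI does not.
const BLOCKING_THRESHOLD_MS = 50;

function totalBlockingTime(longTaskDurationsMs) {
  return longTaskDurationsMs
    .filter((d) => d > BLOCKING_THRESHOLD_MS)
    .reduce((sum, d) => sum + (d - BLOCKING_THRESHOLD_MS), 0);
}

console.log(totalBlockingTime([120, 70, 45])); // → (120-50) + (70-50) = 90

// CLS accumulates layout-shift scores over time (ignoring shifts caused by
// recent user input), so a short check run can miss shifts that happen later.
// The entry shape mirrors the Layout Instability API's LayoutShift entries.
function cumulativeLayoutShift(layoutShiftEntries) {
  return layoutShiftEntries
    .filter((e) => !e.hadRecentInput)
    .reduce((sum, e) => sum + e.value, 0);
}

const shifts = [
  { value: 0.05, hadRecentInput: false },
  { value: 0.2, hadRecentInput: true }, // user-initiated, excluded
  { value: 0.03, hadRecentInput: false },
];
console.log(cumulativeLayoutShift(shifts)); // sums the two non-input shifts
```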

Hope this helps!