adamsilverstein opened this issue 3 years ago
I love this idea - highlighting performance and potential issues as part of the content creation flow makes sense.
unpublished posts could be previewed with a temporary token, or left off
This is a great consideration - I think it would be vital to include unpublished posts, since highlighting performance problems may be most impactful when done before publishing. We could go down the route of a temporary token (like Public Post Preview), but I'd be worried about every post having a public version around even when not published - for 99% of posts it's probably fine to have a version that is publicly accessible only via a temporary token, but there is probably some sensitive content out there, or sites for which this would be a dealbreaker.
For the above reason, maybe it would be more appropriate to use Lighthouse client-side instead of through e.g. PageSpeed Insights API which requires a public URL?
Programmatic access doesn't seem to involve running in the browser, does it? Since it's not feasible to expect Chrome to be installable as an executable on the server, any such analysis would have to be performed in the user's own browser, right?
Using Lighthouse client-side would be problematic because, well... Lighthouse is not available by default in all browsers. Perhaps it would be possible to use the PerformanceNavigationTiming API? It's supported in all modern browsers, so it should be possible to use it for some basic performance measurements...
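For a rough idea, a minimal sketch of reading a few basic timings from a PerformanceNavigationTiming entry (the particular fields shown are just examples of what's available):

```js
// Minimal sketch: read some basic load timings from the Navigation Timing
// Level 2 API; values are milliseconds relative to the navigation start.
const [nav] = performance.getEntriesByType('navigation');

if (nav) {
  console.table({
    'TTFB (ms)': nav.responseStart,
    'DOM content loaded (ms)': nav.domContentLoadedEventEnd,
    'Load event (ms)': nav.loadEventEnd,
    'Transfer size (bytes)': nav.transferSize,
  });
}
```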
Yes. Certain aspects of Lighthouse could be implemented client-side. For example, PerformanceObserver can be used to determine lab-measurable metrics such as LCP and CLS.

Calculating FID may not be practical, however, since it requires user interaction and is best measured in the field. Instead, it could report Total Blocking Time (TBT), since that is lab-measurable (although I didn't immediately find a PerformanceObserver example).
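As a rough approximation (not the exact Lighthouse definition, which only counts long tasks between FCP and TTI), long tasks exposed via the Long Tasks API could be summed:

```js
// Rough TBT approximation: sum the portion of each long task beyond 50 ms.
// Lighthouse additionally restricts this to the window between FCP and TTI,
// which is omitted here for simplicity.
let totalBlockingTime = 0;

const tbtObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const blocking = entry.duration - 50;
    if (blocking > 0) {
      totalBlockingTime += blocking;
    }
  }
});

// buffered: true replays long tasks recorded before the observer was
// created (where the browser supports it).
tbtObserver.observe({ type: 'longtask', buffered: true });
```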
Showing these metrics could be a nice first step. It could be done by loading up a preview of the post in an iframe and obtaining the CWV metrics to display in the pre-publish panel.
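A minimal sketch of what a script injected into the iframed preview might look like, assuming it posts results back to the editor frame (the message shape and the fixed timeout are placeholders):

```js
// Runs inside the iframed preview: collect LCP and CLS, then report them to
// the parent (editor) frame. Message shape and timing are illustrative only.
let lcp = 0;
let cls = 0;

new PerformanceObserver((list) => {
  const entries = list.getEntries();
  // The most recent LCP candidate reported so far is the current LCP value.
  lcp = entries[entries.length - 1].startTime;
}).observe({ type: 'largest-contentful-paint', buffered: true });

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Shifts immediately following user input are excluded from CLS.
    if (!entry.hadRecentInput) {
      cls += entry.value;
    }
  }
}).observe({ type: 'layout-shift', buffered: true });

window.addEventListener('load', () => {
  // A real implementation would need a better "page has settled" signal
  // than a fixed timeout.
  setTimeout(() => {
    window.parent.postMessage({ type: 'preview-cwv', lcp, cls }, '*');
  }, 3000);
});
```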
Nevertheless, having metrics alone would not be super helpful, since it wouldn't give users any way to act on the results to improve their scores. In the context of Gutenberg, I think for this to be helpful it would depend on correlating which blocks in the content are negatively impacting CWV, and then directing the user to those blocks so that they may consider using something different if possible.
I will note that this is an active area of research for the AMP plugin team. While up until now the AMP plugin has been attributing AMP validation errors to blocks in the editor, we are expanding to more general page experience (PX) analyses, given that the focus on “AMP validity” is lessening with the development of Bento.
When using PerformanceObserver client-side, the following limitations should be kept in mind:

- the audit is done on a logged-in user, which is different than what an anonymous visitor would get
- the audit measures the entire Chrome process & context, which means it measures not only the page, but all the Chrome extensions and all the Chrome background processes at the same time
- the audit depends on your local machine's resources and network stack (so someone from India has completely different metrics than someone from the US, depending on where the server is)

It is therefore debatable how reliable the extracted measurements actually are.
Can an iframe be configured in such a way that it counters some of these limitations?
the audit is done on a logged-in user, which is different than what an anonymous visitor would get
For one thing, the iframed page could be loaded with a query parameter to nullify the logged-in user. That would prevent the admin bar from being displayed and make the page render as an anonymous visitor would see it.
the audit measures the entire Chrome process & context, which means it measures not only the page, but all the Chrome extensions and all the Chrome background processes at the same time
True, but this could actually be a good thing. Visitors will also have Chrome extensions and background processes running, so perhaps PerformanceObserver could reflect the performance a user may actually experience when they don't have just a single Chrome tab open loading only that one website.
the audit depends on your local machine's resources and network stack (so someone from India has completely different metrics than someone from the US, depending on where the server is)
This one is tricky. Not only would the network connection not be throttled during the audit, but the cache would also most likely be primed. The only way I can think of to simulate accessing the page as a first-time visitor on a poor network would be to use a service worker to intercept all requests. Another possibility would be to inject random numbers into the URLs for all page assets. But this may be overkill and perhaps insufficient.
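If the service worker route were explored, the core of it could be as small as a fetch handler that bypasses the HTTP cache (this only addresses the primed cache, not network throttling, and scoping it to the audited preview is left out here):

```js
// sw.js (sketch): force every request in the audited preview to skip the
// HTTP cache, roughly simulating a first-time visit.
self.addEventListener('fetch', (event) => {
  event.respondWith(fetch(event.request, { cache: 'no-store' }));
});
```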
For any CWV information being surfaced, I think that instead of following the thresholds determined by Google, we could consider more of a pass/fail scheme, or error/warning/info.
For example, PerformanceObserver will report layout shifts of elements on a page regardless of the connection speed or cache state. For first-time visitors, the layout shifts will occur over a longer period of time, while for returning visitors they will be shorter (due to caches). In both cases, a layout shift will happen, even if for the returning visitor it may be less perceptible because it happens right after the page loads. Nevertheless, we can still capture the element that had a layout shift and, depending on how much shift there is, mark the element as either a warning or an error.
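A sketch of what that attribution could look like, using the sources exposed on layout-shift entries (the warning/error cutoffs below are made up for illustration, not the Google-defined thresholds):

```js
// Accumulate layout shift per DOM node via the LayoutShiftAttribution API,
// then bucket each node. The 0.05 / 0.01 cutoffs are illustrative only.
const shiftsByNode = new Map();

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.hadRecentInput) {
      continue;
    }
    for (const source of entry.sources || []) {
      if (source.node) {
        shiftsByNode.set(
          source.node,
          (shiftsByNode.get(source.node) || 0) + entry.value
        );
      }
    }
  }
}).observe({ type: 'layout-shift', buffered: true });

function classifyShift(shift) {
  if (shift >= 0.05) return 'error';
  if (shift >= 0.01) return 'warning';
  return 'info';
}
```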
Analyzing LCP and TBT is more difficult due to the user's primed cache. For those, instead of using PerformanceObserver, it may be better to do a DOM analysis to check for red flags. For example, if a block causes a script to be printed which doesn't have async/defer, then this could be a warning related to TBT. Similarly, if a block has an image which lacks responsive sizes, this could be a warning for LCP.
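As a sketch, such a DOM analysis could be as simple as a couple of selector checks on the rendered preview document (the heuristics here are deliberately simplistic and would need refinement):

```js
// Static checks on the rendered preview document. A parser-blocking script
// is flagged as a TBT concern, an image without responsive markup as an
// LCP concern.
function auditPreviewDocument(doc) {
  const warnings = [];

  doc.querySelectorAll('script[src]').forEach((script) => {
    if (!script.async && !script.defer && script.type !== 'module') {
      warnings.push({
        metric: 'TBT',
        node: script,
        message: 'Script is loaded without async or defer.',
      });
    }
  });

  doc.querySelectorAll('img').forEach((img) => {
    if (!img.srcset) {
      warnings.push({
        metric: 'LCP',
        node: img,
        message: 'Image lacks responsive srcset/sizes.',
      });
    }
  });

  return warnings;
}
```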
It could be worth using https://www.npmjs.com/package/web-vitals - it already has a built-in API for sending data to an endpoint or dashboard. I think all the points above make a ton of sense and they are all valid, but maybe we're a bit skewed on context.
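For illustration, roughly how that wiring looks (assuming the v3+ API names; older versions of the package exported getCLS/getLCP instead, and the /cwv-report endpoint is hypothetical):

```js
import { onCLS, onLCP, onTTFB } from 'web-vitals';

// Send each finalized metric to a (hypothetical) collection endpoint.
function report(metric) {
  const body = JSON.stringify({
    name: metric.name,
    value: metric.value,
    id: metric.id,
  });

  // sendBeacon survives page unloads better than a plain fetch.
  if (!(navigator.sendBeacon && navigator.sendBeacon('/cwv-report', body))) {
    fetch('/cwv-report', { method: 'POST', body, keepalive: true });
  }
}

onCLS(report);
onLCP(report);
onTTFB(report);
```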
As someone mentioned, the resources available to your machine, bandwidth, and all that can have a serious effect on scoring. Ideally, Lighthouse should be used to diagnose performance pitfalls, not to measure metrics.
Perhaps it's better to capture vitals data here and place it in the context of CrUX data? WebPageTest and PSI have started providing signals that are relative to CrUX data, and that's far more useful (and easier to digest). At the end of the day, that's what is going to set you apart, right? Your page speed signals are competing with sites around the world.
It's also important that, in whatever shape or form this data is presented, it is communicated in a manner that does not diverge from the guiding principles of Web Vitals:
"Site owners should not have to be performance gurus in order to understand the quality of experience they are delivering to their users. The Web Vitals initiative aims to simplify the landscape, and help sites focus on the metrics that matter most, the Core Web Vitals."
What problem does this address?
When publishing or updating a post, users check the preview screen to see how their page will look. It might help users if they could also get a sense of how their page would perform.
What is your proposed solution?
Questions
Does this type of feature belong in a plugin? I would love to hear from the project maintainers whether they think this type of feature can be built into Gutenberg directly or is better served by plugins. I believe existing filters would provide everything a plugin would need to add such a feature.