10up / wp-scaffold

10up WordPress project scaffold.
MIT License

Automated Performance Monitoring #58

Open joesnellpdx opened 3 years ago

joesnellpdx commented 3 years ago

Enable automated performance monitoring in the scaffold

One option is to utilize Perfume.js, but others should be explored.

Perfume.js tutorial

Alternatives? - needs exploration

xavortm commented 3 years ago

The initial exploration of Perfume.js (and others) I did was aimed at making it work with GA and reporting historical data, but that didn't work out well for me (not enough GA experience, I believe). Adding that to the scaffold would also require a GTM script and instance to be connected, which I doubt is the idea here; so this is about reporting to the developer on the spot. Some notes on that:

The alternative I can see would be https://www.npmjs.com/package/web-vitals, as it reports the metrics most often tied to SEO performance, includes a lot of what Perfume.js measures, and has broader support and a larger developer community.

Looking at its docs, it can also send data to GA for tracking, which can be useful at times, but that doesn't work out of the box; it requires tweaks to the reporting/measuring setup in GA.
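For context, the GA wiring mentioned above could look roughly like the sketch below. The `toGaEvent` helper name and the CLS-scaling convention are illustrative assumptions (the scaling mirrors Google's published reporting examples); the `on*` handler names follow web-vitals v3, and older releases used `get*` instead.

```javascript
// Sketch: forwarding web-vitals metrics to Google Analytics.
// Assumes the `web-vitals` npm package and a global `gtag` function
// exist in the browser; neither ships with the scaffold.

// Pure helper: turn a web-vitals metric object into a GA event payload.
// CLS values are fractions (e.g. 0.08), so they are scaled by 1000
// before rounding so GA receives an integer.
function toGaEvent(metric) {
  return {
    name: metric.name, // e.g. 'LCP', 'CLS', 'FID'
    params: {
      value: Math.round(metric.name === 'CLS' ? metric.value * 1000 : metric.value),
      metric_id: metric.id, // lets the dashboard group events per page load
    },
  };
}

// Browser-only wiring, kept in a function so the helper above can be
// exercised in Node without a DOM.
async function reportWebVitals() {
  const { onCLS, onLCP, onFID } = await import('web-vitals');
  const send = (metric) => {
    const evt = toGaEvent(metric);
    window.gtag('event', evt.name, evt.params);
  };
  onCLS(send);
  onLCP(send);
  onFID(send);
}
```

During development, the same `send` callback could just as easily log to the console instead of calling `gtag`, which is all the "report to the developer on the spot" case would need.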

While researching, I also found and tested https://www.npmjs.com/package/sitespeed.io, which seems to work out of the box quickly and provides results like these: [screenshot]. It stores them in the repository, which can be git-ignored.

What is sitespeed.io good for? It is usually used in two different areas:

  • Running in your continuous integration to find web performance regressions early: on commits or when you move code to your test environment
  • Monitoring your performance in production, alerting on regressions.

And it's also possible to set up a dashboard with Docker to show the results and previous measurements: https://www.sitespeed.io/documentation/sitespeed.io/performance-dashboard/. There is a cost breakdown near the end of that page for hosting on AWS, but local testing is also doable, as shown in the example above.

— So I think that the perfume.js and web-vitals npm packages, combined with console reporting, could help during development, but in my experience I rarely needed to look at this all the time; in fact, I only run tests manually when I am fixing performance issues. For monitoring, dashboard reports with historical data would be better, as they can pinpoint the commits (or content changes, of course) that introduce regressions.

An aside: measurement is best done on more than one page, which is a config the developer might have to set. If we add it to the scaffold, I see a good benefit in running a report pre-commit and flagging when a metric drops below a given threshold. I wonder, however, if this is more of a 10up-toolkit concern?

joesnellpdx commented 3 years ago

@dainemawer ... see @xavortm 's comments above.

rdimascio commented 2 years ago

I think we need to refine what we mean by performance monitoring in the context of the scaffold. While I think setting up Perfume.js w/ GA or SiteSpeed w/ Docker/Grafana are great options for remote performance monitoring, I feel like they are only going to be used on a small percentage of projects. And performance monitoring in a local environment during development is rarely reflective of performance in the real world.

joesnellpdx commented 2 years ago

@rdimascio @xavortm

I agree, we need to define this a little more clearly.

We do have remote performance monitoring as part of support monitor - see Web Vitals.

And we'd also like mechanisms to help measure in-flight performance budgets and raise flags when pertinent.

The key for me: what tooling can we add to best enhance any current initiatives or goals from @dainemawer, to support him and, therefore, our team efforts?

Daine, what are your thoughts? What would be your goal or desired deliverable here?

dainemawer commented 2 years ago

Agreed, this ticket does require some more clarity. So here goes!

Web Vitals

Web Vitals is a great library that was released around the same time as the initial Core Web Vitals initiative. It is a tiny library (~1 KB), which is great; however, it is specifically focused on tracking a fixed set of vitals (LCP, FID, CLS, TTFB, FCP). The other upside of the library is that it provides engineers a way to send data to analytics platforms.

Let me say before venturing further, that there is absolutely nothing wrong with using this library. Implementation / installation is very easy and direct. It also has a great support structure considering it's built by the Chrome team.

Perfume.js

Perfume is far more comprehensive in terms of the data it reports. It is double the size of Web Vitals (~2 KB), but that makes sense to me, as it comes with far more features. It also comes with an API for sending data points to GA.

Perfume not only handles Core Web Vitals but also metrics that are about to become more important (like First Paint). By default it also reports on a host of metrics (data enrichment) that are generally hidden, unknown, or overlooked. I can't list them all (there are too many), but to name a few, Perfume can help us understand and collect data for:

  1. Devices: whether the device is low-end or high-end
  2. Network Resources: DNS, TCP, download, and Time to First Byte of network requests
  3. Resource Timing: analytics on document-dependent resources (stylesheets, fonts, etc.)
  4. Element Timing: when image and text elements are displayed on the screen
  5. A holistic web vitals score, with far more detailed web vital metrics than web-vitals.js

In my opinion, Perfume is the more holistic and forward-thinking option for collecting data that can help improve page experience. It gives us tools that can really improve the user experience by enabling decisions we previously couldn't make, using real data. It will go a long way toward making our sites more accessible in developing markets and reining in performance on devices with fewer resources / less RAM.
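The metrics above also lend themselves to the "flag when a metric drops below a threshold" idea raised earlier in the thread. A minimal sketch follows: the `BUDGETS` table and `checkBudget` helper are invented for illustration, while `analyticsTracker` is Perfume's documented reporting hook (the exact import shape varies by Perfume version).

```javascript
// Sketch: console reporting with per-metric budgets via Perfume.js.
// BUDGETS and checkBudget are illustrative assumptions, not Perfume APIs.

// Hypothetical budgets: milliseconds for timing metrics, unitless for CLS.
const BUDGETS = { fcp: 1800, lcp: 2500, cls: 0.1, ttfb: 800 };

// Pure helper: grade a reported value against its budget, if one exists.
function checkBudget(metricName, value) {
  const budget = BUDGETS[metricName.toLowerCase()];
  if (budget === undefined) return 'no-budget';
  return value <= budget ? 'pass' : 'fail';
}

// Browser-only wiring, guarded in a function so checkBudget stays
// testable in Node. The default-export import matches older Perfume
// releases and may differ in newer ones.
async function initPerfume() {
  const { default: Perfume } = await import('perfume.js');
  new Perfume({
    analyticsTracker: ({ metricName, data }) => {
      console.log(`[perf] ${metricName}: ${data} (${checkBudget(metricName, data)})`);
    },
  });
}
```

The same `checkBudget` helper could back a pre-commit hook or a GA custom dimension; the console version is just the lowest-friction starting point for local development.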

Both libraries should be reviewed with the following considerations:

  1. Perfume and Web Vitals are not applicable to local development or the optimisation of performance pitfalls. They exist to report field data, i.e. to be used as RUM (Real User Monitoring) tools. They are both lightweight libraries that 10up could use to easily report data and send it to a dashboard (GTM, GA, custom, Firebase, etc.)
  2. Data is meaningless unless it's conveyed in a meaningful way. That means we need to look at how to convey the data recorded in a way that would be helpful to clients.
  3. Will there ever be a point where 10up looks at using these libraries to record telemetry? It's one thing we have never really considered, as far as I'm aware. We could use these libraries both internally and externally, helping us pinpoint where our engineering is falling short on performance.
  4. Neither of these libraries should be used for debugging or identifying performance issues. Lighthouse does a far better job of this. Whether that's used through Chrome DevTools or the CLI is up for debate.

As mentioned above, the engineering effort here is relatively minor. It's how we collate and interpret the data in GA or GTM that is going to make the difference.