Hi @peterp, can I have more information about this issue? I'm wondering if I can help with this.
Hi @renansoares! Thanks for jumping in here. I don't know if we actually know more about this. Overall we'd like to be able to do the following:
But, to be honest, we're not tracking this or paying much attention to it a lot of the time. So it's unclear 1) how much time this would take compared to 2) the value it would add.
Currently, we've just been doing this occasionally using projects like the Example Blog deployed on Vercel and Netlify.
Do you have any experience here?
Hi @thedavidprice, thanks for your response.
I have some experience with Lighthouse, and I'm looking for something to help with next.
I see three ways to analyze Core Web Vitals metrics:
I'm thinking about how we could use these libraries to help users measure performance and to make Redwood collaborators aware of the framework's performance.
Let's discuss it. I'm happy to contribute if I can 🎉
Wow, lots of great ideas here. And a "yes" to moving something forward. I'll do my best to identify priority, actionable next steps. But also know I can become a bit of a bottleneck as I'm inconsistent in my focus outside Redwood v1 roadmap priorities. That said, if you feel momentum is slipping, there's nothing that gets momentum flowing again like a draft PR 😆 No permission needed.
--> What would be our goal for Lighthouse measuring + monitoring?
I'd suggest these:
What I don't want to do is fall into the rabbit-hole of chasing Lighthouse 100s and/or making Lighthouse scores the focus of performance. I think it's a very informative and useful tool. I don't think it should ever become "the point".
This needs more discussion. So what say you all?
I like this idea a lot and suggest it be the first step in this project. If you agree, how about creating an RFC Issue with an initial outline of what could become redwoodjs.com/docs/performance? There's a lot more to come (and do) by way of performance, including the use of Prerender, which could be referenced (it's currently its own stand-alone doc). But what about SEO, OG data, caching, end-user monitoring and analytics, etc.? Definitely not saying we start with all of these; just adding ideas to what a Performance doc could become.
For now, I think referencing something like web-vitals (which looks really interesting) with usage instructions in the Performance doc would be the right first step. I don't think we'd be ready to add that to the codebase.
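For anyone picking this up, here's a minimal sketch of the kind of usage instructions that doc could include, assuming the web-vitals v3+ API (`onCLS`/`onLCP` etc.; older versions exposed `getCLS`/`getLCP` instead). The module path and the `/analytics/vitals` endpoint are placeholders, not existing Redwood APIs:

```js
// web/src/reportWebVitals.js (hypothetical module, imported once from App.js)
import { onCLS, onFCP, onLCP, onTTFB } from 'web-vitals'

function sendToAnalytics(metric) {
  // Ship each metric to whatever backend/analytics provider you use.
  const body = JSON.stringify({
    name: metric.name, // e.g. 'LCP'
    value: metric.value,
    id: metric.id,
  })
  if (navigator.sendBeacon) {
    // sendBeacon survives page unloads, so metrics aren't lost on navigation.
    navigator.sendBeacon('/analytics/vitals', body)
  } else {
    fetch('/analytics/vitals', { method: 'POST', body, keepalive: true })
  }
}

// Each callback fires when its metric is finalized for the current page.
onCLS(sendToAnalytics)
onFCP(sendToAnalytics)
onLCP(sendToAnalytics)
onTTFB(sendToAnalytics)
```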
This is a very interesting idea and could possibly be implemented in two ways:
Take a look at our current GitHub Action E2E runner and the corresponding Cypress E2E here.
It spins up a new project installation and runs through steps in the tutorial. Given some code mods, do you think we could also include some kind of benchmarking + output for Lighthouse against this project? The advantage is that this runs against every PR.
Eventually, we'd like to be monitoring an assortment of performance-related stats with each PR. Take a look at how Next.js does it via the example here. Maybe Lighthouse could be the beginning of this kind of automated performance benchmarking for Redwood?
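To make that concrete, here's a rough sketch of a script the E2E workflow could run once the test project is being served (Redwood's dev server defaults to port 8910). It assumes the `lighthouse` and `chrome-launcher` npm packages; the script path and the threshold are made up for illustration:

```js
// scripts/lighthouse-check.mjs (hypothetical)
import lighthouse from 'lighthouse'
import * as chromeLauncher from 'chrome-launcher'

const url = process.env.LH_URL ?? 'http://localhost:8910/'
const MIN_PERFORMANCE_SCORE = 0.8 // arbitrary example threshold

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] })
const result = await lighthouse(url, {
  port: chrome.port,
  output: 'json',
  onlyCategories: ['performance'],
})
await chrome.kill()

// Lighthouse scores are 0..1; multiply by 100 for the familiar scale.
const score = result.lhr.categories.performance.score
console.log(`Lighthouse performance for ${url}: ${Math.round(score * 100)}`)

// Fail the PR check on a regression below the threshold.
if (score < MIN_PERFORMANCE_SCORE) {
  process.exitCode = 1
}
```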
I haven't even begun this project yet, but eventually, I'd like to do something like the following:
`yarn rw setup deploy ...` command

It's a big project and this is the first time I've sketched out what's in my head. No expectations. But I'm hoping it'll get some feedback and, especially, suggestions about how to improve the overall design + figure out possible next steps.
You just split up all the topics beautifully. I have some comments about each topic.
I agree with all the points you've mentioned.
The idea is to have benchmarks reflecting how people are using Redwood. With those benchmarks, the community would have a reference for making architectural decisions according to each product's requirements. They would also help track performance degradation.
'Performance' documentation
I agree. I will start an RFC for redwoodjs.com/docs/performance. A good starting point might be introducing Lighthouse and the Web Vitals library and referring to the prerendering functionality. There are many (perceived) performance subjects (such as virtualization, pagination, animations, and examples of common UI performance bottlenecks) that could be added along the way.
In my experience using Lighthouse, it even helped me improve SEO and caching. Its reports told me that my page was missing SEO tags, that I should enhance accessibility tags, and that the API caching policy was not well configured. So Lighthouse covers many of these concepts, and you can start from it and dig in to understand them better.
Current CI
Lighthouse could be added as another runner alongside the ones the repository currently has. I'm thinking about the benefits of having Lighthouse run on every PR in the main repository. In my opinion, we'd be most interested in the Web Vitals metrics that Lighthouse gives us. How useful these metrics would be on every PR is an open question for me. It helps me in my job because my team knows the check exists and can track it on every PR; maybe the same would apply here?
This is great; I really like this idea of having all of those benchmarks. They should monitor essential things like everything you mentioned and give us important metrics.
This test deployment could also highlight the importance of features like prerendering, which reduces Largest Contentful Paint (LCP).
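For context, enabling prerendering in Redwood is a per-route flag in the Routes file (per the standalone Prerender doc), which is what a benchmark like this could toggle to compare LCP with and without it. A sketch:

```jsx
// web/src/Routes.js — pages are auto-imported by Redwood's build
import { Router, Route } from '@redwoodjs/router'

const Routes = () => {
  return (
    <Router>
      {/* `prerender` generates this page's HTML at build time */}
      <Route path="/" page={HomePage} name="home" prerender />
      <Route notfound page={NotFoundPage} />
    </Router>
  )
}

export default Routes
```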
Would it be helpful for the benchmarks to reflect how users are currently using the framework? Maybe that could guide them in the decision-making process, e.g., showing performance in scenarios with more than one fetch.
Relevant to site performance and search performance (not sure where this fits with Lighthouse):
Helpful overview article from Vercel blog: https://vercel.com/blog/core-web-vitals
I tested this out and added a basic Lighthouse CI step to the CI workflow. Taking it further, so that the data persists and comparisons can be made between commits or trends tracked between releases, would require a dedicated CI server. That server would then accept the Lighthouse results and provide the browsing interface.
@thedavidprice Is this something we'd still be interested in pursuing in the near term, given that requirement?
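For reference, here's roughly what the persistent setup could look like with Lighthouse CI, assuming the `@lhci/cli` package and a self-hosted `@lhci/server` instance; the server URL and token are placeholders, not real infrastructure:

```js
// lighthouserc.js (sketch)
module.exports = {
  ci: {
    collect: {
      // Assumes the test project has been built (`yarn rw build`) first;
      // `yarn rw serve` hosts the web side on :8910 by default.
      startServerCommand: 'yarn rw serve',
      url: ['http://localhost:8910/'],
      numberOfRuns: 3, // median of several runs reduces variance
    },
    upload: {
      // 'lhci' pushes results to a dedicated server so commits/releases
      // can be compared; without one, 'temporary-public-storage' only
      // gives ephemeral report links (the non-persistent case above).
      target: 'lhci',
      serverBaseUrl: 'https://lhci.example.com', // hypothetical
      token: process.env.LHCI_BUILD_TOKEN,
    },
  },
}
```

Running `lhci autorun` in the workflow would then handle both the collect and upload steps.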
https://developers.google.com/web/tools/lighthouse