nodejs / performance

Node.js team focusing on performance
MIT License

Discover performance degradation trends across releases #13

Open UlisesGascon opened 1 year ago

UlisesGascon commented 1 year ago

I was wondering whether we have run "full" benchmarking tests against all the LTS versions in the past and whether those results were stored. Maybe we can discover performance degradation trends over time if we compare different releases.

This aligns (somewhat) with the baseline idea proposed by @RafaelGSS, but extends the comparison to more releases (not just the latest one).

mcollina commented 1 year ago

We used to have this, but we folded the team due to lack of contributions.

sheplu commented 1 year ago

It would indeed be very interesting to have a set of benchmarks run every week (?) and to keep the values so we can build charts or other visuals. As you said, it would help us visualize performance issues but also demonstrate optimizations.
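Once those weekly (or per-release) values are stored, flagging a degradation is a small computation. A hypothetical sketch, assuming results are kept as ordered `{ version, opsPerSec }` records and using an arbitrary 5% threshold:

```javascript
'use strict';
// Hypothetical sketch: given benchmark results ordered oldest-to-newest,
// compute the percent change between consecutive entries and flag any
// drop beyond a threshold. The record shape and threshold are assumptions.
function findRegressions(results, thresholdPct = 5) {
  const regressions = [];
  for (let i = 1; i < results.length; i++) {
    const prev = results[i - 1];
    const curr = results[i];
    const changePct = ((curr.opsPerSec - prev.opsPerSec) / prev.opsPerSec) * 100;
    if (changePct < -thresholdPct) {
      regressions.push({ from: prev.version, to: curr.version, changePct });
    }
  }
  return regressions;
}

// Fabricated example data, for illustration only:
const history = [
  { version: 'v16.0.0', opsPerSec: 1000 },
  { version: 'v18.0.0', opsPerSec: 980 },  // -2%: within the threshold
  { version: 'v20.0.0', opsPerSec: 700 },  // large drop: flagged
];
console.log(findRegressions(history));
```

The same data would also feed the charts directly, since each record is already a (version, value) point.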

joyeecheung commented 1 year ago

We used to have it at https://github.com/nodejs/benchmarking/tree/master/benchmarks; I have no idea where the data went, though (I think it covered up to v12.x?) cc @mhdawson

mhdawson commented 1 year ago

We used to publish benchmarking data to https://nodejs.org/benchmarking, but as mentioned that was discontinued when the benchmarking team petered out.

There have also been a number of discussions around how to use the microbenchmarks to track performance between versions. The challenge was always that a full run takes a very long time (days), and then you have a large set of numbers to compare/validate. The goal was to identify a subset that made sense / was valuable, but we never managed to figure that out. Instead, the benchmarks that were run, like acme air, were intended to be more real-world measurements and were faster to run.

I do think tracking performance between versions would be valuable if we can line up people to contribute/review the results.
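For the between-versions comparison itself, Node.js core already ships tooling (`benchmark/compare.js` plus an R script for the statistics) that runs a benchmark under an old and a new binary and compares the samples. The heart of that step is simple arithmetic over repeated samples; a hypothetical sketch that only computes means, without the significance testing real tooling applies:

```javascript
'use strict';
// Hypothetical sketch of the comparison step: given repeated samples
// (e.g. ops/sec) from an "old" and a "new" binary for one benchmark,
// report the means and the mean relative difference. Real tooling
// additionally applies a statistical significance test; this does not.
function mean(samples) {
  return samples.reduce((sum, x) => sum + x, 0) / samples.length;
}

function compareSamples(oldSamples, newSamples) {
  const oldMean = mean(oldSamples);
  const newMean = mean(newSamples);
  return {
    oldMean,
    newMean,
    improvementPct: ((newMean - oldMean) / oldMean) * 100,
  };
}

// Fabricated example samples, for illustration only:
console.log(compareSamples([100, 102, 98], [110, 111, 109]));
```

Repeated samples matter because single runs are noisy; any real pipeline would want many iterations per binary before trusting the percentage.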

wa-Nadoo commented 1 year ago

There is a project with a similar goal, https://github.com/mscdex/nodebench. Maybe it can be used as a starting point for the implementation.

UlisesGascon commented 1 year ago

Thanks for the feedback and the historical context on the benchmarking and microbenchmark efforts. I agree with @mhdawson that focusing on concrete microbenchmarks makes total sense, especially if we want to avoid long feedback loops.

Thanks @wa-Nadoo for the suggestion, the tool seems fantastic. I love the way the UI works, but there are some limitations, e.g. for the fs benchmarks, due to the machine used to run the tests. I've attached a screenshot as a reference 🙂

(screenshot: screencapture-mscdex-github-io-nodebench-2022-11-26-08_28_20)

anonrig commented 1 year ago

I think the work done by @RafaelGSS solves this issue.

tniessen commented 1 year ago

@anonrig I can't seem to find any context on your comment. What work are you referring to exactly?

anonrig commented 1 year ago

@tniessen I don't quite remember, but @RafaelGSS did a fantastic job on https://github.com/RafaelGSS/nodejs-bench-operations and was working on generating reports & graphs at that time.