alexfauquette opened this issue 3 months ago
How do you envision getting reliable data? Do we run it multiple times, or run it once and rerun manually when we want to confirm a result?
I was considering something like running the experiment 5 times and picking the median, to avoid outlier data points.
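A minimal sketch of that aggregation step, where `runBenchmark` is a hypothetical placeholder for whatever measurement the CI job performs:

```typescript
// Run a benchmark several times and report the median, so a single
// noisy run does not skew the result.
function median(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

async function medianOfRuns(
  runBenchmark: () => Promise<number>, // placeholder: one measured run
  runs = 5,
): Promise<number> {
  const samples: number[] = [];
  for (let i = 0; i < runs; i += 1) {
    samples.push(await runBenchmark());
  }
  return median(samples);
}
```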
About triggering: running it on each commit seems like too much. We could have a manual trigger, and also run it on main branch commits to track the evolution of the package over time.
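In GitHub Actions terms, that combination could look like the following (a sketch; the workflow name and branch name are assumptions):

```yaml
# Hypothetical triggers: no run per PR commit, only a manual dispatch
# plus every push to the main branch.
name: charts-benchmark
on:
  workflow_dispatch: {}
  push:
    branches: [master] # assumed default branch name
```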
The reason behind a manual trigger is that the data grid had a bot showing the perf impact on each PR, and I almost never read it.
It might be interesting to explore a tool such as https://codspeed.io/
The idea would be to add a test that measures chart performance when a PR is opened, to avoid regressions on this metric, since it's easier to hurt performance than to restore it.
How to measure
I assume the best approach might be an end-to-end test with Playwright, to measure under real-world conditions.
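One way a Playwright spec could get a number back from the page is to wrap the chart render in performance marks and read the elapsed duration via `page.evaluate`. A sketch of that page-side measurement, where `renderCharts` is a hypothetical placeholder for mounting the charts under test:

```typescript
import { performance } from "node:perf_hooks";

// Time a render by surrounding it with performance marks; the resulting
// duration is what the e2e test would assert against a threshold.
function timeRender(renderCharts: () => void): number {
  performance.mark("charts-start");
  renderCharts();
  performance.mark("charts-end");
  const measure = performance.measure(
    "charts-render",
    "charts-start",
    "charts-end",
  );
  return measure.duration;
}
```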
What should be measured