limzykenneth / p5-benchmark

https://limzykenneth.github.io/p5-benchmark/
BSD 2-Clause "Simplified" License

Updating benchmarks #36

Status: Open · opened by wong-justin 1 year ago

wong-justin commented 1 year ago

Hi @limzykenneth , I was thinking about contributing a little to this benchmarking repo.

Some ideas:

- Adding more p5.js versions to the benchmark (e.g. the latest releases)
- A GitHub Action that automatically adds new versions when p5.js publishes a release

Or some things you listed in the README, like configuring for custom builds and benchmarking more functions.

What do you think?

This would be related to my GSoC project on shader filters, which is tied to performance. But I'm still considering other ideas to make measuring performance easier for contributors, even something simple like an explainer content page, sorta like this. I think whatever I do will be limited in scope, since performance isn't usually a priority and there are only a few weeks remaining in the project.

limzykenneth commented 1 year ago

Hi @wong-justin, adding additional p5.js versions should be relatively straightforward; you can create a PR to do so if you like.

As for a GitHub Action that automatically adds new versions on release, I haven't looked into it recently, but the main constraint will be letting an event from the p5.js repo trigger an action in this repo; I'm not sure that is supported directly by GitHub Actions. I would also like to avoid adding such an action to the main p5.js repo, to reduce clutter there. If you have some ideas around this, it would be good to investigate.
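One option that would stay entirely inside this repo is a scheduled workflow that polls the npm registry for the latest published p5 version and compares it against the versions already benchmarked. A rough sketch of such a check script (Node 18+ ESM; the file name `versions.json` is a placeholder, not this repo's actual layout):

```js
// checkVersion.js: a hypothetical helper that a scheduled workflow in this
// repo could run. It asks the npm registry for the latest published p5
// version and compares it against a local list of benchmarked versions.
import { readFile } from "node:fs/promises";

const res = await fetch("https://registry.npmjs.org/p5/latest");
const { version: latest } = await res.json();

// "versions.json" stands in for wherever this repo lists benchmarked versions.
const known = JSON.parse(await readFile("versions.json", "utf8"));

if (!known.includes(latest)) {
  console.log(`New p5.js release detected: ${latest}`);
  process.exitCode = 1; // a non-zero exit lets the workflow step react to it
} else {
  console.log(`Already benchmarking ${latest}, nothing to do.`);
}
```

The surrounding workflow would just run this on a cron schedule and, when the step signals a new version, open a PR adding it, without any change to the main p5.js repo.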

One thing that would probably need to happen before either of those is to move from Travis CI to GitHub Actions. I'm not even sure the CI benchmark is still running, because Travis's change of terms has made it near impossible to use for free/open-source projects.

I have several ideas for things I want to do, but I just haven't found the right time for them. Feel free to pitch any ideas you have that would be relevant to what you are working on and we can work out the details.

wong-justin commented 1 year ago

I can probably make that PR for adding a new version within the next couple of weeks.

I don't have much to pitch. I'll just show my two use cases for measuring performance so you have some context.

First, I used this repo to show how slow filter() was for the GSoC proposal (see the attached old_filter_benchmarks screenshot), so it would be nice to keep this tool updated in case somebody has a similar use case in the future. Several weeks ago I looked at how the p5.js-website repo pulls from the core library (overview here). It uses a combination of Grunt and GitHub Actions, and it doesn't clutter the main p5.js repo. Maybe that's worth imitating here. I'll need a bit of learning time though, because I don't have any experience with these tools.

The other time I wanted to measure performance was for a side-by-side comparison of two different builds.

https://github.com/limzykenneth/p5-benchmark/assets/28441593/210324c8-df9f-4ff4-bedd-6d3942e992d7

I set up two sketches in separate windows, used the widget linked at p5.js/test/manual-test-examples/webgl/stats.js, and recorded the screen. It was a very manual process to set up, so it would be nice to have an easier way to compare in-progress builds. I'm not sure of the best way to go about this, though: maybe by adding to this repo, or maybe with some other tool like jsperf.
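A lighter-weight variant of that comparison is a rough sketch like the one below (not part of this repo, and it assumes p5 is loaded in global mode): run the operation under test every frame against one build, log an average frame rate, then repeat with the other build and compare the two numbers instead of recording the screen.

```js
// Minimal FPS-measuring sketch: run filter(BLUR) every frame and report the
// average frame rate after a warm-up period. Swap in whichever p5.js build
// you want to measure, run it once per build, and compare the logged numbers.
let samples = [];

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(220);
  circle(width / 2, height / 2, 100);
  filter(BLUR, 3); // the operation being compared across builds

  if (frameCount > 60) {        // skip the first second as warm-up
    samples.push(frameRate());
  }
  if (samples.length === 300) { // roughly 5 seconds of samples at 60 fps
    const avg = samples.reduce((a, b) => a + b, 0) / samples.length;
    console.log(`average fps: ${avg.toFixed(1)}`);
    noLoop();
  }
}
```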

I'm sure you've seen more use cases though and have better ideas. So I'm interested in hearing what's needed by others like you and seeing how I can contribute.

limzykenneth commented 1 year ago

The p5.js-website update on p5.js release is triggered from p5.js, because both are owned by the "processing" organization on GitHub and we use a specific personal access token to run a git push to p5.js-website. It works, but it's not particularly elegant; if there were an event from another repo that GitHub Actions in this repo could be triggered by, that would be my ideal solution.
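For reference, GitHub Actions does have a `repository_dispatch` trigger that can be fired from anywhere via the REST API. The catch is that something still has to make that call with a token that can write to this repo, so it trades the git push for an API request rather than removing the cross-repo coupling entirely. A rough sketch of the sending side (Node 18+; the token and event names here are placeholders):

```js
// Fire a repository_dispatch event so a workflow in limzykenneth/p5-benchmark
// that declares `on: repository_dispatch` can pick it up. DISPATCH_TOKEN must
// be able to write to the target repo; P5_VERSION is a placeholder payload.
const response = await fetch(
  "https://api.github.com/repos/limzykenneth/p5-benchmark/dispatches",
  {
    method: "POST",
    headers: {
      Accept: "application/vnd.github+json",
      Authorization: `Bearer ${process.env.DISPATCH_TOKEN}`,
    },
    body: JSON.stringify({
      event_type: "p5-release",
      client_payload: { version: process.env.P5_VERSION },
    }),
  }
);

if (!response.ok) {
  throw new Error(`Dispatch failed: ${response.status}`);
}
```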

Side-by-side tests are a bit out of scope for this benchmark, mainly because this benchmark (which uses benchmark.js; it's a bit old, but there's no really good replacement, and other libraries work the same way anyway) aims to run test cases as fast as possible, i.e. it takes up all available system resources where it can. Running two builds side by side means it cannot do so, so a different kind of test architecture will probably be needed for that kind of benchmarking.
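For context, a minimal benchmark.js suite looks roughly like this (assuming p5 is already loaded in global mode; this illustrates the model, not this repo's actual harness):

```js
const Benchmark = require("benchmark");

const suite = new Benchmark.Suite();

suite
  .add("background()", () => {
    background(200);
  })
  .add("filter(BLUR)", () => {
    filter(BLUR, 3);
  })
  .on("cycle", (event) => {
    // benchmark.js reports each case as ops/sec with a margin of error
    console.log(String(event.target));
  })
  .run({ async: true });
```

Each case is run as many times as possible within its timing window, which is why two builds can't meaningfully share the same machine at the same time.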