OceanParcels / Parcels

Main code for Parcels (Probably A Really Computationally Efficient Lagrangian Simulator)
https://www.oceanparcels.org
MIT License

Benchmarking Suite #1712

Open VeckoTheGecko opened 4 weeks ago

VeckoTheGecko commented 4 weeks ago

Establishing and documenting a standard benchmark suite would bring an active focus on performance to the Parcels project.

This benchmarking suite could include whole-simulation tests as well as tests that target specific parts of the codebase (e.g., particle file writing, which would be important in #1661). Note that tests relating to MPI should be realistic whole-simulation tests, as the loading of fieldsets and the locations of particles have a significant impact on performance.
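As a rough illustration of what a whole-simulation benchmark could look like, the sketch below times a short idealised advection run that includes particle-file output. It assumes the documented Parcels API (`FieldSet.from_data`, `ParticleSet`, `pset.ParticleFile`, `pset.execute`); the grid size, particle count, velocity, and runtime are placeholder values chosen only so the script finishes quickly.

```python
import time
from datetime import timedelta

import numpy as np
from parcels import AdvectionRK4, FieldSet, JITParticle, ParticleSet


def run_whole_simulation(npart=1000):
    """Time a small idealised advection run, including particle-file output."""
    # Idealised uniform eastward flow (0.1 m/s) on a 1x1 degree grid
    lon = np.linspace(0.0, 1.0, 50)
    lat = np.linspace(0.0, 1.0, 50)
    data = {"U": np.full((lat.size, lon.size), 0.1, dtype=np.float32),
            "V": np.zeros((lat.size, lon.size), dtype=np.float32)}
    fieldset = FieldSet.from_data(data, {"lon": lon, "lat": lat})

    pset = ParticleSet(
        fieldset=fieldset,
        pclass=JITParticle,
        lon=np.random.uniform(0.2, 0.4, npart),
        lat=np.random.uniform(0.4, 0.6, npart),
    )
    # Output store name; recent Parcels versions write zarr
    output_file = pset.ParticleFile(name="benchmark_run.zarr",
                                    outputdt=timedelta(hours=1))

    start = time.perf_counter()
    pset.execute(AdvectionRK4,
                 runtime=timedelta(days=1),
                 dt=timedelta(minutes=5),
                 output_file=output_file)
    return time.perf_counter() - start


if __name__ == "__main__":
    print(f"whole-simulation run: {run_whole_simulation():.2f} s")
```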

Ideally:

Have a suite of tests that can run on CI (when requested via a PR label), testing various core parts of the codebase and saving/uploading a waterfall report of the execution time and memory use for each function (such as those generated by sciagraph), as well as IO. A minimal sketch of collecting such per-run measurements is given below.
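The following is not sciagraph itself; it only illustrates, using the standard library, the kind of per-block timing and peak-memory numbers a CI job could collect and upload as an artifact. The `measure` helper and the JSON output format are hypothetical.

```python
import json
import time
import tracemalloc
from contextlib import contextmanager


@contextmanager
def measure(label, results):
    """Record wall time and peak traced memory for a block of code."""
    tracemalloc.start()
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        results[label] = {"seconds": elapsed, "peak_bytes": peak}


if __name__ == "__main__":
    results = {}
    with measure("example_workload", results):
        sum(i * i for i in range(10**6))  # stand-in for e.g. particle file writing
    # Dump to JSON so a CI job can upload it as an artifact
    print(json.dumps(results, indent=2))
```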

Known tools:

The benchmarks should be run on a machine with consistent resources (large simulations can be run on Lorenz at IMAU, which has significant resources and access to hydrodynamic forcing data for realistic simulations).

Related:

VeckoTheGecko commented 1 week ago

The more I look at projects in this domain, the more I see asv being used to create benchmarks and compare them across a project's Git history. I propose that we use asv to create benchmarks of notable functions and classes so that we can track their performance over time. Whole-simulation benchmarks will likely need to be handled differently (perhaps using an offline approach).
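A minimal sketch of what an asv benchmark file for Parcels might look like is below. The asv conventions used (a `benchmarks/` directory, `time_*`/`peakmem_*` method prefixes, `params`/`param_names`, a `setup` method) are standard asv features, but the file name and the choice of ParticleSet creation as the benchmarked operation are placeholders, and the Parcels calls assume the documented `FieldSet.from_data` / `ParticleSet` API.

```python
# benchmarks/bench_particleset.py  (hypothetical file in an asv benchmarks/ directory)
import numpy as np
from parcels import FieldSet, JITParticle, ParticleSet


class ParticleSetSuite:
    """asv discovers time_*/peakmem_* methods and tracks them per commit."""

    params = [100, 10_000]      # particle counts to benchmark against
    param_names = ["npart"]

    def setup(self, npart):
        # Small idealised fieldset so the benchmark isolates ParticleSet overhead
        lon = np.linspace(0.0, 1.0, 20)
        lat = np.linspace(0.0, 1.0, 20)
        data = {"U": np.full((20, 20), 0.1, dtype=np.float32),
                "V": np.zeros((20, 20), dtype=np.float32)}
        self.fieldset = FieldSet.from_data(data, {"lon": lon, "lat": lat})
        self.lon0 = np.random.uniform(0.2, 0.4, npart)
        self.lat0 = np.random.uniform(0.4, 0.6, npart)

    def time_particleset_creation(self, npart):
        ParticleSet(fieldset=self.fieldset, pclass=JITParticle,
                    lon=self.lon0, lat=self.lat0)

    def peakmem_particleset_creation(self, npart):
        ParticleSet(fieldset=self.fieldset, pclass=JITParticle,
                    lon=self.lon0, lat=self.lat0)
```

Results for each commit would then be collected with `asv run` and compared across revisions with `asv compare` (or published with `asv publish`); the benchmarked operations themselves are just placeholders for whichever functions and classes we decide are notable.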