paulf81 opened 1 month ago
This is great!! On my end, I would love to see performance on, let's say, AEP calculation for a 100-turbine farm. Would also be nice to see this for different wake model set-ups, in case modifications are made to specific submodels. Benchmarking parallel floris would be a nice plus.
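For concreteness, such an AEP benchmark might look roughly like the sketch below. This is only a sketch assuming the FLORIS v4 `FlorisModel` API; the input file name, turbine spacing, and wind rose are illustrative placeholders, not part of this PR.

```python
import numpy as np
import pytest

from floris import FlorisModel  # assumes the FLORIS v4 API


@pytest.fixture
def fmodel_100():
    # 10 x 10 grid with 5-diameter spacing (126 m rotor used for illustration)
    fmodel = FlorisModel("inputs/gch.yaml")  # example input file
    spacing = 5 * 126.0
    x, y = np.meshgrid(np.arange(10) * spacing, np.arange(10) * spacing)
    fmodel.set(layout_x=x.flatten(), layout_y=y.flatten())
    return fmodel


def test_aep_100_turbines(benchmark, fmodel_100):
    # Uniform wind rose over 72 directions at a single wind speed
    wind_directions = np.arange(0.0, 360.0, 5.0)
    wind_speeds = 8.0 * np.ones_like(wind_directions)
    turbulence_intensities = 0.06 * np.ones_like(wind_directions)
    freq = np.ones_like(wind_directions) / len(wind_directions)

    fmodel_100.set(
        wind_directions=wind_directions,
        wind_speeds=wind_speeds,
        turbulence_intensities=turbulence_intensities,
    )

    def run_aep():
        fmodel_100.run()
        return fmodel_100.get_farm_AEP(freq=freq)

    # The pytest-benchmark fixture calls run_aep repeatedly and records timing statistics
    benchmark(run_aep)
```

Different wake model set-ups could then be covered by parametrizing the fixture over several input files.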
Sounds good @Bartdoekemeijer ! Added a short todo-list above to track the intention
It would be good to add a scaling test, as well. This should be a test of something like 10, 100, 1000 turbines across a meaningful number of conditions (~1000).
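A parametrized pytest-benchmark test could cover that scaling study. The sketch below again assumes the FLORIS v4 `FlorisModel` API; the input file, grid layout, and 1008-case wind rose are illustrative choices.

```python
import numpy as np
import pytest

from floris import FlorisModel  # assumes the FLORIS v4 API


@pytest.mark.parametrize("n_turbines", [10, 100, 1000])
def test_scaling(benchmark, n_turbines):
    fmodel = FlorisModel("inputs/gch.yaml")  # example input file

    # Square-ish grid with 5-diameter spacing (126 m rotor used for illustration)
    n_side = int(np.ceil(np.sqrt(n_turbines)))
    spacing = 5 * 126.0
    x, y = np.meshgrid(np.arange(n_side) * spacing, np.arange(n_side) * spacing)
    layout_x = x.flatten()[:n_turbines]
    layout_y = y.flatten()[:n_turbines]

    # ~1000 conditions: 72 wind directions x 14 wind speeds = 1008 cases
    wd, ws = np.meshgrid(np.arange(0.0, 360.0, 5.0), np.arange(4.0, 18.0, 1.0))
    wd = wd.flatten()
    ws = ws.flatten()
    ti = 0.06 * np.ones_like(wd)

    fmodel.set(
        layout_x=layout_x,
        layout_y=layout_y,
        wind_directions=wd,
        wind_speeds=ws,
        turbulence_intensities=ti,
    )

    # Time only the wake calculation itself
    benchmark(fmodel.run)
```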
Thank you for your comments @rafmudaf ! I'll hope to take another pass at this soon and incorporate your suggestions
Add automatic benchmarking to FLORIS
This draft PR adds automatic code benchmarking to FLORIS. The proposed solution is to use pytest-benchmark to implement a set of timing tests:
https://pytest-benchmark.readthedocs.io/en/latest/
https://github.com/ionelmc/pytest-benchmark
These tests would then be scheduled for semi-daily execution with logged performance results so we can track changes over time, focusing on:
https://github.com/benchmark-action/github-action-benchmark
To this end, I added a first benchmarking test to the `tests/` folder and confirmed that running `pytest floris_benchmark_test.py` produces a benchmark result. At this point I'd like to open this up for discussion and further research.
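As a rough illustration only (not the actual contents of the PR), a minimal pytest-benchmark test in that file could look like the following, assuming the FLORIS v4 `FlorisModel` API and an example input file:

```python
from floris import FlorisModel  # assumes the FLORIS v4 API


def test_small_farm_timing(benchmark):
    # Three turbines in a row with 5-diameter spacing (126 m rotor for illustration)
    fmodel = FlorisModel("inputs/gch.yaml")  # example input file
    fmodel.set(
        layout_x=[0.0, 630.0, 1260.0],
        layout_y=[0.0, 0.0, 0.0],
        wind_directions=[270.0],
        wind_speeds=[8.0],
        turbulence_intensities=[0.06],
    )

    # pytest-benchmark's `benchmark` fixture runs the callable over many
    # rounds and reports min/mean/stddev timing statistics
    benchmark(fmodel.run)
```

If pytest is invoked with `--benchmark-json=output.json`, the results are written to a JSON file that, as I understand it, github-action-benchmark's pytest tool can consume for the scheduled tracking described above.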
To include: