JuliaParallel / Dagger.jl

A framework for out-of-core and parallel execution

Add automated benchmarks, stress testing, and other analyses #457

Open jpsamaroo opened 9 months ago

Dagger is a complicated set of interacting components and APIs, so it would be very useful to track its performance, scalability, and latency over time, both to ensure that we don't introduce unexpected regressions and to be able to make claims about performance and suitability with some confidence.

To that end, I believe it would be valuable to run automated benchmarks, stress tests, and other analyses on every merge to master.
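As a starting point, such a per-merge suite could be a minimal sketch along these lines, assuming BenchmarkTools.jl; the task graphs below (a trivial latency probe and a fan-out/fan-in graph) are hypothetical stand-ins for real Dagger workloads, and the file name `results.json` is a placeholder:

```julia
# Sketch of an automated benchmark suite for Dagger (assumes BenchmarkTools.jl).
using Dagger, BenchmarkTools

const SUITE = BenchmarkGroup()

# Latency: time to schedule and fetch a single trivial task.
SUITE["latency"] = @benchmarkable fetch(Dagger.@spawn 1 + 1)

# Scalability: a fan-out/fan-in graph of N independent tasks.
function fanout(N)
    tasks = [Dagger.@spawn i + 1 for i in 1:N]
    sum(map(fetch, tasks))
end
SUITE["fanout"] = @benchmarkable fanout(1_000)

results = run(SUITE; verbose = true)
# Serialize the raw results for later upload/analysis.
BenchmarkTools.save("results.json", results)
```

Running this on every merge would give us a time series of raw measurements rather than only pass/fail signals.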

To make the collected information useful, we should automatically export the associated data in raw form to some persistent storage (say, S3), together with any generated plots or aggregate metrics. We can use something like https://github.com/SciML/SciMLBenchmarks.jl/blob/84462b8f1e5c974df9f396ca4d9b4900e1108a21/.buildkite/run_benchmark.yml to upload to S3, and then provide a script or code to download and analyze this data.
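In the spirit of the linked SciMLBenchmarks pipeline, the upload step might be a Buildkite fragment roughly like the following; the bucket name, queue, script path, and file names here are all placeholders, not an actual pipeline:

```yaml
# Hypothetical Buildkite step: run the suite, then push raw results to S3.
steps:
  - label: ":julia: Dagger benchmarks"
    command: |
      julia --project=benchmarks benchmarks/runbenchmarks.jl
      # Key raw results by commit so regressions can be bisected later.
      aws s3 cp results.json "s3://<bucket>/dagger-benchmarks/${BUILDKITE_COMMIT}/results.json"
    agents:
      queue: "<queue-name>"
```

Keying the uploaded artifacts by commit hash would let the analysis script correlate any regression directly with the merge that introduced it.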

An extra bonus would be to publish this data to https://daggerjl.ai/ so that we can show off our performance gains over time.