jbandoro opened 8 months ago
🤖
That sounds like a great idea! Continuous performance benchmarking can definitely help in identifying performance improvements or regressions. Using github-action-benchmark and pytest-benchmark seems like a good approach. Your proposal to also benchmark the DAG parsing times for various load methods is a valuable addition. We encourage you to go ahead and create a pull request for this feature. Your contribution will be much appreciated!
After performance integration tests were added in #827, a reasonable follow-up would be to integrate a tool like github-action-benchmark for continuous benchmarking of Cosmos performance, so that possible performance improvements or regressions can be detected by comparing benchmark results.
The work would involve using pytest-benchmark to output the performance benchmark results and then storing them with the GH action, as in the example here. We could also set up alerts for PRs.
A follow-up to this could also involve benchmarking the DAG parsing times for various load methods, e.g. `LoadMode.DBT_LS` and `LoadMode.DBT_MANIFEST`, to track parsing performance.