The idea is to follow up on some performance-sensitive tests by benchmarking them: their overall timing would be exported and compared against previous CI runs. This would let us notice changes in code performance after code changes or upgrades.

For example: listing services takes some time; a code change might affect that, and this would allow us to "see" in a graph how much impact the change has.
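As a sketch of what such a benchmarked test could look like, assuming pytest-benchmark is installed; `list_services` and its import path are hypothetical stand-ins for the real service-listing call:

```python
# Minimal sketch, assuming pytest-benchmark.
# `list_services` is a hypothetical stand-in for the real call.
from myapp.services import list_services  # hypothetical import


def test_list_services_benchmark(benchmark):
    # The `benchmark` fixture runs the callable repeatedly and records
    # timing statistics (min, max, mean, stddev, ...), which can be
    # exported to JSON with `pytest --benchmark-json=output.json`.
    services = benchmark(list_services)
    # Keep a functional assertion so the test still validates behavior.
    assert services is not None
```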
Following this article: https://towardsdatascience.com/benchmarking-pytest-with-cicd-using-github-action-17af32b4a30b
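A minimal sketch of the CI side of that approach, using the benchmark-action/github-action-benchmark action to store results and graph them across runs; action versions, paths, branch names, and thresholds below are assumptions, not tested config:

```yaml
# Sketch of a benchmark workflow; names and paths are placeholders.
name: benchmarks

on:
  push:
    branches: [main]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: pip install pytest pytest-benchmark
      - name: Run benchmarks and export timings as JSON
        run: pytest tests/benchmarks --benchmark-json=output.json
      - name: Compare against previous runs and publish graph
        uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: "pytest"
          output-file-path: output.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          # Store results on the gh-pages branch so trends are graphed over time.
          auto-push: true
          # Alert when a benchmark regresses beyond 2x its previous timing.
          alert-threshold: "200%"
          comment-on-alert: true
```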