Open DaGeRe opened 3 years ago
This comment has been imported from an old repository. It was originally created by @DaGeRe at 2021-03-11T09:56:39Z
While that is a good goal in general, its implementation is complicated.
It would be possible to give Peass a target time and let it repeat VM executions until that time is used up. Afterwards, a significance test could be used to check whether there is a performance change.
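A minimal sketch of what such a time-budget loop could look like (plain Java with Apache Commons Math, not the actual Peass API; the budget, significance level and the `runVm` stand-in are made up for illustration):

```java
import org.apache.commons.math3.stat.inference.TTest;

import java.util.ArrayList;
import java.util.List;

/**
 * Sketch only: repeat measurements of the old and the new version until a
 * given time budget is exhausted, then use a t-test to decide whether the
 * observed difference is significant.
 */
public class TimeBudgetMeasurement {

   public static void main(String[] args) {
      final long budgetMillis = 30_000;   // hypothetical target time
      final double alpha = 0.01;          // hypothetical significance level
      final long start = System.currentTimeMillis();

      final List<Double> oldDurations = new ArrayList<>();
      final List<Double> newDurations = new ArrayList<>();

      // Alternate VM runs of both versions until the time budget is spent
      while (System.currentTimeMillis() - start < budgetMillis) {
         oldDurations.add(runVm("old"));
         newDurations.add(runVm("new"));
      }

      final double pValue = new TTest().tTest(toArray(oldDurations), toArray(newDurations));
      System.out.println("p-value: " + pValue + " -> "
            + (pValue < alpha ? "performance change" : "no significant change"));
   }

   /** Stand-in for starting one measurement VM and returning its duration in ms. */
   private static double runVm(String version) {
      try {
         Thread.sleep(100); // pretend one VM execution takes roughly 100 ms
      } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
      }
      return 100 + Math.random();
   }

   private static double[] toArray(List<Double> values) {
      return values.stream().mapToDouble(Double::doubleValue).toArray();
   }
}
```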
Unfortunately, this simple approach has some drawbacks.
We should therefore first integrate the measurement (of at least a part of the unit tests) into your CI process with an example configuration, and afterwards check again whether the configuration can be eased like this.
It is fairly hard to choose a proper configuration that a) does not take too long to finish and b) is still able to reveal small performance differences.
Is it possible to let the tool figure out an appropriate configuration from a given target time and significance level? As a start the tool might make poor assumptions, but it could offer the resulting configuration as an optional setting in the interface.
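As an illustration only (nothing of this exists in Peass; all names, the calibration step and the 50/50 split of the budget between the two versions are assumptions), deriving a suggested configuration from a target time could roughly work like this:

```java
/**
 * Hypothetical sketch: a short calibration run estimates the duration of one
 * VM execution, and the target time is split into the number of VMs per version.
 */
public class ConfigurationSuggestion {

   public static void main(String[] args) {
      final long targetTimeMillis = 30 * 60 * 1000;          // assumed target time: 30 minutes
      final long calibrationVmMillis = estimateVmDuration(); // one trial VM execution

      // Half of the budget per version; at least 2 VMs so a significance test is possible
      final int vmsPerVersion = (int) Math.max(2, (targetTimeMillis / 2) / calibrationVmMillis);

      System.out.println("Suggested configuration: " + vmsPerVersion + " VMs per version"
            + " (estimated VM duration: " + calibrationVmMillis + " ms)");
   }

   /** Stand-in: run one VM once and return its wall-clock duration in ms. */
   private static long estimateVmDuration() {
      return 60_000;
   }
}
```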