Currently, when a new microbenchmark is added on master, it will not be part of the weekly run until the microbenchmark exists on both a release version and master. The reason is that the weekly job only compares microbenchmarks available on both revisions it runs against (usually master vs. the latest patch release). If a microbenchmark is missing from either set, it will not be published to the Google Sheets and dashboard.
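To illustrate the current filtering behavior, here is a minimal sketch (assuming hypothetical result-set types keyed by benchmark name; not the actual job code): only benchmarks present in both revisions survive to publication.

```go
package main

import "fmt"

// publishable keeps only benchmarks present in both the master and release
// result sets; anything missing from either side is never published.
func publishable(master, release map[string][]float64) []string {
	var names []string
	for name := range master {
		if _, ok := release[name]; ok {
			names = append(names, name)
		}
	}
	return names
}

func main() {
	master := map[string][]float64{"BenchmarkScan": {1.2}, "BenchmarkNewThing": {0.9}}
	release := map[string][]float64{"BenchmarkScan": {1.1}}
	// BenchmarkNewThing is dropped because it only exists on master.
	fmt.Println(publishable(master, release))
}
```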
This is not ideal, because a new microbenchmark does not start adding value until a release containing it has been cut. One way to get more value out of a new microbenchmark would be to use its introduction revision as its baseline. This is, however, not trivial, because the job runs two specific revisions for all microbenchmarks, which requires building the binaries and running them in a distributed fashion.
One solution would be to use a sort of cache that keeps the data from the first run of a microbenchmark on master. Should the microbenchmark become available via a release, the cached entry is dropped. The weekly job would compare new microbenchmarks separately, using the cached data as the baseline. In essence, we would have two runs: one for established microbenchmarks to detect regressions, and one for unestablished microbenchmarks that are not yet part of a release (see the sketch below).
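A rough sketch of the proposed split, under assumed names (the `BenchmarkRun` type, `splitBenchmarks`, and the cache shape are all hypothetical): cached first-run results serve as the baseline for benchmarks that are not yet in any release, and the cache entry is dropped once the benchmark ships in a release.

```go
package weekly

// BenchmarkRun holds results for one benchmark at one revision
// (hypothetical type for illustration).
type BenchmarkRun struct {
	Name     string
	Revision string
	NsPerOp  float64
}

// splitBenchmarks partitions the master results into established benchmarks
// (also present on the release revision, compared as today) and unestablished
// ones (compared against their cached first master run instead). Each entry
// is a [baseline, current] pair.
func splitBenchmarks(
	master, release map[string]BenchmarkRun,
	cache map[string]BenchmarkRun, // first-run-on-master baselines
) (established, unestablished map[string][2]BenchmarkRun) {
	established = make(map[string][2]BenchmarkRun)
	unestablished = make(map[string][2]BenchmarkRun)
	for name, cur := range master {
		if base, ok := release[name]; ok {
			// Benchmark exists in a release: drop any cache entry and
			// compare release vs. master as before.
			delete(cache, name)
			established[name] = [2]BenchmarkRun{base, cur}
		} else if base, ok := cache[name]; ok {
			// New benchmark with a cached introduction run: compare
			// against that baseline in the separate run.
			unestablished[name] = [2]BenchmarkRun{base, cur}
		} else {
			// First time we see this benchmark: seed the cache so a
			// later run has a baseline to compare against.
			cache[name] = cur
		}
	}
	return established, unestablished
}
```

With this split, the established set feeds the existing regression comparison and publication path, while the unestablished set is compared and reported separately until a release containing those benchmarks exists.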
Jira issue: CRDB-44686