NymanRobin opened 1 month ago
There already seems to be some kind of check-prow-config job: https://prow.apps.test.metal3.io/view/s3/prow-logs/pr-logs/pull/metal3-io_project-infra/821/check-prow-config/1815332764979826688
Maybe this can be used to block PRs until the config is correct, but it needs to be double-checked whether it works as expected :thinking:
check-prow-config only validates that the config is syntactically correct and won't break Prow when deployed. It does nothing (or very little at most) to verify the config beyond that.
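For reference, that same validation can be run locally with Prow's `checkconfig` tool before opening a PR. A minimal sketch; the paths are assumptions and depend on where this repo actually keeps its Prow config:

```shell
# Validate the Prow config, plugin config, and job configs locally.
# Paths are illustrative; adjust to project-infra's actual layout.
checkconfig \
  --config-path=prow/config.yaml \
  --plugin-config=prow/plugins.yaml \
  --job-config-path=prow/jobs \
  --strict
```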
I wholeheartedly agree that PR merging -> config deployment should be automated, not two independent operations. If the automation is done properly, we may not need a test cluster for deployment: we can simply revert the config and manually merge the revert to restore the cluster. Whether we need a canary cluster is up for discussion, though.
/triage accepted
Current Situation
Currently there are no clear instructions on when or how to update the Prow cluster, besides a small note in the Prow README
("Apply the changes and then create a PR with the changes.").
However, this can lead to the configuration in the repository and the live cluster diverging, for example when two people work on the cluster at the same time and overwrite each other's work. It was also seen recently with image bumps: there was no clear process, which left one PR hanging while main diverged from the live cluster.

Potential Solution
What would be beneficial is a process so that all updates are handled in one consistent way, plus some automation to support it. Some ideas for the automation:

- Automatically apply merged changes to the cluster. This of course carries the risk that a bad change breaks the automation itself.
- Check the diff between the live cluster and a PR, and only allow the merge once the PR's changes are present in the cluster.
- Run a periodic job that alerts when there is a diff between main and the live cluster.
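The periodic drift check could be sketched as a Prow periodic that runs `kubectl diff` against the config checked out from main. This is only an illustration: the job name, image, manifest path, and interval are assumptions, not this repo's actual layout, and the job would need credentials for the live cluster.

```yaml
# Hypothetical periodic job: fails (and thus alerts) when the live
# cluster diverges from the manifests on main. Names and paths are
# illustrative only.
periodics:
- name: periodic-prow-config-drift
  interval: 6h
  decorate: true
  extra_refs:
  - org: metal3-io
    repo: project-infra
    base_ref: main
  spec:
    containers:
    - image: bitnami/kubectl:latest
      command:
      - /bin/sh
      - -c
      - |
        # `kubectl diff` exits non-zero when live objects differ from
        # the manifests, which fails the job and surfaces the drift.
        kubectl diff -f prow/manifests/
```

Failing on drift keeps the signal visible in the Prow dashboard without giving the job write access to the cluster.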