Open BobVanB opened 2 years ago
I think this issue may be resolved by a fix in v2.7.0. https://github.com/rancher/dashboard/issues/6881 Previously, the UI was changing YAML values even when the user had not made any changes, but it should no longer do that since the fix.
@bobvanb have you tried this in a newer version of Rancher? As noted, we may have fixed this in 2.7.0.
Yup, we tested it in version 2.7.6 and it is also broken in a different way. https://github.com/rancher/dashboard/issues/10330
For internal coordination, SURE-8094 is our reference.
Pushing to 2.10 since this depends on two blocked backend tickets. Candidate for a 2.9.x backport.
Still an issue in v2.6.5
https://github.com/rancher/rancher/issues/36197
[Adding context from that ticket in case things get lost. For engineering, this should be easy to replicate.]
Rancher Server Setup
Information about the Cluster
rancher_cluster/resource definition:
User Information
Default rancher container, nothing special.
Describe the bug
Editing a cluster with "Edit as YAML" in the Rancher UI (Cluster Management) will add defaults to the encryption.yaml. This will lead to a kube-apiserver that will not start and a cluster that ends up in an error state.
To Reproduce
1. Start Rancher:
   docker run -d --rm -p 443:443 --privileged --name rancher rancher/rancher:v2.6.2
2. Create an API token for the admin user.
3. Set the hostname to http://rancher in the global settings.
4. Create a cluster through the API and register an encryption provider. This will create an encryption.yaml without aesgcm, kms, or secretbox. You can probably get the same result by creating a cluster through the UI and going to step 7.
5. Register a machine with all roles:
   docker run --privileged -d --name test-cluster --link rancher:rancher docker:dind
6. Take note of the local EncryptionConfiguration on the machine: /etc/kubernetes/ssl/encryption.yaml (a sketch of what this file typically contains follows this list).
7. Open "Edit as YAML" on the cluster.
8. Hit Save, with no changes.
9. Wait for the new encryption config.
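For step 6, the file noted on the node is a standard Kubernetes EncryptionConfiguration. The snippet below is only a rough sketch of what it typically looks like at that point; the provider type, key name, and secret are placeholders and depend on the encryption provider registered in step 4:

```yaml
# Sketch of /etc/kubernetes/ssl/encryption.yaml before the UI edit
# (placeholders only; the actual provider, key name, and secret come from the registered encryption provider)
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:                            # example provider; whatever was registered in step 4
          keys:
            - name: key1                   # placeholder key name
              secret: <base64-encoded key> # placeholder secret
      - identity: {}                       # fallback so previously unencrypted secrets stay readable
# note: no aesgcm, kms, or secretbox entries are present at this point
```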
Result
The EncryptionConfiguration now contains default providers that are empty. kube-apiserver rejects this configuration and the cluster goes into an error state.
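The issue does not capture the exact YAML the UI writes back, but based on the description ("default providers that are empty"), the saved file presumably ends up looking something like the hypothetical sketch below, which kube-apiserver refuses to start with because empty provider blocks fail validation:

```yaml
# Hypothetical sketch of encryption.yaml after hitting Save with no changes:
# empty default provider blocks have been appended, which kube-apiserver rejects
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded key>
      - aesgcm: {}       # empty: no keys, fails validation
      - kms: {}          # empty: no name/endpoint, fails validation
      - secretbox: {}    # empty: no keys, fails validation
      - identity: {}
```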
Expected Result
The kube-apiservers are restarted without errors.
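One way to check this (a sketch, assuming an RKE1-style node like the one registered above, where kube-apiserver runs as a Docker container on the machine):

```sh
# Run on the registered node; the container name kube-apiserver assumes an RKE1-provisioned machine
docker ps --filter name=kube-apiserver      # container should be running, not restart-looping
docker logs --tail 50 kube-apiserver        # should show no EncryptionConfiguration errors
cat /etc/kubernetes/ssl/encryption.yaml     # should still match what was noted in step 6
```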
Additional context
Workaround 1 to fix it:
Workaround 2 to fix it:
kubectl