Closed: raz-bn closed this issue 1 year ago
I wonder if there is any reason for creating the user-data-{NAMESPACE}-{HASH}-userdata
secret, since it is basically identical to the user-data-{NAMESPACE}-{HASH}
secret. Why not just mount the existing secret into the VMs?
The secret is duplicated because when a new KubevirtMachine
is created, it looks for a userdata
data key inside this secret, while the original secret stores the same payload under the value data key.
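In other words, the duplicated secret carries the same bytes under a different key. A minimal sketch of that re-keying, with plain maps standing in for the two Secrets' .data fields (the helper name is hypothetical, not the controller's actual code):

```go
package main

import "fmt"

// copyUserData re-keys the bootstrap payload: the original secret stores it
// under the "value" data key, while KubevirtMachine looks for a "userdata"
// key. The function name here is hypothetical, for illustration only.
func copyUserData(src map[string][]byte) map[string][]byte {
	return map[string][]byte{"userdata": src["value"]}
}

func main() {
	// Stand-ins for the .data of the two secrets described above.
	original := map[string][]byte{"value": []byte("#cloud-config\n")}
	duplicated := copyUserData(original)
	fmt.Printf("%q\n", duplicated["userdata"])
}
```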
It looks like the solution should be not to skip the secret-creation code when the duplicated secret already exists, but to update the secret whenever its source data has changed since the last reconciliation. Here is the location for the suggested change: https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt/blob/48146e1c41d730c13e159b11d147c9f0f42361e6/controllers/kubevirtmachine_controller.go#L572
I think that's the simplest solution for now.
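A hedged sketch of that "update instead of skip" flow, with plain maps standing in for the two Secrets' .data (function name and shape are assumptions, not the controller's actual code):

```go
package main

import (
	"bytes"
	"fmt"
)

// syncUserDataSecret mirrors the suggested fix: rather than returning early
// when the duplicated secret already exists, compare its "userdata" payload
// with the source secret's "value" payload and rewrite it when they diverge.
// It returns true when an update was needed. The name is hypothetical.
func syncUserDataSecret(src, dup map[string][]byte) bool {
	if bytes.Equal(dup["userdata"], src["value"]) {
		return false // up to date, nothing to do
	}
	dup["userdata"] = src["value"]
	return true
}

func main() {
	src := map[string][]byte{"value": []byte("token-v2")}
	dup := map[string][]byte{"userdata": []byte("token-v1")}
	fmt.Println(syncUserDataSecret(src, dup)) // stale copy gets refreshed
	fmt.Println(syncUserDataSecret(src, dup)) // second pass is a no-op
}
```

In a real controller this comparison would run inside the reconcile loop (for instance via controller-runtime's CreateOrUpdate helper) so the duplicated secret tracks every rotation of its source.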
Long term, I want us to get out of the business of creating that separate secret for the VMs/VMIs entirely. That requires deprecating the ssh and capk user support for the cloud-config types, though.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After lifecycle/stale was applied, lifecycle/rotten is applied
- After lifecycle/rotten was applied, the issue is closed

You can:
- /remove-lifecycle stale
- /lifecycle rotten
- /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/lifecycle rotten
When a new VM is provisioned, it pulls its authorization token from the
user-data-{NAMESPACE}-{HASH}-userdata
secret. This secret is created by the kubevirtmachine controller; the data it contains comes from the user-data-{NAMESPACE}-{HASH}
secret, which is created by the nodepool controller in HyperShift and referenced in the Machine object under the field spec.bootstrap.dataSecretName. The nodepool controller rotates the authorization token every 24h (inside the user-data-{NAMESPACE}-{HASH}
secret), but there is no mechanism for rotating the user-data-{NAMESPACE}-{HASH}-userdata
secret, which means new VMs cannot pull their ignition file. The only way to work around it is to delete the user-data-{NAMESPACE}-{HASH}-userdata
secret.

/kind bug
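To make the failure mode concrete, here is a hedged sketch of the drift after a rotation, with plain maps standing in for the two Secrets' .data (no real HyperShift API involved; the predicate name is hypothetical):

```go
package main

import (
	"bytes"
	"fmt"
)

// tokenIsStale reports whether the duplicated -userdata payload has drifted
// from the source secret's rotated "value" payload. Name is hypothetical.
func tokenIsStale(src, dup map[string][]byte) bool {
	return !bytes.Equal(src["value"], dup["userdata"])
}

func main() {
	source := map[string][]byte{"value": []byte("token-day-1")}
	duplicated := map[string][]byte{"userdata": source["value"]} // created once, never updated

	// 24h later: the nodepool controller rotates only the source secret.
	source["value"] = []byte("token-day-2")

	fmt.Println(tokenIsStale(source, duplicated)) // new VMs would still boot with token-day-1
}
```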