Closed: m-messiah closed this pull request 2 years ago
Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all
The PR is currently a draft and is waiting for further integration tests of the new feature
I am trying to understand this feature. In the original design, we intended to make the clusterversionName field immutable in the VC CRD. Are you targeting a use case where the user changes the clusterversionName? Or do you want to handle cases where the clusterversion CR itself is updated and you want to propagate the change to all the running VCs?
The second. I want (as a cluster operator) to release new minor versions of a clusterversion by adding or updating apiserver flags, bumping minor versions of the apiserver or controller-manager, or adjusting service parameters, following the normal process of software upgrades or sudden security patches. So, with this feature we are trying to implement a way to propagate the current ClusterVersion (CR) state to the virtual clusters that use that version and have agreed to be upgraded (by setting a label).
For the foreseeable future it is intended to be used for minor upgrades only, keeping separate ClusterVersion CRs for different major Kubernetes versions.
Even now, it is already useful for changing log verbosity or toggling authorisation parameters, which in any case "are managed from outside".
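For illustration, the opt-in could look like the sketch below. The API group, label key, and names are assumptions based on this description, not necessarily what the PR implements:

```yaml
# Hypothetical sketch of a VirtualCluster opting in to ClusterVersion
# rollouts. The label key and all names here are placeholders; check the
# PR/tutorial for the actual label the controller watches.
apiVersion: tenancy.x-k8s.io/v1alpha1
kind: VirtualCluster
metadata:
  name: sample-vc
  labels:
    tenancy.x-k8s.io/cluster-version-upgrade: "true"  # opt-in label (placeholder)
spec:
  clusterVersionName: cv-minor-1   # the ClusterVersion CR this VC tracks
```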
Thanks. Just FYI, we are trying to resolve the problems you mentioned using CAPN. We didn't handle VC updates in the native provisioner because it was designed for PoC purposes, and we don't recommend using it in production; we want to keep it simple. As you may see, the etcd Pod there does not even use persistent storage. I am glad you are looking into the update problem, but I would suggest enhancing the CAPN provider instead.
@Fei-Guo Per our chats in Slack, this is more of a stop-gap while we haven't been able to move over to CAPN. I think it's worthwhile to move this forward in the interim, since it doesn't necessarily hurt anything. We've been able to overcome a lot of those challenges, for example by maintaining our own custom ClusterVersions with persistence, and we're trying to keep etcd upgrades out of scope for the vc-manager.
Do you still object to this development?
Thanks for the clarification. I have no problem with enhancing ClusterVersion-based VC lifecycle management, but I hope the feature scope will be made very clear. As we discussed offline, we may want to name the feature more specifically, something like "ClusterVersionPartialUpdate" or a better name.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: christopherhein, m-messiah
The full list of commands accepted by this bot can be found here.
The pull request process is described here
/lgtm
Also, please add a document to the tutorial describing the workflow. Based on my understanding, the workflow for triggering an update is: 1) change the clusterversion CR; 2) add a label to the VC CR, which is not obvious to many users.
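A minimal sketch of that two-step workflow, using the same placeholder names as above (the actual label key should come from the tutorial):

```sh
# 1) Update the ClusterVersion CR, e.g. change an apiserver flag or image.
kubectl edit clusterversion cv-minor-1

# 2) Opt the VirtualCluster in to the rollout by adding the (placeholder) label.
kubectl label virtualcluster sample-vc tenancy.x-k8s.io/cluster-version-upgrade=true
```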
What this PR does / why we need it:

The PR adds the featureGate `ClusterVersionPartialUpgrade` to allow the native provisioner to reconcile currently running virtual clusters and re-apply the ClusterVersion spec to their control planes.
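Assuming the vc-manager follows the standard Kubernetes feature-gate convention (an assumption; check the manager's actual command-line flags), enabling the gate would look roughly like:

```sh
# Assumption: the manager exposes a Kubernetes-style --feature-gates flag.
vc-manager --feature-gates=ClusterVersionPartialUpgrade=true
```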
Changes to the default behaviour or code:

The PR introduces changes to the current native provisioner (the Aliyun provisioner just returns `not implemented` for the feature):

Features of the featureGate