Open christianang opened 2 years ago
We discussed this in the 20th July office hours and I'm reporting my take from the discussion (feel free to discuss)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
It is not clear how to address this use case or whether there are contributors willing to invest in this effort, but let's keep this around at least until the bot closes it.
/triage accepted
/help
@fabriziopandini: This request has been marked as needing help from a contributor.
Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
(Doing some cleanup on old issues without updates.)
/close
Unfortunately, no one is picking up the task. The thread will remain available for future reference.
@fabriziopandini: Closing this issue.
I was also trying to change the pod CIDR on a cluster and tried to manipulate the control-plane config directly via:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: cluster-name
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      networking:
        podSubnet: '10.64.0.0/16,100.64.0.0/16'
        serviceSubnet: '10.254.0.0/16'
  [...]
Got a validation webhook denial:
one or more objects failed to apply, reason: admission webhook "validation.kubeadmcontrolplane.controlplane.cluster.x-k8s.io" denied the request: KubeadmControlPlane.controlplane.cluster.x-k8s.io "cluster-name" is invalid:
[spec.kubeadmConfigSpec.clusterConfiguration.networking.podSubnet: Forbidden: cannot be modified,
spec.kubeadmConfigSpec.clusterConfiguration.networking.serviceSubnet: Forbidden: cannot be modified]
I wonder if, as a first step, we should allow this change?
I'd like to be able to add the CIDR range for an additional IP family to spec.clusterNetwork.pods.cidrBlocks and spec.clusterNetwork.services.cidrBlocks. For example, if I have a single-stack IPv4 cluster with IPv4 CIDR ranges, I'd like to be able to add IPv6 CIDR ranges to the cidrBlocks to upgrade my cluster to a dual-stack cluster.
I can give this a try if someone can point me to where this logic lives in the codebase :D
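For illustration, here is roughly what the Cluster spec would look like after such a change (a sketch only; the cluster name and CIDR values are made up):
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster               # hypothetical name
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
        - 192.168.0.0/16         # existing IPv4 pod CIDR (example value)
        - fd00:100:64::/48       # IPv6 pod CIDR added for dual-stack (example value)
    services:
      cidrBlocks:
        - 10.128.0.0/12          # existing IPv4 service CIDR (example value)
        - fd00:100:96::/112      # IPv6 service CIDR added for dual-stack (example value)
Per the discussion above, the corresponding KubeadmControlPlane networking fields are currently rejected as immutable by the validation webhook, so an update like this would also need that restriction relaxed.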
Note: while changing the pod CIDR should be trivial, changing spec.clusterNetwork.services.cidrBlocks may also require regenerating the apiserver certificates, since kubeadm embeds the first IP of the service CIDR (the ClusterIP of the kubernetes Service) in the apiserver certificate SANs. Leaving it here for discussion.
/reopen
(Just to signal discussion is still ongoing on this topic)
@killianmuldoon: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/priority backlog
User Story
As an operator, I would like to update the cluster network configuration for an existing Cluster resource in order to upgrade a single-stack Cluster to a dual-stack Cluster.
Detailed Description
I'd like to be able to add the CIDR range for an additional IP family to spec.clusterNetwork.pods.cidrBlocks and spec.clusterNetwork.services.cidrBlocks. For example, if I have a single-stack IPv4 cluster with IPv4 CIDR ranges, I'd like to be able to add IPv6 CIDR ranges to the cidrBlocks to upgrade my cluster to a dual-stack cluster.
Updating the cidrBlocks on a Cluster needs to be able to reconcile the KubeadmControlPlane with the additional CIDRs. Right now it seems like the KubeadmControlPlane disallows changes to clusterConfiguration.networking.{serviceSubnet,podSubnet}, and updating the cidrBlocks on a Cluster also does not affect the KubeadmControlPlane.
Anything else you would like to add:
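As a rough sketch of the propagation this would need (values are hypothetical), the KubeadmControlPlane would have to end up with the combined subnets in its cluster configuration, joined into comma-separated strings as in the example earlier in this thread:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: cluster-name
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      networking:
        # pods.cidrBlocks joined into a comma-separated subnet list (example values)
        podSubnet: '192.168.0.0/16,fd00:100:64::/48'
        # services.cidrBlocks joined the same way (example values)
        serviceSubnet: '10.128.0.0/12,fd00:100:96::/112'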
/kind feature