Closed philsbln closed 6 months ago
This issue is currently awaiting triage.
If cloud-provider-aws contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
This sounds identical to what kOps does, allowing IPv6 prefix support regardless of which CNI is used. We have been discussing if the kOps address controller should be moved to CCM before.
It's not surprising, as kOps and Gardener both manage cluster lifecycle and orchestrate clusters across different cloud providers, and I agree https://github.com/kubernetes/kops/blob/master/cmd/kops-controller/controllers/awsipam.go looks very much like what we would need to implement if it was not available through the provider extension…
What was the reason to not move it to CCM? It would be a value proposition to a lot of people.
From my side, it's only the time it takes to do it. When it was implemented, it was simpler to get it into kops than into this project.
My preference: I think CCM should just provide the bare minimum of functionality to implement Kubernetes' cloudProvider interface.
IP assignment is better implemented as a standalone controller, which allows it to iterate independently of CCM and allows users to supply different implementations. Are there any technical reasons why this should be in CCM rather than a standalone controller?
This is well within CCM's bailiwick. It's a straightforward, simple reconciliation loop copying any IPv6 prefix assignment from the cloud API to the Kubernetes Node object.
In my opinion this also belongs in the bailiwick of the AWS cloud-controller-manager. Other cloud-controller-managers, like GCP's, have the same understanding.
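To make the "straightforward, simple reconciliation loop" concrete, here is a minimal sketch of the idea: copy the IPv6 prefix delegated to an instance into the Node's PodCIDR. The `CloudAPI` interface, the `Node` struct, and the instance/prefix values are illustrative stand-ins, not the real EC2 or Kubernetes client types.

```go
package main

import (
	"fmt"
	"net/netip"
)

// CloudAPI abstracts the single cloud call the loop needs (in a real
// controller this would wrap something like EC2 DescribeNetworkInterfaces).
type CloudAPI interface {
	IPv6PrefixesForInstance(instanceID string) ([]string, error)
}

// Node is a stand-in for corev1.Node with only the fields the loop touches.
type Node struct {
	Name    string
	PodCIDR string
}

// reconcileNode copies the first delegated IPv6 prefix to PodCIDR.
// It is idempotent: a node that already has a PodCIDR is left alone.
func reconcileNode(cloud CloudAPI, node *Node, instanceID string) error {
	if node.PodCIDR != "" {
		return nil
	}
	prefixes, err := cloud.IPv6PrefixesForInstance(instanceID)
	if err != nil {
		return err
	}
	if len(prefixes) == 0 {
		return fmt.Errorf("node %s: no IPv6 prefix delegated yet", node.Name)
	}
	// Validate the prefix before writing it to the Node object.
	if _, err := netip.ParsePrefix(prefixes[0]); err != nil {
		return fmt.Errorf("node %s: bad prefix %q: %w", node.Name, prefixes[0], err)
	}
	node.PodCIDR = prefixes[0]
	return nil
}

// fakeCloud lets the sketch run without any AWS credentials.
type fakeCloud map[string][]string

func (f fakeCloud) IPv6PrefixesForInstance(id string) ([]string, error) {
	return f[id], nil
}

func main() {
	cloud := fakeCloud{"i-0123456789abcdef0": {"2600:1f13:b0d:7e00::/80"}}
	node := &Node{Name: "node-1"}
	if err := reconcileNode(cloud, node, "i-0123456789abcdef0"); err != nil {
		panic(err)
	}
	fmt.Println(node.PodCIDR) // 2600:1f13:b0d:7e00::/80
}
```

The real loop would additionally watch Node add events and patch the object through the API server, but the data flow is exactly this one-way copy.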
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
@DockToFuture: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen
@philsbln: Reopened this issue.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What would you like to be added:
We would like to integrate an IP address management (IPAM) controller with the AWS cloud-controller-manager to assign pod addresses based on prefix delegations. Our primary use case is IPv6-only clusters, but the concept would also work, with some limitations, for IPv4.
The desired process for assigning IPv6 addresses to pods is as follows: an IPAM controller within the cloud-controller-manager should pick up this prefix delegation through the cloud provider API and write the delegated prefix into the node's PodCIDR attribute. This can be implemented analogously to the GCP implementation, as a custom CloudAllocator.
Why is this needed:
While adding IPv6 support to the Gardener project, we strive to get globally unique IPv6 addresses across all our clusters. We would prefer to integrate the IPv6 IPAM functionality with the cloud provider as much as possible, as using provider-managed IPv6 space also eliminates the need for NAT or routing hacks.
The functionality to use delegated prefixes as PodCIDR is already implemented in amazon-vpc-cni-k8s, which requires API keys for reading/adding prefix delegations to be present on the nodes. That implementation was reasonable for IPv4, where nodes needed to dynamically add multiple prefix delegations in order to preserve precious address space. For IPv6, we only need a single prefix delegation and can add it at the time we create the node, thus eliminating both the risk of exposing API keys through a compromised node and the need to deploy the vpc-cni to the nodes.
/kind feature
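As an aside on why a single delegated prefix per node suffices for IPv6: once the prefix is published as the node's PodCIDR, the CNI can carve individual pod addresses out of it locally, with no further cloud API calls. A small illustrative sketch using Go's standard net/netip package (the prefix value is an example, not a real allocation):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"net/netip"
)

// podAddr returns the n-th address inside a delegated IPv6 prefix — the
// kind of local carving a CNI would do once the prefix is the node's
// PodCIDR. No cloud credentials are needed on the node for this step.
func podAddr(prefix string, n uint64) (netip.Addr, error) {
	p, err := netip.ParsePrefix(prefix)
	if err != nil {
		return netip.Addr{}, err
	}
	b := p.Addr().As16()
	// Add n to the low 64 bits of the base address; an /80 delegation
	// leaves 48 host bits, so this stays inside the prefix for any
	// realistic per-node pod count.
	low := binary.BigEndian.Uint64(b[8:])
	binary.BigEndian.PutUint64(b[8:], low+n)
	addr := netip.AddrFrom16(b)
	if !p.Contains(addr) {
		return netip.Addr{}, fmt.Errorf("address %s escapes prefix %s", addr, p)
	}
	return addr, nil
}

func main() {
	a, err := podAddr("2600:1f13:b0d:7e00::/80", 2)
	if err != nil {
		panic(err)
	}
	fmt.Println(a) // 2600:1f13:b0d:7e00::2
}
```

This is what makes the IPv6 case so much simpler than IPv4: one prefix attached at node creation covers all pods for the node's lifetime.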