sedefsavas opened 3 years ago
@sedefsavas: The provided milestone is not valid for this repository. Milestones in this repository: [Next, v0.6.x, v0.7.0, v0.7.x]
Use /milestone clear to clear the milestone.
/milestone Next
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
/reopen
/lifecycle frozen
@richardcase: Reopened this issue.
/triage accepted
/assign @randomvariable
Reminder: Create issue for dual-stack and replicate this for EKS
/priority important-soon
This gets messy with EKS.
Support should be fixed in Cluster API, not by shoving lots of non-standard behaviour into CAPA; consistency across providers is paramount. We need to break this down. Please start a doc; we have a team at VMware looking into dual-stack across Kubernetes in general who can assist.
@randomvariable - started a doc
/assign
for initial scoping
Are there any updates here?
/unassign randomvariable
/help
@richardcase: This request has been marked as needing help from a contributor.
Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
Are there any updates here?
@yandooo - I think this issue has stalled, so it's open to be worked on. I have marked it as help wanted as well.
@richardcase would it be possible to share the relevant documentation for the topic, if any? As per the comments, there seems to have been some progress towards standardizing CAPI to support dual-stack. Thanks
There isn't a huge amount of documentation. The doc mentioned in the thread was the start of a doc to capture notes, but we didn't get much further. Do you have access to that doc?
@richardcase don't have it. I requested access linking your comment in the request message. Appreciated it if it can be shared. Thanks
Just sent you an email @yandooo :smile:
We might want to address IPv6 and dual-stack as separate problems, as the former could be achieved in the short term.
I tend to agree on getting IPv6 out first to unlock the AWS EKS ipv6 feature and look at dual-stack cohesively later @sedefsavas.
I also think that adding IPv6 first is a good way forward.
/assign
/assign Skarlso
@richardcase: GitHub didn't allow me to assign the following users: Skarlso.
Note that only kubernetes-sigs members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
@Skarlso - this was the issue I was thinking of.
Hi! :)
Let's tackle this! :) We have some experience creating the necessary resources in eksctl, so hopefully, I can be of assistance here.
/assign Skarlso
I would love to collaborate on this too! I worked on planning IPv6 for eksctl and implementing it alongside @Skarlso. :) See: https://github.com/weaveworks/eksctl/issues/4255
Awesome, thanks :+1:
/assign nikimanoledaki
I came across these old PRs for IPv6 support: https://github.com/kubernetes-sigs/cluster-api-provider-aws/pull/1322 https://github.com/kubernetes-sigs/cluster-api-provider-aws/pull/1370
Might be useful.
Also CAPZ IPv6 support implementation: https://github.com/kubernetes-sigs/cluster-api-provider-azure/pull/646
Interesting note:
```go
var subnets []*net.IPNet
for i := 0; i < numSubnets; i++ {
	ip4 := parent.IP.To4()
	if ip4 == nil {
		return nil, errors.Errorf("unexpected IP address type: %s", parent)
	}
	n := binary.BigEndian.Uint32(ip4)
	n += uint32(i) << uint(32-modifiedNetworkLen)
	subnetIP := make(net.IP, len(ip4))
	binary.BigEndian.PutUint32(subnetIP, n)
	subnets = append(subnets, &net.IPNet{
		IP:   subnetIP,
		Mask: net.CIDRMask(modifiedNetworkLen, 32),
	})
}
```
Why did kops assign ip4 inside the loop over and over? 🤔 It should be the same thing every time, since it's never updated, right? :D
Thank you @sedefsavas! Good point of reference. :) I started writing the proposal and have some notes in there about the changes needed. I will coordinate with the rest of the team about questions.
Otherwise, I have a pretty good idea on how to proceed and hopefully will have something to look at over the next couple weeks. :)
@Skarlso - when you feel it's a good time, it would be great to go through the proposal in the office hours.
Will do! I literally wanted to comment just now. :D
The proposal is still a bit WIP, with lots of random comments on what needs to be changed. I successfully wrote some code last night for subnet splitting using IPv6, which I'm still testing. :) I think, for me at least :D, that was the hardest part. :D
The rest, hopefully, will just be moving stuff around and doing the right routing. :)
Exciting times. I can't wait to see the ipv6 subnet splitting :smile:
/remove-lifecycle frozen
/milestone v1.6.0
Can't wait for this to go through :)
Me neither. :D Not much longer now... :))))
IPv6 for EKS has been merged.
Outstanding work is for Unmanaged clusters.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
This issue is labeled with priority/important-soon
but has not been updated in over 90 days, and should be re-triaged.
Important-soon issues must be staffed and worked on either currently, or very soon, ideally in time for the next release.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Deprioritize it with /priority important-longterm or /priority backlog
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
/triage accepted
/priority important-longterm
This issue is labeled with priority/important-soon
but has not been updated in over 90 days, and should be re-triaged.
Important-soon issues must be staffed and worked on either currently, or very soon, ideally in time for the next release.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Deprioritize it with /priority important-longterm or /priority backlog
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
Add support for creating IPv6 clusters, and add an e2e test for it.
/kind feature
/milestone next