kubernetes / enhancements

Enhancements tracking repo for Kubernetes
Apache License 2.0

Add IPv4/IPv6 dual-stack support #563

Closed leblancd closed 2 years ago

leblancd commented 6 years ago

Feature Description

Corresponding kubernetes/kubernetes Issue: https://github.com/kubernetes/kubernetes/issues/62822

leblancd commented 6 years ago

Cross Reference with kubernetes/kubernetes: Issue #62822

justaugustus commented 6 years ago

Thanks for the update!

/assign @leblancd
/kind feature
/sig network
/milestone 1.11

idvoretskyi commented 6 years ago

@leblancd any design document available?

/cc @thockin @dcbw @luxas @kubernetes/sig-network-feature-requests

leblancd commented 6 years ago

@idvoretskyi - No design doc yet, but we'll start collaborating on one shortly.

sb1975 commented 6 years ago

Does this mean Kubernetes Ingress will support dual-stack? Does this mean the CNI plugin (e.g. Calico) would need to run dual-stack (both BIRD and BIRD6 daemons, for example)?

leblancd commented 6 years ago

@sb1975 - Regarding dual-stack ingress support, that's something we'll need to hash out, but here are my preliminary thoughts:

sb1975 commented 6 years ago

@leblancd : So here is the scenario :

  1. Let's say we will use the NGINX ingress controller.
  2. I am exposing my services via Ingress.
  3. I am running my pods configured for dual-stack.
  4. I am trying to reach the service remotely using A and AAAA DNS records, and hope both of these resolve.
  5. In summary: I want to connect to pod interfaces using either IPv4 or IPv6 addresses, as resolved by my own queries for A and/or AAAA records for the pod's service name. Can I get involved in this initiative for testing, documentation, and architecture? I need some guidance, and would like to know how to follow the progress of this work.
leblancd commented 6 years ago

@sb1975 - Good question re. the NGINX ingress controller with dual-stack. I'm not an expert on the NGINX ingress controller (maybe someone more familiar can jump in), but here's how I would see the work flow:

As for helping and getting involved, this would be greatly appreciated! We're about to start working in earnest on dual-stack (it's been a little delayed by the work in getting CI working for IPv6-only). I'm hoping to come out with an outline for a spec (Google Doc or KEPs WIP doc) soon, and would be looking for help in reviewing, and maybe writing some sections. We'll also DEFINITELY need help with official documentation (beyond the design spec), and with defining and implementing dual-stack E2E tests. Some of the areas which I'm still a bit sketchy on for the design include:

We're also considering an intermediate "dual-stack at the edge" (with IPv6-only inside the cluster) approach, where access from outside the cluster to K8s services would be dual-stack, but this would be mapped (e.g. via NGINX ingress controller) to IPv6-only endpoints inside the cluster (or use stateless NAT46). Pods and services in the cluster would need to be all IPv6, but the big advantage would be that dual-stack external access would be available much more quickly from a time-to-market perspective.

caseydavenport commented 6 years ago

/milestone 1.12

justaugustus commented 6 years ago

@leblancd / @caseydavenport - I'm noticing a lot of discussion here and a milestone change. Should this be pulled from the 1.11 milestone?

leblancd commented 6 years ago

@justaugustus - Yes, this should be moved to 1.12. Do I need to delete a row in the release spreadsheet, or is there anything I need to do to get this changed?

justaugustus commented 6 years ago

@leblancd I've got it covered. Thanks for following up! :)

justaugustus commented 6 years ago

@leblancd @kubernetes/sig-network-feature-requests --

This feature was removed from the previous milestone, so we'd like to check in and see if there are any plans for this in Kubernetes 1.12.

If so, please ensure that this issue is up-to-date with ALL of the following information:

Set the following:

Please note that the Features Freeze is July 31st, after which any incomplete Feature issues will require an Exception request to be accepted into the milestone.

In addition, please be aware of the following relevant deadlines:

Please make sure all PRs for features have relevant release notes included as well.

Happy shipping!

/cc @justaugustus @kacole2 @robertsandoval @rajendar38

justaugustus commented 6 years ago

@leblancd -- Feature Freeze is today. Are you planning on graduating this to Beta in Kubernetes 1.12? If so, can you make sure everything is up-to-date, so I can include it on the 1.12 Feature tracking spreadsheet?

leblancd commented 6 years ago

Hi @justaugustus - Beta status will need to slip into Kubernetes 1.13. We are making (albeit slow) progress on the design KEP (https://github.com/kubernetes/community/pull/2254), and we're getting close to re-engaging with the CI test PR, but the Kubernetes 1.12 target was a bit too optimistic.

I'll update the description/summary above with the information you requested earlier. Thank you for your patience.

justaugustus commented 6 years ago

/remove-stage alpha
/stage beta

justaugustus commented 6 years ago

No worries, @leblancd. Thanks for the update!

navjotsingh83 commented 6 years ago

Hi, @justaugustus @leblancd

I just read the update that the beta is moved to 1.13 for dual-stack. What is the expected release date of 1.13? We are actually looking for dual-stack support; it's a go/no-go decision for our product to move to containers.

leblancd commented 6 years ago

@navjotsingh83 - I don't think the release date for Kubernetes 1.13 has been solidified. I don't see 1.13 listed in the Kubernetes releases documentation.

AishSundar commented 5 years ago

@navjotsingh83 @leblancd the 1.13 release schedule is published. It's a short release cycle, with code freeze on Nov 15th. Do you think that's enough time to graduate this feature to Beta? Can you please update this issue with your level of confidence and what's pending in terms of code, test, and docs completion?

AishSundar commented 5 years ago

As per discussion in the SIG Network meeting, though there will be considerable work done on this feature in 1.13, it is not expected to go to Beta in 1.13. Removing the milestone accordingly.

/milestone clear

AishSundar commented 5 years ago

@kacole2 to remove this from 1.13 enhancements spreadsheet

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

caseydavenport commented 5 years ago

/remove-lifecycle stale

claurence commented 5 years ago

@leblancd Hello - I'm the enhancements lead for 1.14 and I'm checking in on this issue to see what work (if any) is being planned for the 1.14 release. Enhancements freeze is Jan 29th, and a reminder that all enhancements must have a KEP.

KevinAtDesignworx commented 5 years ago

@leblancd Wanted to follow up on your prior comment relative to creating a delineation at the edge of the cluster for IPv4/IPv6:

“We're also considering an intermediate "dual-stack at the edge" (with IPv6-only inside the cluster) approach, where access from outside the cluster to K8s services would be dual-stack, but this would be mapped (e.g. via NGINX ingress controller) to IPv6-only endpoints inside the cluster (or use stateless NAT46). Pods and services in the cluster would need to be all IPv6, but the big advantage would be that dual-stack external access would be available much more quickly from a time-to-market perspective.”

This use case would be a good fit for a current project, so I wanted to get your thoughts on the timeframe, and see whether I or someone in our group could contribute to help with this quicker time-to-market path.

steebchen commented 5 years ago

@KevinAtDesignworx If the edge-dual-stack but internal IPv6-only approach can still reach external IPv4 hosts from inside a container (i.e. curl -v 93.184.216.34 -H "Host: example.com"), I genuinely think it's the best approach. If your infrastructure can use IPv6, why bother using IPv4 except at the edge for compatibility reasons? However, if this approach means that I cannot reach legacy IPv4-only websites from inside my cluster, I'm not so sure anymore.
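For context on how NAT64/DNS64 makes that kind of request work: a DNS64 resolver synthesizes AAAA records by embedding the IPv4 address into the low 32 bits of a /96 prefix (commonly the well-known prefix 64:ff9b::/96 from RFC 6052), and the NAT64 gateway translates traffic sent to those addresses back to IPv4. A minimal sketch of the address mapping, using the IPv4 address from the curl example above (the function name here is my own illustration):

```python
import ipaddress

WELL_KNOWN_PREFIX = "64:ff9b::"  # RFC 6052 NAT64 well-known /96 prefix

def synthesize_nat64(ipv4: str, prefix: str = WELL_KNOWN_PREFIX) -> str:
    """Embed an IPv4 address into the low 32 bits of a NAT64 /96 prefix."""
    v4_bits = int(ipaddress.IPv4Address(ipv4))
    v6_int = int(ipaddress.IPv6Address(prefix)) | v4_bits
    return str(ipaddress.IPv6Address(v6_int))

print(synthesize_nat64("93.184.216.34"))  # -> 64:ff9b::5db8:d822
```

An IPv6-only pod resolving an IPv4-only site through DNS64 would receive a synthesized AAAA record like this, so the curl above succeeds without any IPv4 route on the node itself.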

schmitch commented 5 years ago

Well, there is 464XLAT, so IPv6-only inside the container would be feasible.

leblancd commented 5 years ago

@KevinAtDesignworx - If using an ingress controller would work in your scenario, it's possible to configure an NGINX ingress controller for dual-stack operation from outside (proxying to single-family inside the cluster): https://github.com/leblancd/kube-v6#installing-a-dual-stack-ingress-controller-on-an-ipv6-only-kubernetes-cluster

The ingress controllers would need to run on the host network of each node, so the controllers would need to be set up as a daemonset (one ingress controller on each node). This assumes:

This would be in addition to a NAT64/DNS64 for connections from V6 clients inside the cluster to external IPv4-only servers.

Stateless NAT46 is also an option, but I haven't tried that, so I don't have any config guides for that.
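A host-network ingress controller daemonset as described above might look roughly like the following. This is only a sketch: the image, namespace, and labels are illustrative, and the linked kube-v6 guide is the authoritative configuration.

```yaml
# Sketch: run the ingress controller on every node's host network so that
# the nodes' IPv4/IPv6 addresses (published as A/AAAA records) reach it directly.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      hostNetwork: true          # bind to each node's IPv4 and IPv6 addresses
      containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:v1.0.0  # illustrative tag
        ports:
        - containerPort: 80
        - containerPort: 443
```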

kacole2 commented 5 years ago

@leblancd any work planned here for 1.15? Looks like a KEP hasn't been accepted yet at this point either. Thanks!

GeorgeGuo2018 commented 5 years ago

@leblancd - Regarding the "dual-stack at the edge" (IPv6-only inside the cluster) approach quoted above:

From inside a container (which is IPv6-only), sending a curl request to an IPv4 address outside the cluster (i.e. curl -v 93.184.216.34 -H "Host: example.com") will, I think, fail with an unknown-destination or destination-unreachable error, unless an IPv4 route exists on the host where the container runs.

schmitch commented 5 years ago

@GeorgeGuo2018 if k8s implemented DNS64/NAT64 it would work. It heavily depends on how far k8s will go with 464XLAT/PLAT solutions and what would need to be handled at edge routers, etc.

Actually, I think it would be possible to use a DaemonSet/Deployment that uses host networking and runs Tayga inside the kube-system namespace, so that the internal DNS64 would use Tayga to reach outside the network.
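For reference, a Tayga instance providing the NAT64 half of such a setup needs only a few directives in its config file; a minimal example (the addresses are illustrative, and the prefix must match whatever the DNS64 resolver synthesizes):

```
# /etc/tayga.conf -- minimal NAT64 setup (illustrative addresses)
tun-device nat64
ipv4-addr 192.168.255.1          # Tayga's own IPv4 address on the tun device
prefix 64:ff9b::/96              # NAT64 prefix matching the DNS64 resolver
dynamic-pool 192.168.255.0/24    # IPv4 pool for dynamically mapped IPv6 hosts
data-dir /var/db/tayga
```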

chrisnew commented 5 years ago

Sounds like a solution to me.

We run an IPv6-only network internally and NAT64/DNS64 works quite well for us. For some legacy stuff where there was no IPv6 support at all, we ended up using clatd directly where it was needed. (In our case directly on a VM.)

lachie83 commented 5 years ago

@kacole2 - I would like this tracked for 1.15. I'm working to get the following PR merged - https://github.com/kubernetes/enhancements/pull/808

Specifically for 1.15 we would be adding support for the following:

cmluciano commented 5 years ago

cc @caseydavenport for milestone tracking ^

lachie83 commented 5 years ago

@kacole2 the KEP is now merged. Let me know if there is anything else we need to get this tracked in 1.15

simplytunde commented 5 years ago

Hey @leblancd @lachie83 Just a friendly reminder we're looking for a PR against k/website (branch dev-1.15) due by Thursday, May 30. It would be great if it's the start of the full documentation, but even a placeholder PR is acceptable. Let me know if you have any questions!

GeorgeGuo2018 commented 5 years ago

“@kacole2 the KEP is now merged. Let me know if there is anything else we need to get this tracked in 1.15”

@lachie83 Hi Lachie, did you mean that the IPv4/IPv6 dual-stack support described in this KEP is finished?

GeorgeGuo2018 commented 5 years ago

“@kacole2 the KEP is now merged. Let me know if there is anything else we need to get this tracked in 1.15”

Actually, I want to figure out whether dual-stack support will definitely be added in Kubernetes 1.15.

simplytunde commented 5 years ago

@leblancd The placeholder PR against k8s.io dev-1.15 is due Thursday May 30th.

GeorgeGuo2018 commented 5 years ago

“@leblancd The placeholder PR against k8s.io dev-1.15 is due Thursday May 30th.”

Could I assume that dual-stack support will be available in release 1.15?

simplytunde commented 5 years ago

@GeorgeGuo2018 It is still on the enhancement sheet for 1.15, but only the enhancements lead @kacole2 can provide you with better details on that.

kacole2 commented 5 years ago

Hi @lachie83 @leblancd. Code Freeze is Thursday, May 30th 2019 @ EOD PST. All enhancements going into the release must be code-complete, including tests, and have docs PRs open.

Please list all current k/k PRs so they can be tracked going into freeze. If the PRs aren't merged by freeze, this feature will slip for the 1.15 release cycle. Only release-blocking issues and PRs will be allowed in the milestone.

I see kubernetes/kubernetes#62822 in the original post is still open. Are there other PRs we are expecting to be merged as well?

If you know this will slip, please reply back and let us know. Thanks!

lachie83 commented 5 years ago

@simplytunde - Appreciate the heads up. I am working on getting the docs PR together this week.

lachie83 commented 5 years ago

@GeorgeGuo2018 - This is going to be a multi-release KEP. We plan on landing phase 1 in 1.15. Please take a look at the implementation plan in the KEP for further detail - https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/20180612-ipv4-ipv6-dual-stack.md#implementation-plan.
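Phase 1 of the KEP centers on components becoming aware of multiple Pod IPs and CIDRs, one per IP family. As an illustration of the kind of validation that implies (this sketch is mine, not the actual kubernetes/kubernetes code), a comma-separated dual-stack CIDR list should contain at most one CIDR per family:

```python
import ipaddress

def validate_dual_stack_cidrs(cidr_list: str):
    """Parse a comma-separated CIDR list, requiring at most one per IP family."""
    cidrs = [ipaddress.ip_network(c.strip()) for c in cidr_list.split(",")]
    families = [c.version for c in cidrs]  # 4 or 6 for each entry
    if len(families) != len(set(families)):
        raise ValueError("at most one CIDR per IP family is allowed")
    return cidrs

# e.g. a dual-stack cluster-CIDR style value (one IPv4 range, one IPv6 range):
print(validate_dual_stack_cidrs("10.244.0.0/16, fd00:10:244::/64"))
```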

lachie83 commented 5 years ago

@simplytunde - I've created the initial placeholder docs PR here with a WIP https://github.com/kubernetes/website/pull/14600. I plan to complete and have it ready for review over the next couple of days.

lachie83 commented 5 years ago

@kacole2 Thanks for the ping. I've updated the 1.15 enhancements spreadsheet with the k/k PR that we are tracking (https://github.com/kubernetes/kubernetes/pull/73977) along with the draft docs PR (https://github.com/kubernetes/website/pull/14600). We are still currently on track to get this PR merged before code freeze. Let me know if I'm missing anything else.

lachie83 commented 5 years ago

@kacole2 after discussion with @claurence and the release team we've decided to remove this from the 1.15 milestone. Please go ahead and remove it and update the spreadsheet as appropriate. Thanks for all your assistance thus far.

kacole2 commented 5 years ago

/milestone clear

lachie83 commented 5 years ago

@simplytunde I've also commented on the docs PR. Can you please make sure that's removed from the 1.15 milestone also?