kubernetes / enhancements

Enhancements tracking repo for Kubernetes
Apache License 2.0

Add IPv4/IPv6 dual-stack support #563

Closed. leblancd closed this issue 2 years ago.

leblancd commented 6 years ago

Feature Description

Corresponding kubernetes/kubernetes Issue: https://github.com/kubernetes/kubernetes/issues/62822

lachie83 commented 4 years ago

We would like this enhancement to be tracked in 1.20. It will be reimplemented in alpha state according to the updated kep - https://github.com/kubernetes/enhancements/pull/1679. Please track the following PR for the implementation - https://github.com/kubernetes/kubernetes/pull/91824. We are planning to complete the review and merge the PR early in the 1.20 release cycle.

dcbw commented 4 years ago

Latest dual-stack graduation to Beta status as discussed in Sept 17th's SIG Network meeting, for those playing along at home:

All these items are being actively worked on, and 1.20 is still the target for dual-stack API Beta graduation. However despite our best efforts there is always a chance something will not be resolved in time, and if so, SIG Network will decide whether to continue graduation to Beta or not in our public meetings. All are welcome to join.

lachie83 commented 3 years ago

@dcbw thank you very much for the update (sorry I couldn't make the call). Does it make sense to get this enhancement to beta in 1.20, or simply remain in alpha? If we want to go to beta, do the graduation criteria in the KEP still make sense given that this is a reimplementation? https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/20180612-ipv4-ipv6-dual-stack.md#graduation-criteria

russellb commented 3 years ago

> @dcbw thank you very much for the update (sorry I couldn't make the call). Does it make sense to get this enhancement to beta in 1.20, or simply remain in alpha? If we want to go to beta, do the graduation criteria in the KEP still make sense given that this is a reimplementation? https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/20180612-ipv4-ipv6-dual-stack.md#graduation-criteria

It's not really a reimplementation, though. All of the previous work is still valid and the work in 1.20 is building on top of it to finalize the last changes needed that have been identified. My interpretation of the sig-network discussion is that the list @dcbw posted is the set of remaining known issues needed to be resolved for graduation.

kikisdeliveryservice commented 3 years ago

Hi all,

1.20 Enhancements Lead here, I'm going to set this as tracked please update me if anything changes :)

As a reminder Enhancements Freeze is October 6th.

As a note, the KEP is using an old format; we have updated the template here: https://github.com/kubernetes/enhancements/tree/master/keps/NNNN-kep-template

Best, Kirsten

/milestone v1.20

bridgetkromhout commented 3 years ago

Hi, @russellb -

> It's not really a reimplementation, though. All of the previous work is still valid and the work in 1.20 is building on top of it to finalize the last changes needed that have been identified.

Given the API changes in https://github.com/kubernetes/kubernetes/pull/91824, enough is different that marking dual-stack as alpha for 1.20 will allow room for any further re-implementations that prove necessary. I know we're all eager for beta, but let's first land the PR with +9,319 −3,261 and let the dust settle. :)

dcbw commented 3 years ago

> Given the API changes in kubernetes/kubernetes#91824, enough is different that marking dual-stack as alpha for 1.20 will allow room for any further re-implementations that prove necessary. I know we're all eager for beta, but let's first land the PR with +9,319 −3,261 and let the dust settle. :)

@bridgetkromhout yeah, we need to land https://github.com/kubernetes/kubernetes/pull/91824 before we can make any determination about API readiness. I really hope we can do that ASAP.

kinarashah commented 3 years ago

Hi all,

1.20 Enhancement shadow here 👋

Since this Enhancement is scheduled to be in 1.20, please keep in mind these important upcoming dates:

- Friday, Nov 6th: Week 8 - Docs Placeholder PR deadline
- Thursday, Nov 12th: Week 9 - Code Freeze

As a reminder, please link all of your k/k PRs as well as docs PRs to this issue so we can track them.

Thank you!

lachie83 commented 3 years ago

Hi @kinarashah @kikisdeliveryservice - I have confirmed on the sig-network call that we need this reclassified to alpha for 1.20. It's a complete reimplementation that needs time to soak and be tested in alpha stage.

reylejano commented 3 years ago

Hello @lachie83, 1.20 Docs shadow here.

Does this enhancement work planned for 1.20 require any new docs or modification to existing docs?

If so, please follow the steps here to open a PR against the dev-1.20 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Nov 6th.

Also take a look at Documenting for a release to familiarize yourself with the docs requirements for the release.

Thank you!

lachie83 commented 3 years ago

Thanks @reylejano-rxm - we've opened kubernetes/website#24725

reylejano commented 3 years ago

Hi @lachie83

Thanks for creating the docs PR!

Please keep in mind the important upcoming dates:

As a reminder, please link all of your k/k PR as well as docs PR to this issue for the release team to track.

kikisdeliveryservice commented 3 years ago

> Hi @kinarashah @kikisdeliveryservice - I have confirmed on the sig-network call that we need this reclassified to alpha for 1.20. It's a complete reimplementation that needs time to soak and be tested in alpha stage.

Hey @lachie83

Given the above, I presume that this is still intended for alpha as-is? I don't see any outstanding PRs that need to merge, and the work appears to have already been merged.

_Just a reminder that Code Freeze is coming up in 2 days on Thursday, November 12th. All PRs must be merged by that date, otherwise an Exception is required._

Thanks! Kirsten

bridgetkromhout commented 3 years ago

Hi, @kikisdeliveryservice - yes, IPv4/IPv6 dual-stack support (reimplemented) will be alpha for 1.20.

Here's the progress we have for this enhancement:

1. Code is merged from https://github.com/kubernetes/kubernetes/pull/91824 - will be alpha for 1.20
2. Documentation updates covering that code change are in https://github.com/kubernetes/website/pull/24725/ - reviewed and merged into the dev-1.20 branch

Is there anything else needed for 1.20 that we haven't completed on this enhancement?

kikisdeliveryservice commented 3 years ago

@bridgetkromhout Thanks for the clear update, you're all good!

chenwng commented 3 years ago

It looks like LoadBalancerIP in ServiceSpec is not part of the dual-stack implementation yet. Is there any plan to support it or did I miss it?

lachie83 commented 3 years ago

Hi @chenwng - Changes to cloud provider code for Loadbalancers are out of scope currently as defined in the KEP here - https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/20180612-ipv4-ipv6-dual-stack.md#load-balancer-operation.

You can help by providing your use case and suggested changes to understand and decide if we need to make any modifications to the KEP.

aramase commented 3 years ago

@chenwng There is a KEP being worked on for LoadBalancerIPs in dual-stack clusters - https://github.com/kubernetes/enhancements/pull/1992

chenwng commented 3 years ago

Thanks for the info, @aramase , @lachie83 .

fmuyassarov commented 3 years ago

Hi, is there a concrete plan to move dual-stack to beta in 1.21/1.22?

lachie83 commented 3 years ago

Hi @fmuyassarov - We are working with sig-network to determine the plan to move this feature to beta in 1.21. Would you like to see this enhancement go to beta?

bridgetkromhout commented 3 years ago

KEP: 20180612-ipv4-ipv6-dual-stack

KEP has been updated to the new format: https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/563-dual-stack

bridgetkromhout commented 3 years ago

We're planning to move dual-stack from alpha to beta for 1.21.

Updates to this enhancements issue:

fmuyassarov commented 3 years ago

> Hi @fmuyassarov - We are working with sig-network to determine the plan to move this feature to beta in 1.21. Would you like to see this enhancement go to beta?

Hi @lachie83 . Yes, would be nice to see dual-stack graduating to beta in 1.21.

JamesLaverack commented 3 years ago

:wave: Hey @lachie83, 1.21 release team enhancements shadow here.

We're currently tracking this KEP for graduation to beta Kubernetes 1.21, and the only issue I can see is the production readiness review. I'm aware there's an ongoing pull request to add this though: https://github.com/kubernetes/enhancements/pull/2327. We'll continue to track this in the meantime.

As one query: The kep.yaml states that SIG Cluster Lifecycle are participating. Do they need to do any work on this, and if so are they signed on for this release?

bridgetkromhout commented 3 years ago

Hi, @JamesLaverack - it looks like SIG Cluster Lifecycle's inclusion may be outdated. Aside from the sig-network involvement, the only other place we might be looking for confirmation of the testing is from sig-testing's kind subproject. I think @aojea can confirm whether we should change https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/563-dual-stack/kep.yaml#L11 to sig-testing or perhaps remove it.

aojea commented 3 years ago
> - sig-cluster-lifecycle

they've implemented some bits in kubeadm, but they should not do anything else, @neolit123 can you confirm?

> Aside from the sig-network involvement, the only other place we might be looking for confirmation of the testing is from sig-testing's kind subproject

kind has dual-stack as a priority for the next release https://github.com/kubernetes-sigs/kind/issues/2024 and has been running a periodic job with an "unofficial" version for several months cc: @BenTheElder

Technically we are set and both SIGs are aware; I can't say whether we need to do anything else or this is enough.
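For context, kind's dual-stack mode is enabled through its cluster configuration. A minimal sketch (field names follow kind's v1alpha4 config API; values are illustrative, and the feature was still maturing at the time of this thread):

```yaml
# Minimal kind cluster config enabling dual-stack networking.
# Requires a kind release with dual-stack support.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  ipFamily: dual
```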

neolit123 commented 3 years ago

@aojea

> they've implemented some bits in kubeadm, but they should not do anything else, @neolit123 can you confirm?

for kubeadm the dual stack support is tracked here: https://github.com/kubernetes/kubeadm/issues/1612

i see some remaining tasks in the punch card there and the kubeadm feature gate is still alpha

cc @Arvinderpal do you have time to work on these updates for 1.21, or perhaps you can delegate to someone else from SIG Net or SIG CL?

Arvinderpal commented 3 years ago

@neolit123 I have not been following dual-stack developments recently and I'm not sure if any of the tasks in https://github.com/kubernetes/kubeadm/issues/1612 are still relevant. It may be best if someone like @aojea or others from SIG Net take a look.

aojea commented 3 years ago

replied in the kubeadm issue; I think only the kubeadm docs are missing, the rest is good

neolit123 commented 3 years ago

thanks. ok, looks like:

> Add documentation on enabling dual-stack via kubeadm

is still a viable item.

if the core dual-stack feature gate is moving to Beta in 1.21, we should move the kubeadm gate too.
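For the kubeadm docs item above, enabling dual-stack at that time looked roughly like the following ClusterConfiguration sketch (an illustration, not official docs: the 1.21-era feature gate was named IPv6DualStack, and the CIDR values here are examples only):

```yaml
# Sketch of a kubeadm ClusterConfiguration for a dual-stack cluster.
# Pod and service subnets carry one CIDR per IP family, comma-separated.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
featureGates:
  IPv6DualStack: true          # gate was still required pre-GA
networking:
  podSubnet: 10.244.0.0/16,fd00:10:244::/56
  serviceSubnet: 10.96.0.0/16,fd00:10:96::/112
```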

JamesLaverack commented 3 years ago

Hey @lachie83, enhancements 1.21 shadow here again,

Enhancements Freeze is 2 days away, Feb 9th EOD PST

The enhancements team is aware that the KEP update is currently in progress (PR #2327). Please make sure to work on the PRR questionnaire and requirements and get the PR merged before the freeze. For PRR-related questions or to boost the PR for PRR review, please reach out in Slack on the #prod-readiness channel.

Any enhancements that do not complete the following requirements by the freeze will require an exception.

Thanks all for the clarification on your participating SIGs too. :)

bridgetkromhout commented 3 years ago

Hi, @JamesLaverack - as we discussed on Slack we have https://github.com/kubernetes/enhancements/pull/2327 (our PRR) merged. ~~I think according to your checklist we should now be listed as Tracked for 1.21, though right now I still see us listed as At Risk. Thanks.~~ Edit: I see we're now tracked! Thanks.

JamesLaverack commented 3 years ago

Hi @bridgetkromhout, thanks for the notification. I've taken a quick look and with that merged you're correct that you've covered everything. I've updated the 1.21 tracking sheet to be "Tracked" for this enhancement.

JamesLaverack commented 3 years ago

Hi @lachie83,

Since your Enhancement is scheduled to be in 1.21, please keep in mind the important upcoming dates:

As a reminder, please link all of your k/k PR(s) and k/website PR(s) to this issue so we can track them.

Thanks!

pacoxu commented 3 years ago

/assign for https://github.com/kubernetes/kubeadm/issues/1612#issuecomment-773906850, I will work on the kubeadm part.

reylejano commented 3 years ago

Doc PR for 1.21 is k/website PR 26675

JamesLaverack commented 3 years ago

Hi @lachie83

Enhancements team is marking this enhancement as "At Risk" for the upcoming code freeze due to not seeing any linked k/k PR(s) for this enhancement. (Unless I've missed them! Please tell me if I have.)

Please make sure to provide all k/k PR(s) and k/website PR(s) to this issue so it can be tracked by the release team.

P.S. Should I be tagging others for updates about this? @bridgetkromhout? @aojea maybe? I only have Lachie down in our spreadsheet as a contact but I'm happy to ping others if requested too.

bridgetkromhout commented 3 years ago

Hi @JamesLaverack this enhancement is intended to graduate from alpha to beta; it's not a new code addition in k/k. We filed the PRR and I'll open the placeholder docs PR. What other updates were you looking for? Thanks.

bridgetkromhout commented 3 years ago

The k/k PR for this change (alpha to beta, putting feature gate on by default) is in https://github.com/kubernetes/kubernetes/pull/98969.

JamesLaverack commented 3 years ago

Hi @bridgetkromhout.

Thank you for the clarification. I wasn't sure what k/k code changes were required and I missed the link for https://github.com/kubernetes/kubernetes/pull/98969. But as that's merged and there are no other changes I've flipped this enhancement back to "Tracked" (from "At Risk") and marked it as done for code freeze.

bridgetkromhout commented 3 years ago

Thanks, @JamesLaverack. I also just submitted a 1.21 docs update in https://github.com/kubernetes/website/pull/26826 and didn't put it in draft, as it's ready for review at this time.

xuzhenglun commented 3 years ago

I wonder whether the service default/kubernetes will enable dual-stack in the future?

Here is my use case:

I have a dual-stack cluster whose primary stack is IPv6. Recently, some IPv4-only machines had to join the cluster to reuse the control plane. In this case, Pods scheduled to an IPv4-only machine cannot connect to kube-apiserver because default/kubernetes is SingleStack with IPv6 only.

For now, I've implemented a mutating webhook that injects the environment variables KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT into Pods to work around this issue.
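The core of such a webhook can be sketched as follows. This is a minimal illustration, not the commenter's actual code: it builds the JSONPatch that a mutating admission webhook would return to inject the two env vars into every container of an incoming Pod. The apiserver address and helper names are assumptions; only the AdmissionReview v1 response shape follows the real API.

```python
import base64
import json


def build_patch(pod, host="10.96.0.1", port="443"):
    """Return a JSONPatch adding KUBERNETES_SERVICE_HOST/PORT to each container.

    `pod` is the Pod object from the AdmissionReview request; `host` here is an
    illustrative IPv4 apiserver address, not a real cluster value.
    """
    patch = []
    for i, container in enumerate(pod["spec"]["containers"]):
        # Create the env list first if the container doesn't have one.
        if "env" not in container:
            patch.append({"op": "add", "path": f"/spec/containers/{i}/env", "value": []})
        for name, value in (("KUBERNETES_SERVICE_HOST", host),
                            ("KUBERNETES_SERVICE_PORT", port)):
            patch.append({
                "op": "add",
                "path": f"/spec/containers/{i}/env/-",
                "value": {"name": name, "value": value},
            })
    return patch


def admission_response(request_uid, patch):
    """Wrap a JSONPatch in an AdmissionReview v1 response body."""
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": request_uid,
            "allowed": True,
            "patchType": "JSONPatch",
            # The patch must be base64-encoded JSON per the admission API.
            "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
        },
    }
```

The HTTP handler wiring (TLS, reading the AdmissionReview request) is omitted; any webhook framework can serve these two functions.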

aojea commented 3 years ago

> Here is my use case:
>
> I have a dual-stack cluster whose primary stack is IPv6. Recently, some IPv4-only machines had to join the cluster to reuse the control plane. In this case, Pods scheduled to an IPv4-only machine cannot connect to kube-apiserver because default/kubernetes is SingleStack with IPv6 only.
>
> For now, I've implemented a mutating webhook that injects the environment variables KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT into Pods to work around this issue.

@thockin @danwinship @khenidak ^^^

danwinship commented 3 years ago

Yes, there's an issue for dual-stack apiserver now (#2438) but when I started the initial KEP writeup I realized I needed to figure out more about how the apiserver endpoint reconciling worked, and then I've only just now gotten back to finishing that...

bridgetkromhout commented 3 years ago

@JamesLaverack can you have this issue placed into the 1.22 release milestone and have it tracked for 1.22? We intend for this feature to move from beta to stable in 1.22. Thanks!

wzshiming commented 3 years ago

I have a patch related to this KEP (#2661): I'd like to add the status.hostIPs field so a Pod can learn its node's IPv6 address.

JamesLaverack commented 3 years ago

> can you have this issue placed into the 1.22 release milestone and have it tracked for 1.22? We intend for this feature to move from beta to stable in 1.22. Thanks!

@bridgetkromhout 🎉 That's great. 😄 Can I ask you to get SIG Network to put it on the opt-in list on the 1.22 tracking sheet?

thockin commented 3 years ago

Discussed today - we agreed to wait until 1.23

thockin commented 3 years ago

We have at least one blocking issue: the "PreferDualStack" repair loop.
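For reference, the PreferDualStack policy mentioned above is one of the values of the ipFamilyPolicy field introduced by this KEP's reimplementation. A dual-stack Service under the beta API can be sketched like this (resource and selector names are illustrative):

```yaml
# Illustrative Service using the dual-stack fields. With
# ipFamilyPolicy: PreferDualStack, the apiserver assigns cluster IPs
# from both families when the cluster supports it, and falls back to
# a single family otherwise.
apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical name
spec:
  ipFamilyPolicy: PreferDualStack
  selector:
    app: my-app           # hypothetical selector
  ports:
    - port: 80
```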