kubernetes-sigs / kubespray

Deploy a Production Ready Kubernetes Cluster
Apache License 2.0

When can we expect ipv6-only support? #5649

Closed · etavene closed this issue 3 years ago

etavene commented 4 years ago

What would you like to be added: I would like to bring up an IPv6-only cluster using kubespray.

Why is this needed: Kubernetes supports IPv6-only configuration with both iptables and IPVS.
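Concretely, "supports IPv6-only with both backends" means every component gets IPv6 CIDRs and bind addresses. A minimal kube-proxy sketch for illustration (the ULA ranges are made up, and this is not something kubespray generates today):

```yaml
# KubeProxyConfiguration for a single-stack IPv6 cluster (sketch; CIDRs are illustrative)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"                      # "iptables" works for single-stack IPv6 as well
bindAddress: "::"                 # listen on all IPv6 interfaces
clusterCIDR: "fd00:10:244::/56"   # pod CIDR; must match the cluster's pod network
```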

starcraft66 commented 4 years ago

I'm looking to do the same and it seems that this playbook is totally broken as it makes ipv4-only assumptions.

For example, this preinstall check fails even though there is an ipv6 address available on the host.

fatal: [node1]: FAILED! => {
    "assertion": false,
    "changed": false,
    "evaluated_to": false,
    "msg": "Do not schedule more pods on a node than inet addresses are available."
}
fatal: [node2]: FAILED! => {
    "assertion": false,
    "changed": false,
    "evaluated_to": false,
    "msg": "Do not schedule more pods on a node than inet addresses are available."
}
fatal: [node3]: FAILED! => {
    "assertion": false,
    "changed": false,
    "evaluated_to": false,
    "msg": "Do not schedule more pods on a node than inet addresses are available."
}
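For reference, this assertion lives in kubespray's preinstall role (roles/kubernetes/preinstall/tasks/0040-verify-settings.yml), which at the time hard-coded the 32-bit IPv4 address space. The following is a sketch reconstructed from that era's source, not verbatim kubespray code:

```yaml
# Sketch of the failing preinstall assertion (reconstructed, not verbatim)
- name: Stop if kubelet_max_pods exceeds the addresses in the per-node pod CIDR
  assert:
    # 2^(32 - prefix) host addresses assumes an IPv4 pod subnet. With an IPv6
    # kube_pods_subnet and a node prefix like 120, (32 - 120) goes negative
    # and the comparison can never hold, so the play fails on IPv6-only hosts.
    that: kubelet_max_pods | int <= (2 ** (32 - kube_network_node_prefix | int)) - 2
    msg: "Do not schedule more pods on a node than inet addresses are available."
```
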
eleblebici commented 4 years ago

> I'm looking to do the same and it seems that this playbook is totally broken as it makes ipv4-only assumptions.
>
> For example, this preinstall check fails even though there is an ipv6 address available on the host.
>
> (same "Do not schedule more pods on a node than inet addresses are available." failure output for node1 through node3 as quoted above)

Right! I also think so. I tried creating an IPv6-only cluster on Fedora CoreOS and hit an error about etcd. The error is the same as the one mentioned here.
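A common trap when pointing etcd at IPv6 addresses is that every URL-valued option needs the literal wrapped in brackets. A sketch of an etcd config file for an IPv6-only node, with placeholder names and addresses:

```yaml
# /etc/etcd/etcd.yaml (sketch; node names and addresses are placeholders)
name: node1
listen-peer-urls: https://[2001:db8::11]:2380
listen-client-urls: https://[2001:db8::11]:2379,https://[::1]:2379
initial-advertise-peer-urls: https://[2001:db8::11]:2380
advertise-client-urls: https://[2001:db8::11]:2379
initial-cluster: node1=https://[2001:db8::11]:2380,node2=https://[2001:db8::12]:2380
```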

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

starcraft66 commented 4 years ago

/remove-lifecycle stale

-Tristan


error10 commented 4 years ago

Kubernetes is dual stack now; kubespray should also support IPv6-only or dual stack deployments. I'll be trying to work on this a bit as we can't go into production without dual stack support.

BabisK commented 4 years ago

> Kubernetes is dual stack now; kubespray should also support IPv6-only or dual stack deployments. I'll be trying to work on this a bit as we can't go into production without dual stack support.

I have also started working on that; adding support for IPv6 only and dual stack. Do you want to coordinate efforts?
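For anyone picking this thread up later: kubespray's sample inventory eventually gained dual-stack variables along these lines. The names and defaults below are from later releases of inventory/sample/group_vars/k8s_cluster/k8s-cluster.yml, so verify them against your version:

```yaml
# group_vars/k8s_cluster/k8s-cluster.yml (dual-stack sketch; check your kubespray release)
enable_dual_stack_networks: true
kube_service_addresses_ipv6: fd85:ee78:d8a6:8607::1000/116
kube_pods_subnet_ipv6: fd85:ee78:d8a6:8607::1:0000/112
kube_network_node_prefix_ipv6: 120
```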

starcraft66 commented 4 years ago

I am currently running single-stack IPv6 kubernetes with kubeadm and would be interested in working on this with you so that I can deploy with kubespray instead.
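For comparison, a single-stack IPv6 kubeadm setup of the kind described here boils down to a config like the following; the subnets use documentation and ULA prefixes and are purely illustrative:

```yaml
# kubeadm init --config=kubeadm-ipv6.yaml (sketch; addresses are illustrative)
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "2001:db8::10"
nodeRegistration:
  kubeletExtraArgs:
    node-ip: "2001:db8::10"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: "fd00:10:244::/56"
  serviceSubnet: "fd00:10:96::/112"
```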

samsalmi commented 3 years ago

Me too, I'm interested in using kubespray with IPv6 only. Is there any progress on this?

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

starcraft66 commented 3 years ago

/remove-lifecycle stale


trickert76 commented 3 years ago

My current workaround is to define a WireGuard interface on all IPv6-only hosts with an IPv4 address on wg0. Of course, this should not be the correct solution for this issue; "native" IPv6-only support in kubespray would be much better.
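A sketch of what that workaround looks like on one host, with placeholder keys and addresses: the tunnel endpoints ride over IPv6, while the inside of the tunnel carries private IPv4 addresses that satisfy kubespray's IPv4 assumptions:

```ini
# /etc/wireguard/wg0.conf on an IPv6-only host (sketch; keys and addresses are placeholders)
[Interface]
# Private IPv4 address carried inside the tunnel
Address = 10.100.0.1/24
PrivateKey = <host1-private-key>
ListenPort = 51820

[Peer]
PublicKey = <host2-public-key>
# The peer's inner IPv4 address
AllowedIPs = 10.100.0.2/32
# Tunnel endpoint reachable over plain IPv6
Endpoint = [2001:db8::2]:51820
PersistentKeepalive = 25
```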

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-ci-robot commented 3 years ago

@k8s-triage-robot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/kubespray/issues/5649#issuecomment-950163718):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues and PRs according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue or PR with `/reopen`
> - Mark this issue or PR as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

starcraft66 commented 3 years ago

/reopen

k8s-ci-robot commented 3 years ago

@starcraft66: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to [this](https://github.com/kubernetes-sigs/kubespray/issues/5649#issuecomment-950189513):

> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

troodes commented 1 year ago

/remove-lifecycle rotten

belohnung commented 1 week ago

Can this be reopened? Or is there a new issue for this / has it been fixed?