Closed: cerebrate closed this issue 6 months ago.
This issue is currently awaiting triage.
If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
/sig network
Clarification: this does work using a default IPv4 configuration. It's only bringing IPv6 into it that seems to make it fail.
Can you validate from the nodes that plain connectivity works and that you are able to connect to the apiserver? First, try the advertised address:
curl -k -v https://[fdc9:b01a:9d26:0:8aae:ddff:fe0a:99d8]:6443
and if that works, try the service address:
curl -k -v https://[fdc9:b01a:cafe:60::1]:443
Connecting to the API server at the advertised address works fine; from the service address does not.
hmm, one thing is weird
I0309 23:20:38.220948 1 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
but it seems your ip6tables rules are generated correctly:
-A KUBE-SERVICES -d fdc9:b01a:cafe:60::1/128 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
goes to
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s fdc9:b01a:cafe::/56 -d fdc9:b01a:cafe:60::1/128 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https -> [fdc9:b01a:9d26:0:8aae:ddff:fe0a:99d8]:6443"
and to
-A KUBE-SEP-ZW3YEZJQTUKK7ANJ -s fdc9:b01a:9d26:0:8aae:ddff:fe0a:99d8/128 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZW3YEZJQTUKK7ANJ -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination [fdc9:b01a:9d26:0:8aae:ddff:fe0a:99d8]:6443
so it should redirect the traffic
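You can also check whether packets are actually reaching those chains by looking at the rule counters, with something along these lines (adjust the chain name to whatever your own dump shows):
ip6tables -t nat -L KUBE-SERVICES -n -v | grep fdc9:b01a:cafe:60::1
ip6tables -t nat -L KUBE-SVC-NPX46M4PTMTKRN6Y -n -v
If the packet counters stay at zero while you run the curl against the service address, the traffic is presumably never reaching those nat rules at all.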
do you have ipv6 forwarding enabled?
sysctl -w net.ipv6.conf.all.forwarding=1
we run IPv6-only CI using kubeadm with kind and it is working: https://testgrid.k8s.io/conformance-kind#kind%20(IPv6),%20master%20(dev)
I do:
cluster@princess-celestia:~$ cat /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
cluster@princess-celestia:~$ cat /proc/sys/net/ipv6/conf/all/forwarding
1
(same results on all nodes). If the iptables rules are doing the right thing (I must confess I'm not as up on iptables as I ought to be), then, well, it's a puzzler to me.
I've had dual-stack clusters with IPv6 primary working in the past (k8s 1.27, earlier versions of Debian bookworm), which only makes it more confusing to me. It's not like I've suddenly changed my setup procedure; I've just updated the versions of the software involved.
ok, let's start over. Can you paste the versions of the components and images involved that have changed between a working cluster and a failing one?
Without having looked at this in much detail: the fact that there are weave-related rules in the ipv4 dump but not in the ipv6 dump seems suspicious. Is it possible you configured kube-proxy for dual-stack but configured your CNI plugin for single-stack?
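One rough way to compare the two (assuming a kubeadm-managed cluster and the usual file locations; adjust as needed for your setup) is to look at the cluster CIDRs kube-proxy was given versus what the CNI configuration on the node actually declares:
# cluster CIDRs handed to kube-proxy (kubeadm keeps its kube-proxy config in this ConfigMap)
kubectl -n kube-system get configmap kube-proxy -o yaml | grep -i clusterCIDR
# address ranges the CNI plugin on the node was configured with
cat /etc/cni/net.d/*.conf*
If the kube-proxy side lists both an IPv4 and an IPv6 range but the CNI config only mentions IPv4, that would fit the pattern of weave rules showing up only in the IPv4 dump.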
/close
no reply... if this is still a problem please reopen and add more information
@danwinship: Closing this issue.
What happened?
When setting up a new cluster using Kubernetes 1.29.2 on Debian 12.5 ("bookworm"), it appears that the iptables entries needed to permit access to services, etc., are not being created by kube-proxy. Upon reaching the step in setting up the first control-plane node, after
kubeadm init
, at which it is necessary to add a networking option, the networking option's pods invariably fail, complaining that it is impossible to reach the Kubernetes API server via the Kubernetes service.
Things at this point appear normal except for the failed networking option pod:
This example is from Weave, but the equivalent error also occurs in Flannel, leading me to conclude that the issue is not with them:
The kube-proxy pod log shows no calls to iptables:
and while the chains and some relevant entries are seen, the essential ones appear to be missing, per the following output from ip6tables-save and iptables-save:
What did you expect to happen?
Once
kubeadm init
had completed, installation of a network option should proceed and complete normally; it (and other pods) should be able to access the kubernetes service.
How can we reproduce it (as minimally and precisely as possible)?
Rather than repeat the details of every command:
Take a vanilla, minimal Debian 12.5 installation, add containerd as the runtime, and then
kubeadm init
Specifically, I use a cluster configuration file to set up for IPv6 networking (roughly along the lines sketched below), using the command
kubeadm init --config ./cluster.conf
, although using different subnet configurations makes no difference.
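For illustration only, a cluster.conf of roughly this shape sets up dual-stack networking with the IPv6 ranges listed first; the subnets below are generic placeholders rather than the exact ranges from my file:
cat > cluster.conf <<'EOF'
# illustrative placeholder ranges; any suitable IPv6/IPv4 subnets show the same behaviour
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.29.2
networking:
  podSubnet: "fd00:10:244::/56,10.244.0.0/16"
  serviceSubnet: "fd00:10:96::/112,10.96.0.0/16"
EOF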
Anything else we need to know?
No response
Kubernetes version
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)