Open Mahmoud-Emad opened 1 month ago
```
root@m1:~# kubectl get nodes -o wide
NAME   STATUS     ROLES                  AGE     VERSION        INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
m1     Ready      control-plane,master   3h15m   v1.31.0+k3s1   10.20.4.2        <none>        Ubuntu 24.04.1 LTS   6.1.21           containerd://1.7.20-k3s1
w1     Ready      <none>                 3h15m   v1.31.0+k3s1   10.20.5.2        <none>        Ubuntu 24.04.1 LTS   6.1.21           containerd://1.7.20-k3s1
w2     NotReady   <none>                 84m     v1.31.0+k3s1   185.69.166.162   <none>        Ubuntu 24.04.1 LTS   6.1.21           containerd://1.7.20-k3s1
w4     Ready      <none>                 16m     v1.31.0+k3s1   185.69.166.151   <none>        Ubuntu 24.04.1 LTS   6.1.21           containerd://1.7.20-k3s1
```
Worker `w4` was started with the same flist, but connected to the cluster using the master's public IPv6:
```shell
export K3S_URL=https://[2a02:1802:5e:0:b80a:a7ff:fe29:f0d0]:6443
export K3S_FLANNEL_IFACE=eth1
export K3S_DATA_DIR=/mnt/data/
export K3S_TOKEN=WDMTQ2ecuf
export EXTRA_ARGS="--data-dir $K3S_DATA_DIR --kubelet-arg=root-dir=$K3S_DATA_DIR/kubelet"
k3s agent --flannel-iface $K3S_FLANNEL_IFACE $EXTRA_ARGS
```
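One option worth trying on nodes where address auto-detection misbehaves is to register the node addresses explicitly with k3s's `--node-ip` flag, which accepts a comma-separated IPv4,IPv6 pair. This is only a sketch; the addresses below are the ones `w4` reported in its logs, and whether this invocation fits the flist environment is an assumption:

```shell
# Sketch only: explicitly register both address families on the agent
# instead of relying on auto-detection (addresses taken from w4's logs).
k3s agent \
  --flannel-iface "$K3S_FLANNEL_IFACE" \
  --node-ip "185.69.166.151,2a02:1802:5e:0:d84b:36ff:fe34:7903" \
  $EXTRA_ARGS
```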
From the logs of worker `w4`, it runs in dual-stack mode:
```
[+] k3s: time="2024-09-29T13:07:04Z" level=info msg="Annotations and labels have been set successfully on node: w4"
[+] k3s: time="2024-09-29T13:07:04Z" level=info msg="Starting flannel with backend vxlan"
[+] k3s: I0929 13:07:04.369697 189 server.go:677] "Successfully retrieved node IP(s)" IPs=["185.69.166.151","2a02:1802:5e:0:d84b:36ff:fe34:7903"]
[+] k3s: E0929 13:07:04.369931 189 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
[+] k3s: I0929 13:07:04.381571 189 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
[+] k3s: time="2024-09-29T13:07:04Z" level=info msg="Using dual-stack mode. The interface eth1 with ipv4 address 185.69.166.151 and ipv6 address 2a02:1802:5e:0:d84b:36ff:fe34:7903 will be used by flannel"
```
Then it tries to connect to the master over the private IPv4:
```
[+] k3s: E0929 13:07:04.600729 189 cleanup.go:70] "Failed to delete stale service connections" err="error deleting connection tracking state for UDP service IP: 10.43.0.10, error: conntrack command returned: \"conntrack v1.4.7 (conntrack-tools): Operation failed: invalid parameters\\n\", error message: exit status 1" IP="10.43.0.10"
[+] k3s: E0929 13:07:04.682574 189 cleanup.go:70] "Failed to delete stale service connections" err="error deleting connection tracking state for UDP service IP: 2001:cafe:43::a, error: conntrack command returned: \"conntrack v1.4.7 (conntrack-tools): Operation failed: invalid parameters\\n\", error message: exit status 1" IP="2001:cafe:43::a"
[+] k3s: time="2024-09-29T13:07:04Z" level=info msg="Connecting to proxy" url="wss://10.20.4.2:6443/v1-k3s/connect"
[+] k3s: time="2024-09-29T13:07:04Z" level=error msg="Failed to connect to proxy. Empty dialer response" error="dial tcp 10.20.4.2:6443: connect: no route to host"
[+] k3s: time="2024-09-29T13:07:04Z" level=error msg="Remotedialer proxy error; reconnecting..." error="dial tcp 10.20.4.2:6443: connect: no route to host" url="wss://10.20.4.2:6443/v1-k3s/connect"
...
[+] k3s: I0929 13:07:05.590558 189 iptables.go:372] bootstrap done
[+] k3s: I0929 13:07:05.590643 189 iptables.go:372] bootstrap done
[+] k3s: I0929 13:07:05.592143 189 iptables.go:372] bootstrap done
[+] k3s: time="2024-09-29T13:07:05Z" level=info msg="Connecting to proxy" url="wss://10.20.4.2:6443/v1-k3s/connect"
[+] k3s: time="2024-09-29T13:07:05Z" level=error msg="Failed to connect to proxy. Empty dialer response" error="dial tcp 10.20.4.2:6443: connect: no route to host"
[+] k3s: time="2024-09-29T13:07:05Z" level=error msg="Remotedialer proxy error; reconnecting..." error="dial tcp 10.20.4.2:6443: connect: no route to host" url="wss://10.20.4.2:6443/v1-k3s/connect"
[+] k3s: time="2024-09-29T13:07:06Z" level=info msg="Connecting to proxy" url="wss://10.20.4.2:6443/v1-k3s/connect"
[+] k3s: time="2024-09-29T13:07:06Z" level=error msg="Failed to connect to proxy. Empty dialer response" error="dial tcp 10.20.4.2:6443: connect: no route to host"
[+] k3s: time="2024-09-29T13:07:06Z" level=error msg="Remotedialer proxy error; reconnecting..." error="dial tcp 10.20.4.2:6443: connect: no route to host" url="wss://10.20.4.2:6443/v1-k3s/connect"
[+] k3s: time="2024-09-29T13:07:07Z" level=info msg="Connecting to proxy" url="wss://10.20.4.2:6443/v1-k3s/connect"
[+] k3s: time="2024-09-29T13:07:07Z" level=error msg="Failed to connect to proxy. Empty dialer response" error="dial tcp 10.20.4.2:6443: connect: no route to host"
[+] k3s: time="2024-09-29T13:07:07Z" level=error msg="Remotedialer proxy error; reconnecting..." error="dial tcp 10.20.4.2:6443: connect: no route to host" url="wss://10.20.4.2:6443/v1-k3s/connect"
```
```yaml
# root@m1:~# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
```
```shell
kubectl apply -f nginx-deployment.yaml
```
```
root@m1:~# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP          NODE   NOMINATED NODE   READINESS GATES
nginx-deployment-54b9c68f67-vt6xt   1/1     Running   0          34m   10.42.0.7   m1     <none>           <none>
nginx-deployment-54b9c68f67-xr5sh   1/1     Running   0          34m   10.42.1.7   w1     <none>           <none>
```
```shell
kubectl expose deployment nginx-deployment --port=80 --target-port=80 --name=nginx-service
```
- Added a port-forward:

```shell
kubectl port-forward --address :: svc/nginx-service 8080:80
```
- Tested from an IPv6 VM:

```
root@imagecreator:~# curl -g -6 "http://[2a04:f340:c0:71:a41f:85ff:fe3c:a546]:8080"
<!DOCTYPE html>
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
```
- Tested from a browser
![image](https://github.com/user-attachments/assets/ab9562fe-68bf-4308-869b-e5cab7bf3d12)
- Modified the deployment template to schedule pods only on the worker node `w1`, tested the connection again, and it worked fine.
![image](https://github.com/user-attachments/assets/23b970b3-5c47-43dc-b3bd-a3c4147e115a)
![image](https://github.com/user-attachments/assets/ad3c7e34-e826-4028-bb35-39ae7473dbd5)
The first deployment with `m1` and `w1` worked because both nodes had IPv4 addresses and the K3s cluster is configured for IPv4-only pod networking (`10.42.0.0/16`). Internal pod communication was handled over IPv4, while external traffic (e.g., accessing the Nginx service) used IPv6 through the master node's public IPv6 address via port-forwarding.
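That the pod network is IPv4-only can be double-checked by printing each pod's `podIPs` with a standard jsonpath query (the command itself is just a suggested check, not something run in the original session):

```shell
# Each line should show only a 10.42.x.x address and no IPv6 entry,
# confirming SingleStack pod networking.
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIPs[*].ip}{"\n"}{end}'
```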
Adding a new worker `w2`, which has only IPv6 and no private IPv4 on `eth1`, failed with this error:
```
ansitionTime":"2024-09-30T14:04:18Z","reason":"KubeletNotReady","message":"CSINode is not yet initialized"}
INFO[0005] Flannel found PodCIDR assigned for node w2
ERRO[0005] flannel exited: failed to find the interface: failed to find IPv4 address for interface eth1
```
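The flannel error can be reproduced directly on `w2` by listing the addresses assigned to `eth1` per family (assuming `iproute2` is available in the image):

```shell
# On w2: show eth1 addresses by family.
ip -4 addr show dev eth1   # expected to print nothing, since no IPv4 is assigned
ip -6 addr show dev eth1   # shows the public IPv6 address flannel ignores in IPv4 mode
```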
The current image version enables 'SingleStack' rather than 'DualStack' mode, which provides access to the master/workers only over public IPv4. DualStack was requested in https://github.com/threefoldtech/tfgrid-sdk-ts/issues/3149 to support managing gateways on Kubernetes, and it can include IPv6 normally.
After some investigation, I found that we should pass the dual-stack flag when running the k3s server as well, as mentioned in https://github.com/k3s-io/k3s/issues/4400.
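Concretely, the server-side fix discussed in that issue is to start the k3s server with dual-stack cluster and service CIDRs. A sketch follows; the IPv6 ranges are assumptions chosen to match the `2001:cafe:43::a` service IP visible in the logs above, and would need to fit the actual deployment:

```shell
# Sketch only: enable dual-stack networking on the k3s server so that
# IPv6-only workers can join. CIDR values here are assumed, not verified.
k3s server \
  --cluster-cidr "10.42.0.0/16,2001:cafe:42::/56" \
  --service-cidr "10.43.0.0/16,2001:cafe:43::/112" \
  --flannel-ipv6-masq
```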