Open dapeng09 opened 4 years ago
Hi, when I add these routes on each worker, my pods can't reach the outside world.
ip route add $worker1_pod_subnet via $worker1_ip
ip route add $worker2_pod_subnet via $worker2_ip
ip route add $worker3_pod_subnet via $worker3_ip
I did some tests and found that the route to reach the current node's pod_subnet conflicts with an existing one.
If you run a pod with crictl manually before adding the routes, the kernel automatically creates a route like
$pod_subnet dev cnio0 proto kernel scope link src $pod_subnet_gateway
More concretely, I get this:
root@tspeda-k8s-worker3 ~# ip route
default via 162.38.60.100 dev ens3 proto static
10.200.204.0/24 via 162.38.60.206 dev ens3
10.200.205.0/24 via 162.38.60.206 dev ens3
# The next one is automatically created
10.200.206.0/24 dev cnio0 proto kernel scope link src 10.200.206.1
162.38.60.0/24 dev ens3 proto kernel scope link src 162.38.60.206
# If you add this route, it will break cnio0 routing
# 10.200.206.0/24 via 162.38.60.206 dev ens3
So on each of your workers, don't add the route for that worker's own pod subnet.
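That rule can be sketched as a small script: generate the `ip route add` commands for one worker while skipping its own pod subnet. The IPs and subnets come from the routing tables shown in this thread; the helper name is made up.

```shell
#!/bin/sh
# Sketch (hypothetical helper): print the "ip route add" commands a worker
# needs. The worker's OWN pod subnet is skipped, because the kernel already
# created a link route for it on cnio0.
print_pod_routes() {
  local_ip="$1"
  # worker_ip:pod_subnet pairs for the whole cluster
  for pair in \
      "162.38.60.204:10.200.204.0/24" \
      "162.38.60.205:10.200.205.0/24" \
      "162.38.60.206:10.200.206.0/24"; do
    worker_ip="${pair%%:*}"
    subnet="${pair#*:}"
    # Skip the local node: adding this route would break cnio0 routing
    [ "$worker_ip" = "$local_ip" ] && continue
    echo "ip route add $subnet via $worker_ip"
  done
}

# On worker1 (162.38.60.204) this prints only the worker2 and worker3 routes
print_pod_routes "162.38.60.204"
```

Running the printed commands on each worker reproduces the routing tables shown below, with the local subnet left to the kernel-created cnio0 route.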
Ok, my routing table was wrong. To reach another pod subnet, the gateway needs to be the IP of the worker hosting that subnet, not the current node's. I explained it correctly at the beginning of my last comment, but applied it wrong on my nodes.
Here's what works for me:
➜ ansible workers -m shell -a "ip route"
tspeda-k8s-worker2 | CHANGED | rc=0 >>
default via 162.38.60.100 dev ens3 proto static
10.200.204.0/24 via 162.38.60.204 dev ens3
10.200.205.0/24 dev cnio0 proto kernel scope link src 10.200.205.1
10.200.206.0/24 via 162.38.60.206 dev ens3
162.38.60.0/24 dev ens3 proto kernel scope link src 162.38.60.205
tspeda-k8s-worker1 | CHANGED | rc=0 >>
default via 162.38.60.100 dev ens3 proto static
10.200.204.0/24 dev cnio0 proto kernel scope link src 10.200.204.1
10.200.205.0/24 via 162.38.60.205 dev ens3
10.200.206.0/24 via 162.38.60.206 dev ens3
162.38.60.0/24 dev ens3 proto kernel scope link src 162.38.60.204
tspeda-k8s-worker3 | CHANGED | rc=0 >>
default via 162.38.60.100 dev ens3 proto static
10.200.204.0/24 via 162.38.60.204 dev ens3
10.200.205.0/24 via 162.38.60.205 dev ens3
10.200.206.0/24 dev cnio0 proto kernel scope link src 10.200.206.1
162.38.60.0/24 dev ens3 proto kernel scope link src 162.38.60.206
➜ k get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
busybox 1/1 Running 20 20h 10.200.206.6 tspeda-k8s-worker3 <none> <none>
busybox2 1/1 Running 0 23m 10.200.204.10 tspeda-k8s-worker1 <none> <none>
busybox3 1/1 Running 0 22m 10.200.205.11 tspeda-k8s-worker2 <none> <none>
busybox4 1/1 Running 0 22m 10.200.205.12 tspeda-k8s-worker2 <none> <none>
➜ ssh root@tspeda-k8s-worker1
root@tspeda-k8s-worker1 ~# ping -c 1 10.200.206.6
PING 10.200.206.6 (10.200.206.6) 56(84) bytes of data.
64 bytes from 10.200.206.6: icmp_seq=1 ttl=63 time=0.279 ms
--- 10.200.206.6 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms
root@tspeda-k8s-worker1 ~# ping -c 1 10.200.204.10
PING 10.200.204.10 (10.200.204.10) 56(84) bytes of data.
64 bytes from 10.200.204.10: icmp_seq=1 ttl=64 time=0.050 ms
--- 10.200.204.10 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
root@tspeda-k8s-worker1 ~# ping -c 1 10.200.205.11
PING 10.200.205.11 (10.200.205.11) 56(84) bytes of data.
64 bytes from 10.200.205.11: icmp_seq=1 ttl=63 time=0.997 ms
--- 10.200.205.11 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.997/0.997/0.997/0.000 ms
not working
Hi there, I am following this guide on a virtual machine cluster and got stuck at 11-pod-network-routes.md. How can I create the routes so that a pod can ping another pod on a different node? I ran the 3 commands below on each of the worker nodes, but it didn't work.
route add -net 10.200.0.0 netmask 255.255.255.0 gw {worker-0 ip}
route add -net 10.200.1.0 netmask 255.255.255.0 gw {worker-1 ip}
route add -net 10.200.2.0 netmask 255.255.255.0 gw {worker-2 ip}
Any suggestions?
Info you might need: OS: Ubuntu 18.04.1 LTS, Kubernetes: v1.12.0
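Per the earlier comments in this thread, the likely issue is that each node is also adding the route for its own pod subnet, which conflicts with the kernel-created cnio0 route. A sketch of the three commands in modern `ip route` form (a `255.255.255.0` netmask is a `/24` prefix), with the worker IPs left as placeholder arguments and the helper name made up:

```shell
#!/bin/sh
# Sketch (hypothetical helper): emit the "ip route add" equivalents of the
# question's "route add -net ... netmask ..." commands, skipping the route
# for this node's own pod subnet.
emit_routes() {
  w0="$1"; w1="$2"; w2="$3"
  local_subnet="$4"   # the pod subnet served by cnio0 on this node
  for pair in "10.200.0.0/24:$w0" "10.200.1.0/24:$w1" "10.200.2.0/24:$w2"; do
    subnet="${pair%%:*}"
    gw="${pair#*:}"
    # Don't add a gateway route for the subnet the kernel already routes locally
    [ "$subnet" = "$local_subnet" ] && continue
    echo "ip route add $subnet via $gw"
  done
}
```

On worker-0, for example, you would call `emit_routes {worker-0 ip} {worker-1 ip} {worker-2 ip} 10.200.0.0/24` and run the two printed commands.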
Thanks