An additional experiment shows that traffic does go through the internal network interface, but it looks like the CNI's native routing mechanism, not the hcloud route controller.
New cluster without the CCM, Calico CNI installed.
Pod-to-pod tracing:
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if31: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1400 qdisc noqueue qlen 1000
link/ether 1e:26:bb:80:c7:c9 brd ff:ff:ff:ff:ff:ff
inet 10.2.154.4/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::1c26:bbff:fe80:c7c9/64 scope link
valid_lft forever preferred_lft forever
traceroute 10.2.44.2
traceroute to 10.2.44.2 (10.2.44.2), 30 hops max, 46 byte packets
1 static.179.167.109.65.clients.your-server.de (65.109.167.179) 0.028 ms 0.012 ms 0.010 ms
2 10.2.44.0 (10.2.44.0) 0.586 ms 0.824 ms 0.770 ms
3 10.2.44.2 (10.2.44.2) 0.025 ms 0.491 ms 0.467 ms
Result: it works, but does not meet expectations.
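To confirm it is Calico's own routing (and not an hcloud-managed route) that carries this traffic, checks along these lines can be run on the node; the destination is the pod IP from the trace above, the rest is a generic sketch, not output from this cluster:
ip route get 10.2.44.2                # which nexthop/interface the kernel picks for the remote pod IP
ip route | grep -E 'bird|cali|vxlan'  # routes programmed by Calico (proto bird for BGP, vxlan.calico for VXLAN)
calicoctl node status                 # established BGP peerings, if calicoctl is installed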
For comparison, flannel pod-to-pod with the CCM and hcloud routes:
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
3: eth0@if399: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
link/ether 56:cb:89:2a:d6:b3 brd ff:ff:ff:ff:ff:ff
inet 192.168.225.71/24 brd 192.168.225.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::54cb:89ff:fe2a:d6b3/64 scope link
valid_lft forever preferred_lft forever
traceroute 192.168.226.175
traceroute to 192.168.226.175 (192.168.226.175), 30 hops max, 46 byte packets
1 192.168.225.1 (192.168.225.1) 0.005 ms 0.004 ms 0.002 ms
2 192.168.226.0 (192.168.226.0) 0.787 ms 0.413 ms 0.304 ms
3 192.168.226.175 (192.168.226.175) 0.406 ms 0.293 ms 0.283 ms
Conclusion: in this experiment Calico handles the routing itself via BGP, without encapsulation.
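To double-check which encapsulation the pool actually runs with, the IPPool can be inspected; this assumes calicoctl is installed and the default pool name, neither of which is confirmed here:
calicoctl get ippool -o wide                      # IPIPMODE / VXLANMODE columns
calicoctl get ippool default-ipv4-ippool -o yaml  # full spec, incl. ipipMode, vxlanMode, natOutgoing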
TL;DR
I have a sample 2-node Kubernetes cluster with an internal network, kube version 1.28.10, CCM 1.20.0. Previously I used the flannel CNI in VXLAN mode, and it worked correctly with the CCM: routes were created and traffic went through the internal network.
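For context, the working flannel setup roughly corresponds to the CCM running in networks mode; the excerpt below is illustrative only (secret names, network reference and cluster CIDR are assumptions, not copied from this cluster):
# container env/flags for hcloud-cloud-controller-manager in networks mode (illustrative excerpt)
env:
  - name: HCLOUD_TOKEN
    valueFrom: {secretKeyRef: {name: hcloud, key: token}}
  - name: HCLOUD_NETWORK
    valueFrom: {secretKeyRef: {name: hcloud, key: network}}
# plus the route-related flags on the CCM:
#   --allocate-node-cidrs=true
#   --cluster-cidr=192.168.224.0/20   (must contain the flannel pod network shown above)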
Now I am trying to use Calico with VXLAN in CrossSubnet mode. The network is up, but the CCM does not create routes in the Hetzner network, and traffic goes through the default external gateway.
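The Calico side looks roughly like the following IPPool; the CIDR and pool name are assumptions for illustration, not values read from this cluster:
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.2.0.0/16
  ipipMode: Never
  vxlanMode: CrossSubnet   # encapsulate only between nodes in different subnets
  natOutgoing: true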
I have read some similar issues, but I have no idea how to fix it.
Expected behavior
The CCM creates routes in the Hetzner Cloud network, and pod-to-pod traffic goes through the internal network.
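On the Hetzner side this should be visible roughly like this, assuming the hcloud CLI is installed and <network-name> is the attached private network:
hcloud network describe <network-name>   # the routes section should list one pod CIDR per node, gateway = that node's private IP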
Observed behavior
The CCM does not create routes; traffic goes through the external network interface (the default route).
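Two checks that might help narrow this down (the deployment name is an assumption, adjust to the actual install): the CCM's route-related logs, and whether the nodes have spec.podCIDR set at all, since the upstream route controller builds cloud routes from the node PodCIDRs:
kubectl -n kube-system logs deploy/hcloud-cloud-controller-manager | grep -i route
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR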
Minimal working example
No response
Log output
Additional information
No response