Closed — alekc closed this 2 years ago
Sorry @alekc for the long response time,
Ubuntu uses the default policy DROP in the FORWARD chain of the filter table.
To test whether this is the problem, can you ssh onto the node and run `iptables -t filter -P FORWARD ACCEPT`?
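A quick way to check which policy is active is to parse the chain header out of `iptables -L`; a minimal sketch, using a captured sample line in place of running `iptables` as root:

```shell
#!/bin/sh
# Sample header line as printed by `iptables -t filter -L FORWARD -n`;
# on a live node you would capture this from the real command (as root).
forward_header='Chain FORWARD (policy DROP)'

# Pull the policy name (DROP/ACCEPT) out of the header.
policy=$(printf '%s\n' "$forward_header" | sed -n 's/^Chain FORWARD (policy \([A-Z]*\)).*/\1/p')
echo "$policy"   # prints: DROP

# If it is DROP, flip it (as root) and retest pod-to-pod connectivity:
#   iptables -t filter -P FORWARD ACCEPT
```

Note that `-P` only changes the default policy in the running kernel; it does not persist across reboots.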
If you are using `squat/kilo:latest`, you can pass `--iptables-forward-rules=true` as an arg to `kg`.
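For reference, the flag goes into the args of the kilo container in the DaemonSet manifest; a sketch, with the container name and the other args assumed from the standard manifest (trim to your setup):

```yaml
# Excerpt from the kilo DaemonSet spec; only the relevant container is shown.
containers:
- name: kilo
  image: squat/kilo:latest
  args:
  - --hostname=$(NODE_NAME)
  - --iptables-forward-rules=true   # add forward ACCEPT rules despite the DROP policy
```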
Note that if you have leader nodes (with FORWARD DROP policy) in locations with more than one node, you will have to wait until #248 is merged to get full connectivity.
Also, I think you should not use the force-endpoint annotation without a port; I think it will just be ignored.
https://kilo.squat.ai/docs/annotations#force-endpoint
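In other words, the annotation value should include the WireGuard port, not just a host; for example (the address and port below are purely illustrative):

```yaml
# On the Node object; the value must be host:port.
metadata:
  annotations:
    kilo.squat.ai/force-endpoint: "203.0.113.10:51820"
```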
I will reset the node (meanwhile I've proceeded with zerotrust) and see if that's the case. Sadly, #248 might prevent me from adopting the solution anyway, but at the very least I will have (or not) the assurance that it works on a single-node setup.
Took a while for me to test it out.
So, I can preliminarily confirm that adding `--iptables-forward-rules=true` fixes the issue on a single node. (It feels like this should be mentioned somewhere in the Readme, since it's going to be a major blocker for anyone attempting to install Kilo on Ubuntu.)
I will try to deploy a 3-node cluster later on and will see if the connectivity is at expected levels.
Hey @alekc, feel free to reopen or make a PR about mentioning this in the Readme.
I am setting up a cluster and ran into an issue when using Kilo as the only CNI.
Installed kilo with
IPTABLES
```
# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-NODEPORTS  all  --  anywhere             anywhere             /* kubernetes health check service ports */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere
KILO-IPIP  ipencap --  anywhere             anywhere             /* Kilo: jump to IPIP chain */
DROP       ipencap --  anywhere             anywhere             /* Kilo: reject other IPIP traffic */

Chain FORWARD (policy DROP)
target     prot opt source               destination
KUBE-FORWARD  all  --  anywhere             anywhere             /* kubernetes forwarding rules */
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere

Chain DOCKER (1 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere
RETURN     all  --  anywhere             anywhere

Chain DOCKER-USER (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

Chain KILO-IPIP (1 references)
target     prot opt source               destination
ACCEPT     all  --  k8s-master-1.subnet09021850.vcn09021850.oraclevcn.com  anywhere  /* Kilo: allow IPIP traffic */

Chain KUBE-EXTERNAL-SERVICES (2 references)
target     prot opt source               destination

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
DROP       all  --  !localhost/8         localhost/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-FORWARD (1 references)
target     prot opt source               destination
DROP       all  --  anywhere             anywhere             ctstate INVALID
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere             /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-KUBELET-CANARY (0 references)
target     prot opt source               destination

Chain KUBE-NODEPORTS (1 references)
target     prot opt source               destination

Chain KUBE-PROXY-CANARY (0 references)
target     prot opt source               destination

Chain KUBE-SERVICES (2 references)
target     prot opt source               destination
```
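The FORWARD chain in the dump above has policy DROP and no blanket ACCEPT for pod traffic, which is exactly the situation `--iptables-forward-rules=true` is meant to handle. A rough sketch of the kind of rules that option provides (the pod CIDR below is a placeholder, and the exact rules Kilo installs may differ):

```shell
#!/bin/sh
# Print the ACCEPT rules one would add for the pod network; POD_CIDR is a
# placeholder, not necessarily what Kilo uses. Piping the output to `sh`
# (as root) would apply them; here we only print them.
POD_CIDR="10.4.0.0/16"
echo "iptables -A FORWARD -s $POD_CIDR -j ACCEPT"
echo "iptables -A FORWARD -d $POD_CIDR -j ACCEPT"
```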
My understanding is that at this point (especially since this is a single node) it should still be working.
However, this is what's happening:
If I remove Kilo and install flannel, for example, everything works. If I go back (delete flannel and the CNI config, install Kilo, reboot), networking stops working.
p.s.