tanvp112 opened this issue 3 years ago
Hi @tanvp112 , what you're seeing is Calico allowing host processes to reach workloads/pods on that host. This is so health checks will work. More info about this on this page: https://docs.projectcalico.org/security/protect-hosts#default-behavior-of-external-traffic-tofrom-host
OK, so this is expected behavior. I understand that the default endpoint-to-host action can be set to prevent workload-to-local-host connections. In the case where we don't use kubelet health checks, is there a way to prevent the opposite direction, i.e. the host-to-workload behavior above?
There isn't a way to prevent that host-to-local-pod traffic with Felix. If you're running Calico on Kubernetes then maybe you could use custom iptables rules to do what you want. Check out the Calico Users Slack; it's possible someone has done something similar: https://slack.projectcalico.org/
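As a rough illustration only (this is not a supported Felix knob, the interface pattern `cali+` is an assumption based on Calico's conventional veth naming, and a rule like this can conflict with Calico's own chains), a host-level iptables rule along these lines might block host-originated traffic into local workloads:

```shell
# Hypothetical, unsupported sketch: drop host-originated traffic going
# into local workload veth interfaces (Calico conventionally names them cali*).
# WARNING: this would also break kubelet health checks against local pods.
iptables -I OUTPUT -o cali+ -j DROP
```

Anyone trying this should expect to need exceptions for kubelet probe traffic, which is exactly the traffic Calico's default allow exists to protect.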
I think it's a reasonable enhancement request to be more selective about the traffic that we allow by default.
We only need to allow enough for health checks to pass - right now we do that by allowing from the host, but if we could further restrict (say, limit to only the kubelet) that would be a big improvement.
Yes, a truly zero-trust network.
> Hi @tanvp112 , what you're seeing is Calico allowing host processes to reach workloads/pods on that host. This is so health checks will work. More info about this on this page: https://docs.projectcalico.org/security/protect-hosts#default-behavior-of-external-traffic-tofrom-host
Hi @lmm, I'd like to know how Calico allows host processes to reach workloads/pods on that host. Is this also done with iptables? I read the doc you provided in this comment and found that:
> By default, Calico blocks all connections from a workload to its local host
But I tested this in my cluster and found that a pod can reach the host it runs on. I have no NetworkPolicy in my cluster.
[root@host] calicoctl version
Client Version: v3.20.2
Git commit: dcb4b76a
Cluster Version: v3.20.2
Cluster Type: k8s,bgp,kubeadm,kdd,typha
> I think it's a reasonable enhancement request to be more selective about the traffic that we allow by default.
> We only need to allow enough for health checks to pass - right now we do that by allowing from the host, but if we could further restrict (say, limit to only the kubelet) that would be a big improvement.
Hi @caseydavenport, is this being planned for a future release?
Hi, I have a Kubernetes 1.20.4 cluster with 1 master & 2 worker nodes running Calico (3.18) with VXLAN networking. I deployed 5 nginx replicas to kube-public like this:
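The manifest wasn't included in the report; a minimal sketch of what such a deployment might look like (the name, labels, and image tag are assumptions) is:

```yaml
# Hypothetical reconstruction: 5 nginx replicas in the kube-public namespace.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: kube-public
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```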
As master1 is tainted, nginx is deployed to worker1 & worker2 as expected. There is a service (10.96.184.228) fronting the nginx pods. I then executed a for loop 100 times, accessing the service (10.96.184.228), the pod (10.10.45.222) running on worker1, and the pod (10.10.58.202) running on worker2. All tests (repeated on master1, worker1 and worker2) were successful, with nginx returning the welcome page 100 times each.
I then deployed calico deny-all GlobalNetworkPolicy like this:
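The policy manifest wasn't shown; a typical deny-all GlobalNetworkPolicy, along the lines of the Calico documentation, looks roughly like this (the policy name is an assumption):

```yaml
# Selects every workload endpoint; with no allow rules,
# all ingress and egress matched by the selector is denied.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-all
spec:
  selector: all()
  types:
  - Ingress
  - Egress
```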
I would expect all requests to be denied. I then re-executed the for loops on every node and found that:
I deleted the deny-all GlobalNetworkPolicy and deployed a namespaced version; the result is the same.
I went on to delete previously deployed policy and deployed a more specific calico policy like this:
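That policy wasn't included either; assuming a namespaced Calico NetworkPolicy intended to allow ingress to kube-public only from master1's IP (172.16.1.10, per the description), it might look like this (the policy name is an assumption):

```yaml
# Hypothetical sketch: permit ingress to kube-public workloads
# only from master1's address; everything else is denied by the
# policy's implicit default deny for selected endpoints.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-master1-only
  namespace: kube-public
spec:
  selector: all()
  types:
  - Ingress
  ingress:
  - action: Allow
    source:
      nets:
      - 172.16.1.10/32
```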
The hope was that only master1 (172.16.1.10) could access nginx running in kube-public. It turned out that, along with master1, calls made on worker1/worker2 succeeded for the pod running on the same node, despite the worker IPs being denied by the policy.
Clearly, these policies have no effect when the connection is made from the same node the pod is running on, be it a direct connection (e.g. curl to the pod IP) or one routed by kube-proxy (e.g. via the service).
Is this behavior expected?