projectcalico / calico

Cloud native networking and network security
https://docs.tigera.io/calico/latest/about/
Apache License 2.0

Add support for networking non-cluster hosts with Calico #3407

Open · opened by spikecurtis 4 years ago

spikecurtis commented 4 years ago

Expected Behavior

If hosts outside the K8s cluster need to communicate directly with pods inside the cluster, the only option today is to peer Calico with the underlying network infrastructure. There are cases where that is not desirable or possible (e.g. public clouds). It would be useful to allow calico-node to run on non-cluster hosts and set up networking, including overlay tunnels.

Current Behavior

calico-node expects a node resource to exist. In KDD mode, that corresponds to a Kubernetes Node, which won't exist for non-cluster hosts. In etcdv3 mode, kube-controllers deletes any Calico node resources that don't correspond to Kubernetes Nodes. The overall effect is that, in either case, calico-node requires a corresponding Kubernetes Node resource.

Possible Solution

New non-cluster node resource that tracks relevant information.
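As a rough illustration of the idea, a Calico Node resource can already be created directly with calicoctl when using the etcdv3 datastore; a non-cluster host resource might look something like the following sketch (the host name and address are illustrative placeholders, not from this issue):

```yaml
# Hypothetical Calico Node resource for a host outside the cluster,
# applied with `calicoctl apply -f` against the etcdv3 datastore.
# The name and ipv4Address are placeholders.
apiVersion: projectcalico.org/v3
kind: Node
metadata:
  name: legacy-host-01
spec:
  bgp:
    ipv4Address: 192.0.2.10/24   # the non-cluster host's own address
```

In KDD mode there is nothing equivalent today, since the backing object is the Kubernetes Node itself.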


Context

Use case is from #3397


mleklund commented 4 years ago

I want to point out that the documentation makes this appear to be a possible and supported use case.

https://docs.projectcalico.org/getting-started/bare-metal/about

Secure hosts not in a cluster by installing Calico with networking and/or networking policy enabled.

caseydavenport commented 4 years ago

In etcdv3 mode, kube-controllers deletes any Calico resources that don't correspond to K8s nodes. The overall effect is that in either case,

This shouldn't be the case - IIUC this works in etcdv3 mode, and this issue is just re: K8s API mode, where there are no node objects representing the external nodes.

Some ideas:

mleklund commented 4 years ago

I added a dummy node and got calico working. I thought I was home free, but I could not reach kubernetes services. I understand it is probably a non-goal, but I was hoping for a quick and easy way to connect legacy docker hosts to K8s.
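The "dummy node" workaround described above can be done by creating a bare Kubernetes Node object for the external host, so that kube-controllers no longer cleans up the matching Calico resources. A minimal sketch, with an illustrative name (the exact object mleklund created is not shown in this issue):

```yaml
# Workaround sketch: register a placeholder Kubernetes Node for an
# external host (kubectl create -f dummy-node.yaml). This is a hack,
# not a supported configuration; the name is a placeholder.
apiVersion: v1
kind: Node
metadata:
  name: legacy-docker-01
  labels:
    kubernetes.io/hostname: legacy-docker-01
```

As the rest of the thread explains, this gets pod IPs working but not Service cluster IPs, because nothing on the external host performs the kube-proxy NAT.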

caseydavenport commented 4 years ago

but I could not reach kubernetes services

How are you trying to access the Kubernetes service?

mleklund commented 4 years ago

I was using the IP address. I walked through https://docs.projectcalico.org/networking/advertise-service-ips, but that is apparently for external BGP peers, and since I went the fake-node route, my external node was seen as a K8s node and it was assumed it would have kube-proxy.

In this particular environment I wanted to do this without ToR BGP because of equipment limitations. FWIW, I see a supported way to tie legacy Docker hosts to K8s services as a real need in the community, where more and more networking is software defined. Sure, I could go through the work of installing BIRD on every single host, but if I could easily set up a Docker container to "join" the Calico network it would be amazing.

caseydavenport commented 4 years ago

but if I could easily setup a docker container to "join" the calico network it would be amazing.

Yep, I agree. This works using etcdv3 as the datastore, but should also be possible using the Kubernetes API, which is where we want to take this issue.

I walked through https://docs.projectcalico.org/networking/advertise-service-ips, but that is for external bgp peers apparently

Yes, interesting. Potentially this would work if Calico was running on the node but configured as an explicit BGP peer and not part of the mesh. Might be possible to do this.
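The explicit-peer idea suggested here would mean taking the external host out of the full node-to-node mesh and declaring it as a BGPPeer instead. A sketch of what that might look like, assuming the host's address and AS number (both placeholders here) are known:

```yaml
# Sketch: peer the external host explicitly rather than via the
# node-to-node mesh. peerIP and asNumber are illustrative values.
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: legacy-host-peer
spec:
  peerIP: 192.0.2.10
  asNumber: 64512
```

With the host treated as an external peer, service IP advertisement per the advertise-service-ips doc could plausibly apply to it, though as noted this is speculative.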

and it was assumed it would have kube-proxy.

Yeah, if you're not getting the routing information propagated in some other way, then kube-proxy will need to run on the node sourcing the traffic in order to perform the NAT (otherwise clusterIP/Service traffic will just go out the default gateway).

have you tried accessing a pod IP directly to make sure that works in your setup?

mleklund commented 4 years ago

Pod IPs work perfectly. My "hack" for the moment is to join the node to the K8s cluster and then cordon it. This gives me the calico daemonset and the kube-proxy daemonset. There are a few other daemonsets I get that I would rather not have, though I suppose I could taint the node to keep them off.
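The taint mentioned above could be applied to the joined node so that only daemonsets tolerating it (calico-node and kube-proxy typically tolerate all NoSchedule taints, though that depends on how they were deployed) still schedule there. A sketch, with an illustrative node name and taint key:

```yaml
# Sketch: taint the joined-then-cordoned node so unwanted daemonsets
# stay off it. Node name and taint key are placeholders.
apiVersion: v1
kind: Node
metadata:
  name: legacy-docker-01
spec:
  taints:
  - key: node-role.example.com/external
    effect: NoSchedule
```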

I was also told in Slack that we would start to see the service IPs on all nodes in 3.15 because of this; it appears you are already aware of that.