txn2 / kubefwd

Bulk port forwarding Kubernetes services for local development.
https://imti.co/kubernetes-port-forwarding/
Apache License 2.0

[Discussion] Exposing the forwarded ports to a different pod in K8s #245

Open isurulucky opened 2 years ago

isurulucky commented 2 years ago

First of all, kubefwd is a really great tool!

The use case I'm trying out with kubefwd is similar to what is discussed in https://github.com/txn2/kubefwd/issues/214. I would like to run kubefwd in a pod and consume the exposed services from a different pod. The internal services are in a different cluster.

[Diagram: tunneling-k8s-kubefwd-discuss]

The reasons for not running kubefwd directly in the same pod as the client are as follows:

Since kubefwd does not support binding to IP addresses other than loopback addresses, I used an iptables rule to forward traffic arriving on the eth0 interface to the relevant loopback IP:

```
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 9090 -j DNAT --to-destination 127.1.27.1:9090
```

Generically, this iptables command needs to know the port bound to the k8s endpoint, the relevant loopback IP, and the forwarded port (both ports are 9090 above). I am also thinking of using a k8s liveness probe with a simple telnet command to detect when a port forward has broken; the kubefwd pod would then be restarted by k8s, refreshing the endpoint configuration.

There could be multiple services port-forwarded and exposed on different loopback IP addresses, so for this approach to work I would need to dynamically discover the correct IP address for each service. At the moment I do not have a good way of doing this, but a potentially hacky way would be to grep the modified /etc/hosts file to find the relevant IP. I would have to wait until the kubefwd process has made its /etc/hosts modifications before doing the IP extraction and iptables changes (maybe kubefwd could run in an init container that shares /etc/hosts with the main container, so the changes are already in place when the main container starts?).
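For what it's worth, the hacky /etc/hosts-based discovery could be sketched roughly like this. The service name (`my-svc`), port, and sample hosts content are hypothetical, and this assumes kubefwd writes one hosts line per service with the loopback IP in the first field:

```shell
#!/bin/sh
# Sketch: look up a service's kubefwd loopback IP in an /etc/hosts-style
# file and print the matching DNAT rule (rather than applying it).

# Print the iptables DNAT rule for a service, or fail if no entry exists yet
# (e.g. kubefwd has not rewritten the hosts file).
dnat_rule_for() {
  svc="$1"; port="$2"; hosts_file="$3"
  # kubefwd appends lines like "127.1.27.1  my-svc  my-svc.default ...";
  # take the IP (first field) of the first line whose hostname matches.
  ip=$(awk -v svc="$svc" '$2 == svc { print $1; exit }' "$hosts_file")
  [ -n "$ip" ] || return 1
  echo "iptables -t nat -A PREROUTING -i eth0 -p tcp --dport $port" \
       "-j DNAT --to-destination ${ip}:${port}"
}

# Demo against a sample hosts file; in a real pod this would be /etc/hosts
# after kubefwd has modified it, and the output would be piped to sh as root.
sample=$(mktemp)
printf '127.0.0.1 localhost\n127.1.27.1 my-svc my-svc.default\n' > "$sample"
dnat_rule_for my-svc 9090 "$sample"
rm -f "$sample"
```

A loop over the forwarded service names could then install one rule per service once the hosts entries appear.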

An alternative approach is to use a k8s ingress controller here to expose the internal services privately.

What do you all think about this workflow regarding kubefwd? I understand this is not exactly kubefwd's core use case, but I would greatly appreciate suggestions and ideas about drawbacks and possible pitfalls.