Closed · alonsocamaro closed this issue 1 year ago
Initially this idea suggested using just a node-label-selector, but after talking with Bart Van Bos he raised the concern that a matching node might not actually host the POD. I therefore changed it to a daemonset-label-selector, since a DaemonSet is a common way to create PODs with hostPort/hostNetwork in their spec (NGINXplus uses this approach).
In the case of OpenShift the HAproxy PODs are created by a Deployment, so we should also support an analogous deployment mode, which in OpenShift would match on the existing ingresscontroller.operator.openshift.io/deployment-ingresscontroller label.
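For illustration, the OpenShift router PODs carry that label, so a label selector targeting them could look like the sketch below (`default` is the name of the default IngressController; the surrounding context in which CIS would consume such a selector is part of this proposal, not an existing option):

```yaml
# Sketch: selector matching OpenShift's HAproxy router PODs.
# These PODs run in the openshift-ingress namespace and are labelled
# by the ingress operator.
selector:
  matchLabels:
    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
```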
@alonsocamaro Please set up a meeting with the PM team to discuss this feature request.
@alonsocamaro Since CIS is a control-plane component and BIG-IP handles the incoming traffic, can you give more details on how this RFE can help CIS?
Replied to @trinaths via Teams.
It's been a year; any updates? This would be very useful, considering RKE2 and K3s use hostPorts as their way of exposing Ingress Controllers.
Not planned in CIS. Closing this issue.
Title
support for hostPort/hostNetwork exposed Ingress Controllers
Description
Add a --pool-member-type=daemonset mode which allows sending traffic directly to the Ingress Controller, bypassing kube-proxy. The Ingress Controller can expose itself using either hostPort (more secure) or hostNetwork (less secure).
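As a reference, an Ingress Controller DaemonSet exposed via hostPort could look like the following minimal sketch (the image name, labels and ports are illustrative placeholders, not tied to any specific controller):

```yaml
# Sketch: an Ingress Controller DaemonSet using hostPort (the more
# secure of the two options). Image, labels and ports are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-controller
  labels:
    app: ingress-controller
spec:
  selector:
    matchLabels:
      app: ingress-controller
  template:
    metadata:
      labels:
        app: ingress-controller
    spec:
      containers:
      - name: controller
        image: example/ingress-controller:latest
        ports:
        - containerPort: 443
          hostPort: 443   # traffic reaches the POD directly on the node's IP
```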
As a reference, OpenShift exposes its IngressControllers in hostNetwork mode.
Actual Problem
At present, when a CNI is not supported by CIS we fall back to nodePort mode.
In this scenario, with BIG-IP + Ingress Controller, we have the following drawbacks:
Solution Proposed
A more streamlined flow would be to expose the Ingress Controller with hostPort and send the traffic directly to the nodes where the Ingress Controllers run. This would be tuned using a --daemonset-label-selector, similar to our current --node-label-selector.
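With the proposed flags, the CIS container arguments might look like the sketch below (both --pool-member-type=daemonset and --daemonset-label-selector are part of this proposal, not existing CIS options, and the label value is a placeholder):

```yaml
# Sketch of the proposed CIS configuration; these two flags do not
# exist today and are shown only to illustrate the RFE.
args:
- --pool-member-type=daemonset
- --daemonset-label-selector=app=ingress-controller  # pool members = nodes running matching PODs
```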
This would require minimal code changes and would not require the existence of a Service definition, and it would have the following benefits:
Please note that using a DaemonSet doesn't mean the workload PODs must run on all nodes; the nodes can be chosen with a nodeSelector.
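For example, the DaemonSet can be restricted to a subset of nodes with a standard nodeSelector in its POD template (the node label below is a hypothetical example):

```yaml
# Sketch: run the Ingress Controller DaemonSet only on nodes carrying
# a dedicated ingress label (hypothetical label shown).
spec:
  template:
    spec:
      nodeSelector:
        node-role.example.com/ingress: "true"
```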