netscaler / netscaler-k8s-ingress-controller

NetScaler Ingress Controller for Kubernetes:
https://developer-docs.citrix.com/projects/citrix-k8s-ingress-controller/en/latest/

Allow avoiding cluster-internal rerouting for LoadBalancer services by only setting hostname in status field #641

Open simon-wessel opened 8 months ago

simon-wessel commented 8 months ago

Is your feature request related to a problem? Please describe. Kube-proxy creates "shortcuts" for the external IPs of LoadBalancer services. When a pod in the cluster connects to an IP that is set as the external IP of a LoadBalancer service, the traffic does not leave the cluster and is instead routed directly to that service. As a result, the NetScaler ADC is not part of the traffic path, and any rules/settings configured on it are effectively bypassed.

Describe the solution you'd like There are long discussions over at Kubernetes (here and here) about whether this behaviour is opaque to the user and poses problems or risks for those who want to use features of the load balancer (firewalls/logging/auth/...).

I would like to kindly request the option to change this default behavior when needed. Other ingress controllers have since implemented a workaround that does not set .status.loadBalancer.ingress[].ip and instead only sets .status.loadBalancer.ingress[].hostname. This could be made configurable using an annotation.
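To illustrate, this is roughly the shape such a Service would take if the controller published only a hostname (resolving to the ADC VIP) in the status; the name, port, and hostname below are placeholders, not an existing NSIC feature:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
status:
  loadBalancer:
    ingress:
      - hostname: my-app.example.com   # instead of "ip: 203.0.113.10"
```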

As far as I know it is currently not possible for NetScaler LoadBalancer services to omit the .status.loadBalancer.ingress[].ip field after provisioning. Please correct me if I am wrong.

Describe alternatives you've considered The topic has gained enough traction that a KEP has been introduced, and a new feature is available in alpha state in Kubernetes 1.29. However, many users are not yet on that version or may not want to rely on a feature in alpha state. Also, even with the Kubernetes alpha feature enabled, the ingress controller still needs to set the .status.loadBalancer.ingress[].ipMode field. Support for this field could also be implemented while working on this issue.
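For reference, a sketch of the status a controller would publish once the 1.29 alpha feature (the LoadBalancerIPMode feature gate) is enabled; the IP is a placeholder:

```yaml
status:
  loadBalancer:
    ingress:
      - ip: 203.0.113.10
        ipMode: Proxy   # "VIP" (the default) keeps today's in-cluster shortcut behaviour
```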

Additional context Steps to reproduce:

  1. Create a LoadBalancer service as described here.
  2. Create a pod in the same cluster and send traffic to the external address of the LoadBalancer service.
  3. Monitor the NetScaler ADC to see that the requests never reach the ADC.
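A minimal sketch of steps 1 and 2, assuming a deployment labelled app: my-app already exists and that the ADC publishes 203.0.113.10 as the external IP (both are placeholders):

```yaml
# Step 1: a plain LoadBalancer Service handled by the NetScaler ingress controller.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
---
# Step 2: a throwaway pod that sends a request to the external IP from the
# Service status; with kube-proxy's shortcut the request never reaches the ADC.
apiVersion: v1
kind: Pod
metadata:
  name: lb-bypass-test
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl
      command: ["curl", "-sv", "http://203.0.113.10/"]
```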
simon-wessel commented 8 months ago

Correction: According to the comments in the Kubernetes tickets, the traffic stays inside the cluster regardless of whether IPVS or iptables is used.

I have updated the title and issue description.

simon-wessel commented 8 months ago

Update: Even with the 1.29 alpha feature, the ingress controller would need to support setting the new ipMode field in the status. This is described in this blog article.

@arijitr-citrix I see you assigned yourself the issue. Do you see this implemented in the foreseeable future?

arijitr-citrix commented 8 months ago

Hi @simon-wessel As we are fully occupied with our current commitments, we need more information before picking up this task. I request you to fill out the Requirement Gathering Questionnaire. We can then prioritize based on urgency.

simon-wessel commented 7 months ago

Hi @arijitr-citrix I have filled out the Questionnaire as requested.

lukasboettcher commented 1 month ago

Attached is a patch for triton bundled with the quay.io/netscaler/netscaler-k8s-ingress-controller:2.1.4 image. @arijitr-citrix please have a look.

lb_status_patch.txt

This patch allows a user to publish a hostname instead of an IP in the LoadBalancerIngress status via the service.citrix.com/loadbalancer-force-hostname annotation, or to set the ipMode via the service.citrix.com/loadbalancer-ip-mode annotation, on a service of type LoadBalancer. Either approach would fix this cluster-internal routing issue.

For anyone interested, you can patch the /usr/src/triton/kubernetes/kubernetes.py file in the aforementioned image to get this functionality.
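And a sketch of how the two annotations might be used on a Service once the patched image is running; the values shown here are my guesses, the exact formats expected are defined in the attached patch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # publish a hostname instead of an IP in .status.loadBalancer.ingress[]
    service.citrix.com/loadbalancer-force-hostname: "my-app.example.com"
    # or, on Kubernetes >= 1.29 with the LoadBalancerIPMode gate, publish ipMode
    service.citrix.com/loadbalancer-ip-mode: "Proxy"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
```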