nginxinc / kubernetes-ingress

NGINX and NGINX Plus Ingress Controllers for Kubernetes
https://docs.nginx.com/nginx-ingress-controller
Apache License 2.0

Running Multiple NGINX Ingress Controllers as Daemonset not working because of hostPort setting in Daemonset related resources. #1789

Open danilutkin opened 3 years ago

danilutkin commented 3 years ago

Running Multiple NGINX Ingress Controllers as Daemonset not working because of hostPort setting in Daemonset related resources.

To Reproduce — steps to reproduce the behavior:

  1. Setting up first-ingress-controller:
    helm upgrade -i -n infra first-controller nginx-stable/nginx-ingress --create-namespace \
    --set controller.kind=daemonset \
    --set controller.ingressClass=nginx \
    --set controller.service.type=NodePort \
    --set controller.service.httpPort.nodePort=30080 \
--set controller.service.httpsPort.nodePort=30443
  2. Setting up second-ingress-controller:
    helm upgrade -i -n infra second-controller nginx-stable/nginx-ingress --create-namespace \
    --set controller.kind=daemonset \
    --set controller.ingressClass=wss \
    --set controller.service.type=NodePort \
    --set controller.service.httpPort.nodePort=31080 \
--set controller.service.httpsPort.nodePort=31443 \
    --set controller.tolerations[0].operator=Exists
  3. Watching pods statuses of second-ingress-controller:
    # k get pods -n infra -l app=second-controller-nginx-ingress
    NAME                                    READY   STATUS    RESTARTS   AGE
    second-controller-nginx-ingress-6ngpk   0/1     Pending   0          4m26s
    second-controller-nginx-ingress-8nxxg   0/1     Pending   0          4m26s
    second-controller-nginx-ingress-982d2   0/1     Pending   0          4m26s
  4. Checking why the pods are in the Pending state:
    # k describe pod second-controller-nginx-ingress-6ngpk -n infra
    ...
    Events:
    Type     Reason            Age    From               Message
    ----     ------            ----   ----               -------
    Warning  FailedScheduling  4m15s  default-scheduler  0/8 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 7 node(s) didn't match Pod's node affinity/selector.

Expected behavior: the second-ingress-controller pods are up and running.

My environment

Additional context: this behavior is caused by the hostPort setting. https://kubernetes.io/docs/concepts/configuration/overview/

Don't specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don't specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.
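To illustrate why the second controller cannot schedule, the daemonset manifests pin the container ports to host ports roughly like this (a simplified sketch of the pod spec's ports section, not the full manifest):

```yaml
# Simplified sketch of the DaemonSet pod spec ports (not the full manifest).
ports:
- name: http
  containerPort: 80
  hostPort: 80     # claims <nodeIP, 80, TCP>; only one such pod fits per node
- name: https
  containerPort: 443
  hostPort: 443    # a second controller requesting the same hostPort stays Pending
```

Since the first controller's daemonset already holds ports 80 and 443 on every node, the scheduler reports "didn't have free ports" for the second one.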

In daemon-set configs we have hostPort setting: https://github.com/nginxinc/kubernetes-ingress/blob/master/deployments/daemon-set/nginx-ingress.yaml https://github.com/nginxinc/kubernetes-ingress/blob/master/deployments/daemon-set/nginx-plus-ingress.yaml https://github.com/nginxinc/kubernetes-ingress/blob/master/deployments/helm-chart/templates/controller-daemonset.yaml

In deployment configs we don't have hostPort setting: https://github.com/nginxinc/kubernetes-ingress/blob/master/deployments/deployment/nginx-ingress.yaml https://github.com/nginxinc/kubernetes-ingress/blob/master/deployments/deployment/nginx-plus-ingress.yaml https://github.com/nginxinc/kubernetes-ingress/blob/master/deployments/helm-chart/templates/controller-deployment.yaml

Why do we need to use hostPort in the daemonset configurations? Can we just delete this setting, as in the deployment configurations? Or maybe the template should allow removing the hostPort configuration conditionally?
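One way the chart could make this conditional (a hypothetical sketch of the daemonset template; the value names `.Values.controller.hostPort.httpPort` / `.Values.controller.hostPort.httpsPort` are illustrative, not existing chart parameters):

```yaml
# Hypothetical fragment for templates/controller-daemonset.yaml:
# emit hostPort only when a value is set, so it can be disabled entirely.
ports:
- name: http
  containerPort: 80
  {{- if .Values.controller.hostPort.httpPort }}
  hostPort: {{ .Values.controller.hostPort.httpPort }}
  {{- end }}
- name: https
  containerPort: 443
  {{- if .Values.controller.hostPort.httpsPort }}
  hostPort: {{ .Values.controller.hostPort.httpsPort }}
  {{- end }}
```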

Aha! Link: https://nginx.aha.io/features/IC-235

github-actions[bot] commented 3 years ago

Hi @danilutkin thanks for reporting!

Be sure to check out the docs while you wait for a human to take a look at this :slightly_smiling_face:

Cheers!

pleshakov commented 3 years ago

Hi @danilutkin

Originally we assumed that the ports of the Ingress Controller daemonset pods would be exposed using port-mapping rather than a service, so that's why it is the default option for the daemonset. However, we didn't make it configurable :(

I think we can make the hostPorts configurable with parameters like below, where an empty value will disable port-mapping:

controller.httpPort.hostPort
controller.httpsPort.hostPort

I wonder if you have a workaround for now? Would running a deployment with affinity (controller.affinity) rules work so that you have one pod per node? https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
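A deployment-based workaround along those lines could use pod anti-affinity so that no two controller pods land on the same node (a sketch of the Helm values, assuming the chart passes `controller.affinity` through to the pod spec; the pod label is hypothetical):

```yaml
# Hypothetical values.yaml fragment: one controller pod per node via anti-affinity.
controller:
  kind: deployment
  replicas: 3
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: second-controller-nginx-ingress   # hypothetical pod label
        topologyKey: kubernetes.io/hostname
```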

danilutkin commented 3 years ago

Hi @pleshakov

I thought I would choose a daemonset for the first controller and a replicated deployment for the second, or use deployments with replicas for both controllers. But neither seems like a clean solution.

In my "NodePort" world I think I just need to patch the daemonset installation from Helm with the needed configs. For example, since hostPort is not configurable from values.yaml for now, I can use a JSON patch with positional array indexes:

kubectl -n infra patch ds second-controller-nginx-ingress --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/ports/0/hostPort"}]'
kubectl -n infra patch ds second-controller-nginx-ingress --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/ports/1/hostPort"}]'

Any other idea of mine requires more than a one-line patch, for example creating labels and selectors with extra logic.

Is it ok if I make a PR with configurable hostPort settings like you describe above?

pleshakov commented 3 years ago

@danilutkin

Not sure if that would work for you, but another approach is to patch the Helm chart sources and install the IC from the patched version of the Helm chart instead of from our Helm repo.

> Is it ok if I make a PR with configurable hostPort settings like you say above?

yep, we'd be happy to review! However, because of the vacations in our team, please expect some delays.

SuperCipher commented 2 years ago

Will something like this work? https://github.com/nginxinc/kubernetes-ingress/compare/development...SuperCipher:feature/helm-daemonset-hostport-configurable

danilutkin commented 2 years ago

Very close. What do you think about renaming:

.Values.hostPort.httpPort → .Values.controller.hostPort.httpPort
.Values.hostPort.httpsPort → .Values.controller.hostPort.httpsPort
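Under that renaming, the values file would group the settings under the controller key, e.g. (a hypothetical values.yaml fragment, not the chart's current schema):

```yaml
# Hypothetical values.yaml layout for the proposed parameters.
controller:
  hostPort:
    httpPort: 80     # leave empty / omit to disable the http hostPort mapping
    httpsPort: 443   # leave empty / omit to disable the https hostPort mapping
```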

SuperCipher commented 2 years ago

@danilutkin ok fixed https://github.com/nginxinc/kubernetes-ingress/compare/development...SuperCipher:feature/helm-daemonset-hostport-configurable

Why does the PR template say I should target the master branch? Why not the development branch? 🤔

danilutkin commented 2 years ago

@SuperCipher Hi, and thank you for your changes :) Have you tested these changes? In my opinion, the main purpose of these changes is to make the hostPort parameter disableable for a daemonset-kind IC setup, so that multiple daemonset ICs can run. If it's tested, I don't see a reason not to merge your branch to master.

khumps commented 2 years ago

Just found this thread after running into the same issue. I would really like to deploy the ingress controller across multiple nodes without conflict.