Closed yawboateng closed 7 years ago
@yawboateng using a daemonSet is ok if your cluster is "small" (in a cluster with 50 nodes it makes no sense). What you should try to avoid is locating multiple pods of the ingress controller on the same node.
Closing. Please reopen if you have more questions
thanks @aledbf, would you consider 15-20 nodes small enough for a daemonSet?
@yawboateng I think more than 10 nodes is too much. That said, this number depends on the load you need to handle.
> @yawboateng using a daemonSet is ok if your cluster is "small" (In a cluster with 50 nodes it makes no sense). What you should try to avoid is to locate multiple pods of the ingress controller in the same node.
can you please explain why?
I also wonder why I should avoid locating multiple ingress-nginx pods on the same node.
In my case, I isolated k8s node groups for the ingress controllers using taints and affinity. And I wonder which of the following designs is better for production:

- Deployment: n pods on m ingress nodes (n >= m)
- DaemonSet: n pods on n ingress nodes

Please give me some advice.
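For context, the taint/affinity isolation described above could look roughly like the sketch below. This is only an illustrative fragment, not the actual manifest from this setup: the taint key `dedicated=ingress`, the label `node-role=ingress`, and the image are all assumed placeholders.

```yaml
# Assumed prep on each dedicated ingress node (names are hypothetical):
#   kubectl taint nodes <node> dedicated=ingress:NoSchedule
#   kubectl label nodes <node> node-role=ingress
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-controller
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      # Tolerate the taint so only this workload can land on ingress nodes
      tolerations:
        - key: dedicated
          operator: Equal
          value: ingress
          effect: NoSchedule
      # Require scheduling onto the labeled ingress node group
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role
                    operator: In
                    values: ["ingress"]
      containers:
        - name: controller
          image: ingress-nginx-controller:placeholder  # replace with your real image
```

With this shape, either design reduces to choosing `kind: Deployment` with `replicas: n` or `kind: DaemonSet` (which drops the `replicas` field and runs one pod per tolerating node).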
@jevgenij-alterman @posquit0 the reason is simple: you don't need a high number of NGINX instances to handle high volumes of traffic, and most importantly, you need to keep in mind that each instance of the ingress controller needs to reach the kubernetes API server. This means that if you have lots of replicas, you are putting unnecessary pressure on the API server. Using a deployment with an anti-affinity rule to avoid multiple replicas on the same node is, in most cases, more than enough.
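A minimal sketch of the anti-affinity rule mentioned above; the label `app: ingress-nginx` and the names are illustrative assumptions, not taken from any official manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-controller
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: never co-locate two controller replicas on one node.
          # Use preferredDuringScheduling... instead if you would rather
          # co-locate than leave a replica Pending when nodes run out.
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: ingress-nginx
              topologyKey: kubernetes.io/hostname
      containers:
        - name: controller
          image: ingress-nginx-controller:placeholder  # replace with your real image
```

`topologyKey: kubernetes.io/hostname` is what makes "one node" the spreading domain; each node carries that label with a unique value.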
@aledbf i am trying to understand why i should not have more than ONE ingress pod on the same node
@yazbekhe It's just a matter of reliability: if you have 2 nginx replicas but they are on the same node, and that node fails, both replicas will go down and possibly stop traffic.
I am using a deployment with a minimum of 3 replicas and a horizontal pod autoscaler enabled with a max of 20 pods and a targetCPUUtilizationPercentage of 90.
From your experience, is this setup recommended for use with the nginx controller? Or should I be using a daemonSet, deploying 1 pod per worker node? Or a deployment with a single pod for the entire cluster?
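For reference, the autoscaling setup described above could be expressed as an HPA like this (a sketch using the `autoscaling/v1` API; the target Deployment name is an assumption):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ingress-nginx-controller
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ingress-nginx-controller  # assumed name of the controller Deployment
  minReplicas: 3
  maxReplicas: 20
  # Scale out when average CPU across replicas exceeds 90% of requests
  targetCPUUtilizationPercentage: 90
```

Note that CPU-based scaling requires a CPU `resources.requests` value on the controller pods; combining this HPA with the anti-affinity rule discussed earlier only works if the cluster has at least as many eligible nodes as `maxReplicas`.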