Open naguaramaster opened 5 days ago
Are you using a native pod networking cluster? Have you tried the pod readiness gates feature?
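For context, readiness gates are declared in the pod template spec; a pod is only counted as Ready (and kept in the rollout) once the named condition is set on it by an external controller. A minimal sketch of what that looks like, where the `conditionType` value is a placeholder (the real value is published by whichever ingress/load balancer controller is in use):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # placeholder name
spec:
  template:
    spec:
      readinessGates:
        # Placeholder condition type: the controller managing the
        # load balancer must actually patch this condition onto the
        # pod, otherwise the pod never becomes Ready and the rollout
        # stalls at the progress deadline.
        - conditionType: "example.com/load-balancer-ready"
      containers:
        - name: app
          image: my-app:latest   # placeholder image
```

If no controller sets the condition, pods stay NotReady indefinitely, which matches the stuck-rollout symptom described below.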
Yes, I'm using the native ingress controller for OKE, thank you. I will read the documentation, apply it, and report back.
I did the test and added the `readinessGates` parameter according to the indicated documentation, but when performing the deployment, the rollout got stuck with this status: `Waiting for deployment "vista-crm-front-dp" rollout to finish: 1 out of 3 new replicas have been updated... error: deployment "vista-crm-front-dp" exceeded its progress deadline`. It created one new pod of the application but did not delete the old ones. I am working with 3 replicas.
Any comments on this?
I have Managed Nodes configured, and according to the documentation, ReadinessGates should not be used with this type of node.
What would be the solution in this case so that the Load Balancer stays online during a deployment?
We have deployed 2 applications with 3 replicas each on OKE, but we have noticed that when updating an application, Kubernetes rolls the pods step by step so that they are never all unavailable at once, yet the Load Balancer does not update the pod IPs at the same pace, which causes downtime (bad gateway). The zero-downtime advantage of a Kubernetes rolling update is lost because the Load Balancer appears to be much slower to update its backends. What could be done to avoid this behavior?
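One commonly used mitigation when a load balancer deregisters backends more slowly than Kubernetes replaces pods (not an official OKE recommendation, just a general pattern): keep old pods serving for a grace window with a `preStop` sleep, and forbid taking capacity away before replacements are Ready. A sketch, where the image name and the 30-second delay are assumptions to be tuned against the LB's observed update latency:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vista-crm-front-dp
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never remove a pod before its replacement is Ready
      maxSurge: 1         # bring up one extra pod at a time
  template:
    spec:
      # Must be longer than the preStop sleep plus the app's shutdown time.
      terminationGracePeriodSeconds: 60
      containers:
        - name: app
          image: vista-crm-front:latest   # placeholder image
          lifecycle:
            preStop:
              exec:
                # Keep the old pod answering requests while the load
                # balancer removes its IP from the backend set. The
                # 30s value is a guess; measure how long the LB takes
                # to converge and set it accordingly.
                command: ["sleep", "30"]
```

The idea is that Kubernetes only sends SIGTERM after the `preStop` hook finishes, so the old pod keeps handling traffic during the window in which the load balancer still routes to its IP.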