davem-git opened 1 month ago
@davem-git which L3/L4 load balancer are you using? Have you set up health checks?
I'm hosted in Azure and GCP. Those tests ended with similar results on both clouds.
you'll need to set up health checks on the load balancer so it can stop routing to the envoy that is shutting down (draining). The first step when shutting down is failing health checks, so the LB can route new connections to a newer envoy pod.
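To make that concrete, here is a sketch of what such a check could look like as a Kubernetes `readinessProbe` pointed at Envoy's readiness endpoint (`/ready` on port 19001, mentioned below in this thread). The probe timings are illustrative assumptions, not Envoy Gateway's defaults:

```yaml
# Sketch only: a container readinessProbe aimed at Envoy's readiness
# endpoint (/ready on port 19001). The timing values below are
# illustrative assumptions, not Envoy Gateway defaults.
readinessProbe:
  httpGet:
    path: /ready
    port: 19001
  periodSeconds: 5
  failureThreshold: 1
```

Once this probe fails during drain, the endpoint is marked not-ready and a health-checking load balancer can stop sending it new connections.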
`/ready` (on port 19001)

How are these set?
any documentation? envoy gateway stands up these load balancers. I see I can set annotations, but I don't see any annotations for health checks for GCP.
I do see some health check info for
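For context, on GKE health checks for a backend are typically configured through a `BackendConfig` resource referenced from the Service, rather than a plain annotation. A hedged sketch (the resource name and values are illustrative, not something from this thread):

```yaml
# Sketch: a GKE BackendConfig that points the load balancer's health
# check at Envoy's readiness endpoint. The name is illustrative.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: envoy-health-check
spec:
  healthCheck:
    type: HTTP
    requestPath: /ready
    port: 19001
```

The Service would then reference it via the `cloud.google.com/backend-config` annotation, e.g. `'{"default": "envoy-health-check"}'`.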
hey @davem-git we just merged https://github.com/envoyproxy/gateway/pull/4021 which should help you with graceful shutdown on GKE with the default settings (no settings needed). You can try it out by using the `v0.0.0-latest` tag of the helm chart.
hey @davem-git did you get a chance to try this out?
I've looked around and haven't found those settings for all of our cloud providers. I will test it out on Azure today. So far, though, adding the settings I listed above seems to have helped in my testing.
@davem-git with #4021 (which is now available with `v0.0.0-latest`) you may not need explicit health checks, because we've reduced the time it takes to detect failure for the envoy endpoints.
hey @arkodg, just a basic doubt:
is there no need to add pod readiness gates in the namespace where the envoy proxies are running (proxies receiving traffic via an NLB or ALB)? For example, on EKS with aws-load-balancer-controller running, ref: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/deploy/pod_readiness_gate/
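For reference, the aws-load-balancer-controller doc linked above enables readiness gate injection with a namespace label; a sketch (the namespace name is an illustrative assumption):

```yaml
# Sketch: on EKS, readiness gate injection is enabled per-namespace via
# this label (see the aws-load-balancer-controller doc linked above).
# The namespace name is illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: envoy-gateway-system
  labels:
    elbv2.k8s.aws/pod-readiness-gate-inject: enabled
```

With the label in place, the controller injects a readiness gate into new pods in that namespace so they are only marked ready once registered and healthy in the target group.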
I tried testing this, but the health checks failed to work and envoy-proxy didn't function, so I reverted.
@ncsham afaik the controller reads the endpoint slices of the service from the API server, so it will detect any endpoints that are down (whose `readinessProbe` has failed) and shouldn't route to them.
Description: I'm working on implementing envoy-gateway as a replacement for our nginx controller. I have some basic tests: a pod that returns a JSON block when an endpoint is hit. Using K6 as a testing suite, I set up the following test.
When I run this test and start a rollout restart of the envoy pods, I get the following errors.
When I do this on nginx I do not get these errors.
I added these to my custom proxy config and it seemed to fix the issue:

```yaml
shutdown:
  drainTimeout: 600s
  minDrainDuration: 60s
```
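For context, those fields live under `spec.shutdown` of an `EnvoyProxy` resource; a sketch of the full custom proxy config (the metadata name and namespace are illustrative assumptions):

```yaml
# Sketch: the shutdown settings above placed in a complete EnvoyProxy
# resource. The metadata name/namespace are illustrative assumptions.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: envoy-gateway-system
spec:
  shutdown:
    drainTimeout: 600s
    minDrainDuration: 60s
```

`drainTimeout` bounds how long Envoy keeps draining open connections, and `minDrainDuration` sets a floor so the pod keeps serving while the load balancer catches up.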