projectcontour / contour

Contour is a Kubernetes ingress controller using Envoy proxy.
https://projectcontour.io
Apache License 2.0

Contour leader doesn't update endpoints in xDS cache after upstream pods recreation #6743

Open philimonoff opened 1 week ago

philimonoff commented 1 week ago

What steps did you take and what happened:

  1. There are about 8000 HTTPProxy objects with the same ingress class.
  2. There are two Contour pods (a leader and a replica) and four Envoy pods (a DaemonSet).
  3. We recreate the pods of an application that are the upstreams of the corresponding Envoy cluster.
  4. After the pods are recreated, the Contour replica reports the IP addresses of the new pods as endpoints for this Envoy cluster in EDS (checked via contour cli; see the sketch after this list).
  5. The Contour leader still reports the IP addresses of the old (deleted) pods as endpoints for this cluster in EDS (also via contour cli).
  6. Envoy pods connected to the Contour leader return 503 errors for requests to the corresponding hosts.
  7. Envoy pods connected to the Contour replica serve requests correctly.
  8. Recreating the Contour pods fixes the problem for a while.
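
A rough sketch of how the two Contour instances' EDS state can be compared (pod names, namespaces and selectors are placeholders; depending on the install, contour cli may also need the xDS TLS client certificate flags, see `contour cli --help`):

```sh
# Hypothetical pod names; substitute the actual leader and replica pods.
LEADER=contour-xxxx
REPLICA=contour-yyyy

# Dump the endpoints each Contour instance serves over EDS. The command
# streams responses; interrupt it after the first response is printed.
kubectl -n projectcontour exec "$LEADER"  -c contour -- contour cli eds
kubectl -n projectcontour exec "$REPLICA" -c contour -- contour cli eds

# Compare the endpoint IPs above against the current application pod IPs.
kubectl -n <app-namespace> get pods -l <app-label-selector> -o wide
```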

What did you expect to happen:

The leader pod updates its EDS state after the application's pods are recreated.

Anything else you would like to add:

-

Environment:

github-actions[bot] commented 1 week ago

Hey @philimonoff! Thanks for opening your first issue. We appreciate your contribution and welcome you to our community! We are glad to have you here and to have your input on Contour. You can also join us on our mailing list and in our channel in the Kubernetes Slack Workspace

tsaarni commented 1 week ago

Hi @philimonoff, I haven’t tried to reproduce this yet, but I wanted to ask - does the issue depend on having a large number of HTTPProxies, or have you observed it occurring with fewer (or even a single) HTTPProxy as well?

philimonoff commented 1 week ago

@tsaarni thank you for the quick response. We don't see this on small installations. I can't say exactly how many proxies trigger it, but it occurs occasionally on an installation with 5000 proxies, and with 8000 or more it's a consistent pattern.

tsaarni commented 1 week ago

@philimonoff Could this be due to rate limiting? The API server client library has request limits, which can cause significant delays when a large number of resources change at once. You could try adjusting these parameters on the contour serve command in the Contour deployment to see if it helps: --kubernetes-client-qps=<qps> and --kubernetes-client-burst=<burst>. Use large values, such as 100 or more, to observe any difference. For details, check out this article.
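
For concreteness, a sketch of where those flags would go; the namespace, deployment name and existing args shown here assume the example manifests, so adjust them to your install:

```sh
# Edit the Contour deployment and append the flags to the
# `contour serve` container args.
kubectl -n projectcontour edit deployment contour

#   args:
#     - serve
#     - --incluster
#     ...
#     - --kubernetes-client-qps=100
#     - --kubernetes-client-burst=150
```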

philimonoff commented 1 week ago

@tsaarni I tried 100 QPS and 150 burst, and it didn't help. Worse, all hosts started returning 503, so I removed the flags. I don't know what happened, because it's a production environment and I can't leave it broken.

tsaarni commented 1 week ago

@philimonoff Unfortunately, at this point I don't have any other ideas about what could cause the issue. I assume you've already checked the leader's logs for errors? It's possible the Contour pod is under heavy resource constraints (like CPU), but if that were the case I'd expect it to affect contour cli responses as well, which didn't seem to be the case.
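
A couple of quick checks that might help narrow it down; the lease name and namespace below assume the defaults from the example manifests:

```sh
# Confirm which pod currently holds leadership.
kubectl -n projectcontour get lease leader-elect-contour -o yaml | grep holderIdentity

# Check whether the leader pod is hitting CPU or memory limits.
kubectl -n projectcontour top pods
kubectl -n projectcontour describe pod <leader-pod> | grep -A5 -i limits
```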

philimonoff commented 1 week ago

@tsaarni before opening this issue I had already read the Contour debug logs (they are emitted at a very high rate), recorded pprof sessions and traces (nothing suspicious), and watched all the metrics Contour exposes. My next idea is to add my own logging at each step of an EndpointSlice's path from the API server to the xDS cache. Right now I can't even imagine what the cause is.
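
One additional check before instrumenting Contour is to watch the EndpointSlices directly while the application pods are recreated, to confirm the API server itself publishes the new addresses, and then compare that against what the leader serves over EDS. A rough sketch with placeholder names:

```sh
# Watch the EndpointSlices of the affected Service while its pods are
# recreated; the addresses here should eventually match what the leader
# returns via `contour cli eds`.
kubectl -n <app-namespace> get endpointslices \
  -l kubernetes.io/service-name=<service-name> \
  -o wide --watch
```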