I've confirmed this is properly handling the suspended case. Suspending an instance results in the addition of two more blueprints.
The bootstrap one has:
```yaml
haproxy.router.openshift.io/balance: leastconn
haproxy.router.openshift.io/rate-limit-connections: "true"
haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp: "5"
haproxy.router.openshift.io/rate-limit-connections.rate-tcp: "15"
```
and the one corresponding to the normal broker routes:
```yaml
haproxy.router.openshift.io/rate-limit-connections: "true"
haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp: "5"
haproxy.router.openshift.io/rate-limit-connections.rate-tcp: "15"
```
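For reference, a blueprint route is just a regular Route object that the router's dynamic configuration manager watches so it can pre-provision server pools. A minimal sketch of what one of these broker blueprints might look like (names, namespace, and TLS mode are illustrative assumptions, not taken from this PR):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: broker-blueprint            # hypothetical name
  namespace: blueprint-routes       # hypothetical blueprint namespace the router watches
  annotations:
    haproxy.router.openshift.io/rate-limit-connections: "true"
    haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp: "5"
    haproxy.router.openshift.io/rate-limit-connections.rate-tcp: "15"
spec:
  host: broker-blueprint.apps.example.com
  to:
    kind: Service
    name: broker                    # hypothetical backing service
  tls:
    termination: passthrough        # assumed; broker traffic is typically TLS passthrough
```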
Looking at memory, the ingress pods with the extra blueprints are now sitting at ~350Mi.
Presumably the 500 setting is per pool, so I think we could safely drop that to 300 (50 instances × 3 brokers × 2 margin) now that we're subdividing things so much.
> Looking at memory, the ingress pods with the extra blueprints are now sitting at ~350Mi.
>
> Presumably the 500 setting is per pool, so I think we could safely drop that to 300 (50 instances × 3 brokers × 2 margin) now that we're subdividing things so much.
Yes, 500 is per pool. I'd be happy with a 300 pool size; I'll do it. As we mentioned elsewhere, there is provision to annotate a blueprint with its own pool size (and 300 suspended instances is, after all, unlikely), but I vote we don't do that now. We can tune it later if we like.
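For concreteness, a sketch of the two knobs being discussed, using the names documented for the OpenShift router's dynamic configuration manager (treat both as assumptions to verify against the router version in use). First, the global pool size, set as an env var on the router deployment:

```yaml
# Router deployment env: lowers the default pool size for every blueprint.
# 300 = 50 instances × 3 brokers × 2 margin, per the discussion above.
- name: ROUTER_BLUEPRINT_ROUTE_POOL_SIZE
  value: "300"
```

And the per-blueprint override we're deliberately not using yet:

```yaml
# Annotation on an individual blueprint route, overriding the global pool size.
metadata:
  annotations:
    router.openshift.io/pool-size: "50"
```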
#850 was incomplete: we failed to consider suspended routes, so suspending an instance was still leading to a haproxy restart. This PR creates blueprints for those routes too.