I don't particularly mind if we add the flag, but you lose HA - so any hiccup manifests in a provisioning failure.
I think even this would be fine - it's more problematic when you suddenly get 200 containers scheduled on certain clusters :)
Out of interest - how does that happen? The pods are only temporary and renewals are statistically spread. I don't see many other options than annotating 100 Routes at once (or migrating from the old controller), in which case it doesn't matter much whether it's 100 or 200 pods: you get O(N) either way, and since the pods are only temporary, the ones that don't get scheduled right away will shortly get their turn as the used ones are deleted.
Still working on figuring this out - I feel like it could have been a not-100%-clean upgrade path on our end. In the process we also discovered a sizable number of domains that don't point to the ingress, which led to ~60-80 routes that tried to get an LE certificate but obviously failed because the DNS points somewhere else.
On large clusters with hundreds of routes this is certainly a problem. The customer didn't know that many of those routes aren't valid anymore (non-existing DNS etc.), but the v2 controller still scheduled 470 (!) pods and completely filled a worker node or two for routes that will never get validated anyway. That led to the customer having to manually clean up lots of broken routes.
Don't get me wrong, the issue existed before the upgrade, but it became much more visible once it filled whole compute nodes. In v1 of the controller the broken routes were just logged, so nobody really noticed.
HA in this case is less of an issue. If a validation happens to fail because a node stops responding, it will get retried later after pod startup/node drain. But using double the compute resources to cover a "less probable" node failure is very costly.
Yes, exposer pods are temporary, but only on the happy path. Otherwise they stay around forever, and then 200+ pods can be a big deal.
Please also see #134, where we get cryptic errors like "failed with : can't create cert order: context deadline exceeded" for no apparent reason.
I don't particularly mind if we add the flag, but you lose HA - so any hiccup manifests in a provisioning failure.
I don't get the point of having HA for configuring a route.
Apart from what @ccremer explained, we're also talking about a lot of potentially smaller customers on OpenShift clusters that are typically cost-sensitive. It's difficult to explain to them why the resource requirements are now higher (why they need to upgrade, why they must pay more) just to do the same thing they did before upgrading the ACME controller.
Just for configuring routes. Seriously. – The exposer should probably have absolute minimum resource requirements by default.
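For illustration, here is a minimal sketch (not taken from openshift-acme; the function name and the values are made up) of the kind of tiny resource requests/limits the exposer container could declare by default, using the Kubernetes corev1 Go API:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// exposerResources sketches minimal requests/limits an exposer container could
// declare so that stuck or long-running exposer pods only reserve a small
// slice of node capacity. The values are illustrative, not from the project.
func exposerResources() corev1.ResourceRequirements {
	return corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("10m"),
			corev1.ResourceMemory: resource.MustParse("16Mi"),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("50m"),
			corev1.ResourceMemory: resource.MustParse("32Mi"),
		},
	}
}

func main() {
	r := exposerResources()
	fmt.Println("requested CPU:", r.Requests.Cpu().String())
}
```

With requests this small, even a few hundred exposer pods would barely register against node capacity.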
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
What would you like to be added: Currently the exposer ReplicaSet is hardcoded to 2 replicas and is not configurable: https://github.com/tnozicka/openshift-acme/blob/master/pkg/controller/route/route.go#L828
Why is this needed: Under certain circumstances this can lead to a lot of pods being started, filling compute nodes with long-running exposer pods. Setting it to 1 replica would lessen the impact of long-running exposer pods on bigger clusters.
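For illustration, a rough sketch of how a configurable replica count could be wired in, using Go's standard flag package (the flag name --exposer-replicas is hypothetical, and the real controller may use a different flag library):

```go
package main

import (
	"flag"
	"fmt"
)

// Hypothetical flag: defaulting to 2 keeps today's behaviour, while setting it
// to 1 halves the number of exposer pods at the cost of HA during validation.
var exposerReplicas = flag.Int("exposer-replicas", 2,
	"number of replicas for each temporary exposer ReplicaSet")

func main() {
	flag.Parse()
	// The parsed value would be plumbed into the exposer ReplicaSet spec,
	// replacing the hardcoded replica count in route.go.
	replicas := int32(*exposerReplicas)
	fmt.Printf("exposer ReplicaSet would be created with %d replica(s)\n", replicas)
}
```

Keeping the default at 2 preserves the current behaviour, while large or cost-sensitive clusters could opt into 1 replica.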
@tnozicka