Closed: smlx closed this 2 years ago
I'd like a review on the approach. In particular, setting a single limit on all pods deployed. Is that a good idea?
I think we can avoid adding code for this in the controller by way of #144 (this lets you just add envvars to the remote-controller and they get passed through to build pods automatically)
Then update the PR in https://github.com/uselagoon/build-deploy-tool/pull/120 to use a LAGOON_FEATURE_FLAG_FORCE_CONTAINER_MEMORY_LIMIT variable instead; don't even use the featureFlag wrapper for it, just consume the variable directly if it is present?
That's an interesting approach I hadn't thought of. But does it become confusing if there is a LAGOON_FEATURE_FLAG_FORCE_CONTAINER_MEMORY_LIMIT variable that doesn't behave like the other LAGOON_FEATURE_FLAG_* variables?
Sure, then let's wrap it with the featureFlag check, and IO can just use FORCE if needed.
As discussed, closing this in favour of https://github.com/uselagoon/lagoon-charts/pull/487
Checklist
This change is part of adding support for setting memory resource limits on containers deployed by Lagoon. The idea is that this is a limit which no container should ever reach; hence the default value of 16Gi, chosen as half the resources of a common node size today (32Gi). See https://github.com/uselagoon/build-deploy-tool/pull/120 for the associated build-deploy-tool PR.
Closing issues
n/a