kennedn opened this issue 1 month ago
Hi @kennedn, if you set the resource limits in a `config` stanza in the Subscription for your VSO install, they should survive upgrades. Something like this:

```yaml
spec:
  config:
    resources:
      requests:
        memory: "128Mi"
        cpu: "10m"
      limits:
        memory: "512Mi"
        cpu: "500m"
```
Hi,
We've solved our issue for the moment by utilizing the `config` section of the Vault Subscription object as described; we have increased the default memory limit from 256Mi to 512Mi. Thanks for the tip.
Is there any appetite to increase the default memory limit shipped with the operator / chart? I have noticed a few other instances of similar OOMKilled issues raised against this project to date, so it may warrant some thought.
Thanks,
Good to hear that helped the issue at least for now. We may want to increase the default limit, though we're still investigating why memory usage seems to spike for some users. Are there only VaultStaticSecrets in use for your case? What auth methods are being used with these secrets? Are there any errors in the VSO logs? Are there any other differences in workload between the cluster with high memory usage and the others without?
Also v0.5.1 is fairly old at this point, so it would be interesting to see if there's any change in memory usage with a more recent version.
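For reference on the kind of objects being asked about, a minimal sketch of a VaultStaticSecret paired with a Kubernetes-auth VaultAuth might look like the following; the mount, path, role, and destination names are illustrative, not taken from the reporter's setup:

```yaml
# Illustrative VaultAuth using the Kubernetes auth method; names and
# mounts are placeholders.
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
  name: vault-auth
  namespace: app
spec:
  method: kubernetes
  mount: kubernetes
  kubernetes:
    role: app-role
    serviceAccount: default
---
# Illustrative VaultStaticSecret syncing a KV-v2 secret into a
# Kubernetes Secret; mount, path, and destination are placeholders.
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: app-secret
  namespace: app
spec:
  vaultAuthRef: vault-auth
  mount: kvv2
  type: kv-v2
  path: app/config
  refreshAfter: 60s
  destination:
    name: app-secret
    create: true
```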
**Describe the bug**
We are currently using Vault Secrets Operator in our clusters. We have a specific cluster that gets more customer volume than the others and have recently noticed that the `vault-secrets-operator-manager` pod is being OOM killed after reaching the memory limits outlined in the Operator's CSV. Snippet from the `.status` key in the OOMKilled pod's yaml:

**To Reproduce**
Steps to reproduce the behavior:
Application deployment: N/A
**Expected behavior**
The CSV for the operator has enough headroom in its memory limits to avoid out-of-memory issues in the pod.
**Environment**
**Additional context**
We have been able to temporarily work around this issue by manually doubling the `limits` memory value for the manager container in the CSV (from 256Mi to 512Mi) at key `.spec.install.spec.deployments[].spec.template.spec.containers[]`. This is not a permanent fix though, since re-installing or upgrading the operator will reinstate the original memory value. We are installing via OperatorHub in OpenShift, so we do not have a way to permanently affect this value.
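For illustration, that manual workaround amounts to editing a ClusterServiceVersion fragment like the sketch below; the deployment and container names and the request values are placeholders, and OLM overwrites any such edit on the next install or upgrade, which is why the Subscription `config` approach above is preferred:

```yaml
# Hypothetical excerpt of the VSO ClusterServiceVersion at
# .spec.install.spec.deployments[].spec.template.spec.containers[].
# Names and request values are placeholders; edits here are reverted
# by OLM on re-install/upgrade.
spec:
  install:
    spec:
      deployments:
        - name: vault-secrets-operator-controller-manager
          spec:
            template:
              spec:
                containers:
                  - name: manager
                    resources:
                      requests:
                        memory: "128Mi"
                        cpu: "10m"
                      limits:
                        memory: "512Mi"  # doubled from the shipped 256Mi default
                        cpu: "500m"
```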