Closed — unleashed closed this issue 1 year ago
This is an example of the memory usage of the backend-worker pods, showing how memory grows without bound:
Our memory limit is 1GB. Because the pods keep approaching that limit (we have alerts that fire when 90% of a memory/CPU limit is reached), we need to recycle the backend-worker pods every 3-4 days.
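To put numbers on that: with a 1GB limit, a 90% alert threshold, and a recycle interval of 3-4 days, we can roughly estimate the leak rate. This is only an illustration; the 300MB baseline and the 3.5-day window are assumptions, while the 1GB limit and 90% threshold are the values stated above.

```python
# Rough leak-rate estimate for the backend-worker pods.
# Assumed for illustration: BASELINE_MB (usage right after a restart)
# and DAYS_TO_ALERT (midpoint of the 3-4 day recycle interval).
LIMIT_MB = 1024          # container memory limit (1GB)
ALERT_FRACTION = 0.9     # alert fires at 90% of the limit
BASELINE_MB = 300        # assumed steady-state usage after restart
DAYS_TO_ALERT = 3.5      # pods are recycled every 3-4 days

alert_mb = LIMIT_MB * ALERT_FRACTION
leak_rate_mb_per_day = (alert_mb - BASELINE_MB) / DAYS_TO_ALERT

print(f"alert threshold: {alert_mb:.1f} MB")
print(f"approx. leak rate: {leak_rate_mb_per_day:.1f} MB/day")
```

Under those assumptions the workers leak somewhere around 170-180 MB/day, which is consistent with hitting the alert threshold every few days.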
Do you need anything else? @unleashed
AFAIK https://github.com/3scale/apisonator/issues/303 shouldn't be a blocker anymore, because the legacy infra has already been decommissioned.
@slopezz yes, have you seen the same behavior in backend-listener pods?
Everything is OK on the backend-listener side @unleashed
This is the current memory usage of the backend-listener pods over the last 24h; memory does not increase:
The details of one example backend-listener pod during the last 24h (there are 4 containers: backend-listener,
the main one, plus envoy-sidecar, envoy-shutdown-manager and envoy-init-manager):
You can see that the main backend-listener container
(green) remains constant at around 300MB, with a limit of 700MB.
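For reference, the kind of per-container breakdown shown in those screenshots can be reproduced with a Prometheus query along these lines. This is a sketch assuming standard cAdvisor metrics; the exact metric and label names (`pod`, `container`) depend on the cluster's monitoring setup.

```promql
# Per-container working-set memory for the backend-listener pods
# (assumes cAdvisor metrics with standard pod/container labels)
sum by (container) (
  container_memory_working_set_bytes{pod=~"backend-listener-.*", container!=""}
)
```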
It looks like Apisonator leaks memory when async is enabled. This might be fixed in upstream dependencies of the async reactor, but we are blocked on #303.
@3scale/operations can you provide more data on this? Grafana dashboard screen captures would be nice to have.