Hello!

First of all, thank you for this amazing project.

We're seeing the memory usage of the container running the exporter grow non-stop over time. It keeps growing until an OOM error occurs and Kubernetes restarts the container.

This is what the memory usage looks like over 7 days:
I'm unsure if this is related, but our current Prometheus configuration is as follows:
- 10s scrape interval
- 10s timeout
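For context, the relevant scrape job looks roughly like this (the job name, target, and port are just how we happen to name and expose things, shown here as placeholders):

```yaml
scrape_configs:
  - job_name: celery-exporter
    scrape_interval: 10s
    scrape_timeout: 10s
    static_configs:
      # Points at the exporter's Service; 9808 is the port we expose
      - targets: ["celery-exporter:9808"]
```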
Here are our Grafana dashboard metrics for that week:
Due to https://github.com/danihodovic/celery-exporter/issues/157 we switched from a Deployment to a StatefulSet, which greatly improved our Prometheus query latency. However, I'm still worried about the OOM errors, and I'm unsure which part of our configuration might be causing them.
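In case it helps, this is roughly the shape of the StatefulSet we ended up with (resource names, the namespace, and the Redis URL below are placeholders; the broker URL is the only argument we pass):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: celery-exporter
spec:
  serviceName: celery-exporter        # headless Service for stable pod identity
  replicas: 1
  selector:
    matchLabels:
      app: celery-exporter
  template:
    metadata:
      labels:
        app: celery-exporter
    spec:
      containers:
        - name: celery-exporter
          image: danihodovic/celery-exporter   # official image, unmodified
          args:
            - --broker-url=redis://redis:6379/0   # placeholder broker URL
          ports:
            - containerPort: 9808
```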
Thanks in advance!
Edit: Just for clarification, we're using the official Docker image without any modifications; the only parameter we pass is the Redis broker queue URL.