grafana / helm-charts

Loki Distributed Querier Pods High Working Set Memory #776

Open pwlawe opened 2 years ago

pwlawe commented 2 years ago

In our loki-distributed deployment we run three querier pods. On each of the last three days, at approximately 1:30am EDT, the querier pods quickly saturated their working set memory and became unresponsive. RSS memory remained low and the pods were not OOM-killed; the system only recovered after we manually killed the pods. Memory utilization for one such pod is shown below; the same pattern repeated on each of the three querier pods.

[Screenshot: working set memory for one querier pod, 2021-11-05]
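For anyone hitting the same symptom, one mitigation to consider (a sketch only, it does not address the underlying memory growth): set an explicit memory limit on the querier pods so a runaway querier is OOM-killed and restarted by Kubernetes instead of sitting unresponsive until someone kills it by hand. The `querier.replicas` and `querier.resources` keys below are the loki-distributed chart's standard values keys, but the sizes are placeholders; verify both against the chart version you deploy.

```yaml
# values.yaml override for the loki-distributed chart (sizes are illustrative only)
querier:
  replicas: 3
  resources:
    requests:
      cpu: "1"
      memory: 2Gi
    limits:
      # With a hard limit, a querier whose working set balloons is OOM-killed
      # and restarted by the kubelet rather than left in a non-responsive state.
      memory: 4Gi
```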

kayketeixeira commented 2 years ago

Hello @pwlawe, we have the same problem, but we noticed that for us it only happens when we run a query without any filter. When we add a filter, the load distribution across the queriers is much better (see the sketch below).

Have you managed to solve this problem?
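To make the filter observation concrete: a bare stream selector such as `{namespace="prod"}` forces the queriers to decompress and return every matching log line over the whole time range, while a line filter such as `{namespace="prod"} |= "error"` discards non-matching lines as chunks are scanned, so far less data ends up in querier memory. Independently of that, Loki's `limits_config` can cap how expensive any single query is allowed to get. The option names below are standard Loki `limits_config` settings, but the values are placeholders, and where this block lives in the helm values (e.g. inside `loki.config`) depends on the chart version, so treat this as a sketch.

```yaml
# Loki limits_config sketch: bound the cost of any single query so one
# unfiltered search cannot saturate the queriers (values are placeholders).
limits_config:
  max_query_length: 721h            # refuse queries over an excessively long time range
  max_query_parallelism: 16         # cap how many sub-queries one query fans out into
  max_entries_limit_per_query: 5000 # cap the number of log lines returned per query
  max_chunks_per_query: 2000000     # cap the number of chunks a single query may touch
```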