mohaabduvisa opened this issue 1 month ago
@mohaabduvisa What happens if you set a memory limit on the compactor and use --enable-auto-gomemlimit (a feature from 0.35)?
I guess that, without a limit, the Go runtime uses whatever memory it can get. Reducing the concurrency parameters could also help.
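To make the suggestion concrete, a minimal sketch of a compactor invocation combining both ideas. The flag names below are taken from the Thanos compact flag reference as I recall it (verify against `thanos compact --help` for your version), and the paths are placeholders:

```shell
# Hypothetical sketch; --enable-auto-gomemlimit requires Thanos >= 0.35.
thanos compact \
  --data-dir=/var/thanos/compact \
  --objstore.config-file=/etc/thanos/objstore.yml \
  --enable-auto-gomemlimit \        # derive GOMEMLIMIT from the container/cgroup memory limit
  --compact.concurrency=1 \         # fewer concurrent group compactions => lower peak RSS
  --downsample.concurrency=1       # same idea for downsampling
```

With GOMEMLIMIT set (automatically here), the Go runtime triggers GC more aggressively as heap usage approaches the limit, which tends to flatten the short RSS spikes at the cost of extra CPU.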
What happened: We are planning to use Thanos for long-term storage, and during the process we are facing a few setbacks. As attached, we are seeing a 15 GB RAM spike in the Thanos compactor for 3.5 lakh (350k) time series. We plan to implement compaction and downsampling for 8M time series, which, extrapolating linearly, would result in the figures below:
- 15 GB RAM for 3.5 lakh (350k) time series
- 360 GB RAM for 8M time series
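For clarity, the extrapolation above is just a linear scaling of the single observed data point (a rough upper-bound heuristic; compactor memory is not guaranteed to scale linearly with series count). The numbers below reproduce it:

```python
# Linear extrapolation of compactor peak RAM from the reported observation.
# Assumes memory scales roughly linearly with active series count.
observed_ram_gb = 15          # observed peak RSS
observed_series = 350_000     # 3.5 lakh time series
target_series = 8_000_000     # planned cardinality

projected_gb = observed_ram_gb * target_series / observed_series
print(f"projected peak: {projected_gb:.0f} GB")  # ~343 GB; the report rounds to 360 GB
```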
360 GB RAM is really too much for short spikes. Below is the configuration we are using; despite setting the concurrency arguments, we are still seeing memory spikes. Please let us know: will this be fixed in future versions?
What you expected to happen: We expected RAM utilization to be much lower.
How to reproduce it (as minimally and precisely as possible): We are running two replicas of Prometheus with the Thanos sidecar embedded, writing to MinIO S3 object storage in the same cluster with the configuration below.
Thanos, Prometheus and Golang version used: Thanos 0.34.1, Prometheus 2.49.2, Go 1.21