ahsifer opened this issue 3 weeks ago
I had the same issue. I started Loki yesterday on a Google Cloud instance with 4 GB of memory, and today the pod crashed because it ran out of memory. There are approximately 800,000 log lines in my instance today.
I am attempting to add the following configuration to limits_config (a sketch of the resulting block is shown after the full config below):
max_query_bytes_read: 67108864 # 64MB
This is my loki-config.yaml without max_query_bytes_read:
auth_enabled: false
common:
  path_prefix: /loki # Specifies a base directory for Loki's storage files
server:
  http_listen_port: 3100
ingester:
  lifecycler:
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
  chunk_idle_period: 5m
  chunk_retain_period: 30s
  max_chunk_age: 1h
  chunk_target_size: 1048576 # 1MB
schema_config:
  configs:
    - from: 2024-10-24
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h
storage_config:
  tsdb_shipper:
    active_index_directory: /loki/tsdb/index # Index directory for tsdb
    cache_location: /loki/tsdb/cache # Cache directory for tsdb
  filesystem:
    directory: /loki/chunks # Chunk storage directory
compactor:
  working_directory: /loki/compactor
limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h
  allow_structured_metadata: true
  volume_enabled: true
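For reference, this is roughly how I intend to extend that block. max_query_bytes_read carries the 64 MB value from above; the other two limits are options I found in the limits_config documentation but have not tried yet, so treat this as an untested sketch:

limits_config:
  # existing options from the config above stay unchanged; the new limits are appended
  max_query_bytes_read: 67108864 # 64MB, reject queries that would read more data than this
  max_querier_bytes_read: 67108864 # assumed companion limit: per-querier cap on bytes read
  max_entries_limit_per_query: 5000 # cap on the number of log lines a single query may return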
Hi, I have a Kubernetes cluster with 1 master and 9 workers; each node has 4 CPU cores and 4 GB of RAM. I am facing an issue where, when the query frontend executes a query over a very large interval or with a large limit such as 400,000 log lines, the loki-query-frontend pod consumes a huge amount of memory and crashes when its memory limit is reached. Are there any suggestions to overcome this? I think it is related to the behavior of the query frontend, which merges the responses returned from the split sub-queries in memory (please correct me if I am wrong). I have read about the following parameter and think it might be useful; I am considering setting it in the Helm chart, based on the memory available to the pod (a rough sketch of what I have in mind is included below).
My current Helm chart deployment:
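To make this concrete, the rough shape of the change I have in mind is below. It is only a sketch: I am assuming the grafana/loki chart layout, where limits can be set under loki.limits_config and the query frontend's container resources under queryFrontend.resources, and the numbers are placeholders for a node with 4 GB of RAM; the exact keys would need to be verified against the chart version in use.

loki:
  limits_config:
    split_queries_by_interval: 30m    # smaller sub-queries mean smaller partial results to merge
    max_entries_limit_per_query: 5000 # cap the log lines returned per query instead of 400,000
    query_timeout: 2m
queryFrontend:
  resources:
    requests:
      memory: 1Gi
    limits:
      memory: 2Gi # keep this below the node's 4Gi so only the pod is OOM-killed, not the node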