Hi, I am running Loki in Simple Scalable mode via Docker Compose. I have 3 read and 3 write targets running, with nginx inside the compose stack distributing the traffic.
My question: when we set the query limit to 1k log lines, queries are fast, but as soon as we raise the limit above 1k, queries take a very long time and sometimes time out.
I have tried modifying the configs below, but none of them seemed to improve performance (note that `split_queries_by_interval` must be a valid Go duration such as `1h`; `1hr` will not parse):

```yaml
chunk_encoding: snappy
max_concurrent: 8
tsdb_max_query_parallelism: 1000
split_queries_by_interval: 1h
```
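For reference, these options belong to different sections of the Loki config file; if they are placed at the wrong level they are silently ignored. A sketch of where each one lives, per Loki's configuration reference (the values shown are examples from this post, not recommendations):

```yaml
ingester:
  chunk_encoding: snappy             # compression used when flushing chunks

querier:
  max_concurrent: 8                  # concurrent queries per querier process

limits_config:
  tsdb_max_query_parallelism: 1000   # per-query parallelism against the TSDB index
  split_queries_by_interval: 1h      # query-frontend time splitting (Go duration, e.g. "1h")
```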
Is there anything we can do to improve the read performance of Grafana Loki queries? Any help would be appreciated. Thank you.
Grafana OSS Version: { "commit": "252761264e22ece57204b327f9130d3b44592c01", "database": "ok", "version": "10.3.3" }
Loki Configs:
Docker Compose:
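The compose file itself was not included, but a minimal sketch of the topology described above (3 read + 3 write targets behind an nginx gateway) might look like the following. The image tags, file paths, and port are assumptions for illustration, not taken from the original post:

```yaml
# Hypothetical compose file matching the described setup; adjust image
# versions, config paths, and the nginx upstream config to your environment.
services:
  read:
    image: grafana/loki:2.9.4
    command: "-config.file=/etc/loki/config.yaml -target=read"
    volumes:
      - ./loki-config.yaml:/etc/loki/config.yaml
    deploy:
      replicas: 3

  write:
    image: grafana/loki:2.9.4
    command: "-config.file=/etc/loki/config.yaml -target=write"
    volumes:
      - ./loki-config.yaml:/etc/loki/config.yaml
    deploy:
      replicas: 3

  gateway:
    image: nginx:latest
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "3100:3100"
    depends_on:
      - read
      - write
```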