ercansayici opened 1 year ago
Me too, mine looks the same. But I currently think it is caused by a delay in my Promtail agent sending logs, not by a problem with the Loki server itself: the data levels out after about 5 minutes.
Looking forward to more discussion; it is also important to be able to identify this issue. Currently our recording rules are affected by it, resulting in inaccurate data.
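One common mitigation for recording rules skewed by late-arriving logs is to evaluate over a window shifted back by the expected ingestion delay. A minimal sketch of a Loki ruler rule file, assuming roughly the 5-minute lag described above (the group name, rule name, and `app` label are hypothetical):

```yaml
# Hypothetical Loki ruler recording rule. The `offset 5m` shifts the
# queried range back by 5 minutes so the rule never reads the most
# recent window, where logs may still be arriving.
groups:
  - name: app-log-rates
    rules:
      - record: app:log_lines:rate1m
        expr: sum by (app) (rate({app="myapp"}[1m] offset 5m))
```

The trade-off is that the recorded series lags reality by the offset, but its values no longer dip and then "back-fill" as delayed logs land.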
hi @liguozhong, my Loki is deployed in distributed mode using the Bitnami Helm chart, and as the agent I deployed Fluent Bit. What is your environment? Regards
Distributed mode + promtail / vector / Go HTTP SDK / Java HTTP SDK.
This issue is hard to track down. I noticed it 9 months ago, but I still can't figure it out.
I think I'm seeing a very similar issue. I only noticed it after switching from Promtail to Vector for shipping logs to Loki, but it's possible the issue was there before and I just didn't notice, since I wasn't looking as closely.
And like others said, it seems to happen to pods/apps generating a high volume of logs (hundreds to thousands of lines per second), but it does not affect other apps.
When I run a query in Grafana to view my app's logs, the log lines are returned from Loki and appear in the logs panel, but some of them are missing from the logs volume chart (there is a gap). After some time passes, this gap disappears. (Please see the screenshots below; I took them at 1-hour intervals.)
Notes:
Environment:
- Grafana version: 9.4.0
- Data source type & version: Loki 2.7.1, distributed deployment (using Memcached)
- OS Grafana is installed on: Linux
- User OS & Browser: Windows, Mac