grafana / loki

Like Prometheus, but for logs.
https://grafana.com/loki

"tailer dropped streams is reset" log spew when tailing with json parser #8635

Open jtackaberry opened 1 year ago

jtackaberry commented 1 year ago

Describe the bug

Running this works fine:

$ logcli query '{component="foobar"}' -f  

However, running this:

$ logcli query '{component="foobar"} | json' -f  

results in dozens of log lines per second from Loki that look like:

level=info ts=2023-02-25T02:07:22.644451828Z caller=tailer.go:230 msg="tailer dropped streams is reset" length=10

On the client side, the tail seems to work fine. It's just a constant stream of log lines on the server side.

To Reproduce

Steps to reproduce the behavior:

  1. Start Loki v2.7.4 (also seen on 2.7.3) in single binary mode shipping to S3 (these details may not be relevant, but that's my environment)
  2. Run logcli, selecting a valid stream for the environment but including a json parser in the LogQL query
  3. Check Loki logs

Expected behavior

Expected similar log output to queries without json parser.

Environment:

Test environment: Loki v2.7.4 in single binary mode with S3 object storage, as described in the reproduction steps above.

Screenshots, Promtail config, or terminal output

Here's Loki's full config:

auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096
  http_server_read_timeout: 300s
  http_server_write_timeout: 300s

common:
  path_prefix: /tmp/loki
  storage:
    s3:
      bucketnames: somes3bucketnamegoeshere
      region: us-east-1
      s3forcepathstyle: true
  replication_factor: 1
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

limits_config:
  ingestion_rate_mb: 240
  ingestion_burst_size_mb: 480
  max_query_length: 45d
  reject_old_samples: false

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      #object_store: filesystem
      object_store: s3
      schema: v12
      index:
        prefix: index_
        period: 24h

compactor:
  working_directory: /tmp/loki/compactor
  shared_store: s3

ruler:
  alertmanager_url: http://localhost:9093

rea1shane commented 3 months ago

Some logs are lost when tailing with a json parser; no logs are lost after removing the json parser.

These logs seem to be dropped. Why are they dropped?

It will drop the stream if the tailer is blocked or the queue is full.

I found this in the code; how can dropping logs be avoided?
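
For context, here is a minimal, hypothetical Go sketch (not the actual Loki source) of the behavior described above: a tailer pushes streams to a client through a bounded queue, drops streams when that queue is full, and later reports and resets the dropped list, which is where a log line like msg="tailer dropped streams is reset" length=N would come from. All names below are illustrative.

package main

import (
	"fmt"
	"sync"
	"time"
)

// Stream stands in for a batch of log lines selected by the tail query.
type Stream struct {
	Labels string
	Lines  []string
}

// Tailer forwards streams to a (possibly slow) client through a bounded channel.
type Tailer struct {
	sendChan chan Stream

	mu             sync.Mutex
	droppedStreams []Stream
}

func NewTailer(queueSize int) *Tailer {
	return &Tailer{sendChan: make(chan Stream, queueSize)}
}

// Send enqueues a stream for delivery. If the client is not keeping up and the
// queue is full, the stream is dropped and remembered instead of blocking.
func (t *Tailer) Send(s Stream) {
	select {
	case t.sendChan <- s:
	default: // queue full: drop rather than block the producer
		t.mu.Lock()
		t.droppedStreams = append(t.droppedStreams, s)
		t.mu.Unlock()
	}
}

// PopDroppedStreams returns the dropped streams accumulated so far and resets
// the list; this is the point where a server would emit a log line such as
// "tailer dropped streams is reset" with the list length.
func (t *Tailer) PopDroppedStreams() []Stream {
	t.mu.Lock()
	defer t.mu.Unlock()
	dropped := t.droppedStreams
	t.droppedStreams = nil
	if len(dropped) > 0 {
		fmt.Printf("level=info msg=\"tailer dropped streams is reset\" length=%d\n", len(dropped))
	}
	return dropped
}

func main() {
	t := NewTailer(2) // small queue so drops are easy to trigger

	// Producer: pushes streams faster than the consumer drains them.
	for i := 0; i < 10; i++ {
		t.Send(Stream{Labels: `{component="foobar"}`, Lines: []string{fmt.Sprintf("line %d", i)}})
	}

	// Slow consumer drains a couple of entries, then the dropped list is reported and reset.
	for i := 0; i < 2; i++ {
		select {
		case s := <-t.sendChan:
			fmt.Println("delivered", s.Labels, s.Lines)
		case <-time.After(10 * time.Millisecond):
		}
	}
	t.PopDroppedStreams()
}

In this sketch the drop is a deliberate design choice: the producer never blocks on a slow client, at the cost of losing entries, which matches the "blocked tailer or full queue" behavior described in the comments above.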