grafana / loki

Like Prometheus, but for logs.
https://grafana.com/loki
GNU Affero General Public License v3.0

Loki prints PPPPPPPPPPP ... in logs #7866

Closed rufreakde closed 1 year ago

rufreakde commented 1 year ago

Describe the bug The Loki logs contain strange, repeated output (long runs of the letter "p").

To Reproduce Steps to reproduce the behavior:

  1. Started Loki - Chart 3.6.0
    chart: loki
    version: 3.6.0
    repo: https://grafana.github.io/helm-charts
  2. Started Promtail (SHA or version) to tail '...'
    chart: promtail
    repo: https://grafana.github.io/helm-charts
    version: 6.4.0
  3. Query: none required. Everything works; the logs just show these odd lines:
    Querying loki for logs with query: http://loki-gateway.monitoring.svc.cluster.local./loki/api/v1/query_range?start=1670333346499312452&end=1670333366499312452&query=%7Bstream%3D%22stdout%22%2Cpod%3D%22loki-canary-2qvps%22%7D&limit=1000
    Querying loki for logs with query: http://loki-gateway.monitoring.svc.cluster.local./loki/api/v1/query_range?start=1670334247498591181&end=1670334267498591181&query=%7Bstream%3D%22stdout%22%2Cpod%3D%22loki-canary-2qvps%22%7D&limit=1000
    Querying loki for logs with query: http://loki-gateway.monitoring.svc.cluster.local./loki/api/v1/query_range?start=1670335147499375635&end=1670335167499375635&query=%7Bstream%3D%22stdout%22%2Cpod%3D%22loki-canary-2qvps%22%7D&limit=1000
    1670335332498405397 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
    1670335333498743871 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
    1670335334499178360 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
    1670335335498406484 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
    1670335336498457965 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
    1670335337499231652 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
    1670335338498795970 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
    1670335339498707385 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
    1670335340498451382 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
    1670335341498691094 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
    1670335342498634106 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
    1670335343498858525 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
    1670335344498924492 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
    1670335345499205720 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
    1670335346499323373 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
    1670335347498751555 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp
    1670335348499237270 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp

    Expected behavior No log lines consisting of repeated "p" characters.
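For anyone puzzled by the URL-encoded query in the log above: it decodes to a plain LogQL selector targeting the loki-canary pod's stdout stream. A minimal Python sketch of the decoding (the URL is copied from the log output; nothing else here is Loki-specific):

```python
# Decode the query_range URL from the log output to see the underlying
# LogQL selector and the time window the canary is reading back.
from urllib.parse import urlsplit, parse_qs

url = ("http://loki-gateway.monitoring.svc.cluster.local./loki/api/v1/query_range"
       "?start=1670333346499312452&end=1670333366499312452"
       "&query=%7Bstream%3D%22stdout%22%2Cpod%3D%22loki-canary-2qvps%22%7D&limit=1000")

# parse_qs percent-decodes the parameter values for us.
params = parse_qs(urlsplit(url).query)

print(params["query"][0])  # {stream="stdout",pod="loki-canary-2qvps"}
print(int(params["end"][0]) - int(params["start"][0]))  # 20000000000 (20 s in ns)
```

So the "Querying loki for logs" lines are the canary reading its own log lines back out of Loki over a 20-second window.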

Environment:

Screenshots, Promtail config, or terminal output

(screenshot attached: 2022-12-06 at 15:04:08)

Deployment looks good.

# values.yaml
loki:
  storage:
    bucketNames:
      chunks: chunks
      ruler: ruler
      admin: admin
    type: s3
    s3:
      s3: s3://MINIO_USER:MINIO_PW@minio1-hl.monitoring-db.svc.cluster.local.:9000/chunks
      s3ForcePathStyle: true
      insecure: true
  enterprise:
    enabled: false
  revisionHistoryLimit: 3
  auth_enabled: false
  commonConfig:
    path_prefix: /var/loki
    replication_factor: 1

  # https://grafana.com/docs/loki/latest/operations/storage/retention/#grafana-loki-storage-retention
  compactor:
    retention_enabled: true
    shared_store: s3
    working_directory: /tmp/loki/compactor
    retention_delete_delay: 2h
    retention_delete_worker_count: 150

  common:
    storage:
      type: s3
      s3:
        s3: s3://MINIO_USER:MINIO_PW@minio1-hl.monitoring-db.svc.cluster.local.:9000/chunks
        s3ForcePathStyle: true
        insecure: true
  ruler:
    wal:
      dir: /loki/ruler-wal
    storage:
      type: s3
      s3:
        s3: s3://MINIO_USER:MINIO_PW@minio1-hl.monitoring-db.svc.cluster.local.:9000/ruler
        s3ForcePathStyle: true
        insecure: true

  storage_config:
    boltdb_shipper:
      shared_store: s3
    aws:
      s3: s3://MINIO_USER:MINIO_PW@minio1-hl.monitoring-db.svc.cluster.local.:9000/chunks
      s3forcepathstyle: true
      bucketnames: chunks

  schema_config:
    configs:
      - from: 2020-09-07
        store: boltdb-shipper
        object_store: aws
        schema: v11
        index:
          prefix: loki_index_
          period: 24h

  # https://grafana.com/docs/loki/latest/operations/storage/retention/
  limits_config:
    enforce_metric_name: false
    reject_old_samples: true
    reject_old_samples_max_age: 168h
    split_queries_by_interval: 10m
    max_cache_freshness_per_query: 10m
    retention_period: 720h
    retention_stream:
    - selector: '{namespace="argocd"}'
      priority: 1
      period: 48h
    - selector: '{namespace="default"}'
      priority: 1
      period: 48h
    - selector: '{namespace="kube-node-lease"}'
      priority: 1
      period: 48h
    - selector: '{namespace="kube-public"}'
      priority: 1
      period: 48h
    - selector: '{namespace="kube-system"}'
      priority: 1
      period: 48h

  analytics:
    reporting_enabled: false

monitoring:
  rules:
    enabled: true
    alerting: true
    namespace: monitoring
    annotations: {}
    labels:
      matchLabels:
        prometheus: k8s
        role: alert-rules
    additionalGroups:
    - name: additional-loki-rules
      rules:
        - record: job:loki_request_duration_seconds_bucket:sum_rate
          expr: sum(rate(loki_request_duration_seconds_bucket[1m])) by (le, job)
        - record: job_route:loki_request_duration_seconds_bucket:sum_rate
          expr: sum(rate(loki_request_duration_seconds_bucket[1m])) by (le, job, route)
        - record: node_namespace_pod_container:container_cpu_usage_seconds_total:sum_rate
          expr: sum(rate(container_cpu_usage_seconds_total[1m])) by (node, namespace, pod, container)

  alerts:
    enabled: true
    namespace: monitoring
    annotations: {}
    labels: {}
  serviceMonitor:
    enabled: true
    namespace: monitoring
    namespaceSelector: {}
    annotations: {}
    labels: {}
    interval: null
    scrapeTimeout: null
    relabelings: []
    scheme: http
    tlsConfig: null

read:
  replicas: 2
  autoscaling:
    enabled: true
    size: 5Gi

write:
  replicas: 2
  persistence:
    size: 5Gi
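As an aside on the retention_stream section of the values above: per Loki's retention documentation, when several rules match a stream the highest-priority one supplies the period, and streams matching no rule fall back to the global retention_period. A minimal Python sketch of that selection logic, with plain label-equality matching as a simplified stand-in for real LogQL selector evaluation (RULES and retention_for are illustrative, not Loki's API):

```python
# Simplified model of Loki's per-stream retention selection:
# highest-priority matching rule wins, else the global retention_period.
DEFAULT_RETENTION = "720h"
RULES = [
    {"selector": {"namespace": "argocd"}, "priority": 1, "period": "48h"},
    {"selector": {"namespace": "kube-system"}, "priority": 1, "period": "48h"},
]

def retention_for(labels):
    # A rule matches when every label in its selector equals the stream's label.
    matches = [r for r in RULES
               if all(labels.get(k) == v for k, v in r["selector"].items())]
    if not matches:
        return DEFAULT_RETENTION
    return max(matches, key=lambda r: r["priority"])["period"]

print(retention_for({"namespace": "argocd"}))      # 48h
print(retention_for({"namespace": "monitoring"}))  # 720h
```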

DylanGuedes commented 1 year ago

Hey, thanks for reporting this.

Printing lines of only "p" characters is by design, so I'm closing this. If you'd like to request a feature related to it, feel free to open a new issue.

rufreakde commented 1 year ago

Good to know, so it's clear for everyone who googles this as well. Thanks for clarifying!

pandar00 commented 4 weeks ago

See this for more info https://grafana.com/docs/loki/latest/operations/loki-canary/#loki-canary

Loki Canary is a standalone app that audits the log-capturing performance of a Grafana Loki cluster. Loki Canary generates artificial log lines. ... The contents look something like this:

    1557935669096040040 ppppppppppppppppppppppppppppppppppppppppppppppppppppppppppp