grafana / loki

Like Prometheus, but for logs.
https://grafana.com/loki
GNU Affero General Public License v3.0

Grafana Loki retention of logs is only 1h #6513

Open MaestroJurko opened 2 years ago

MaestroJurko commented 2 years ago

Describe the bug

I can only query logs of my app for the last hour (now-1h); queries over older ranges return an empty result.

To Reproduce

Steps to reproduce the behavior:

  1. Install Helm Chart bitnami/grafana-loki
  2. Query for logs in a time range older than 1h

Expected behavior

To still be able to query logs of my app older than 1h. I used a time range of 6 hours and do not see any logs older than 1h.

Environment:

Helm Chart bitnami/grafana-loki with the custom loki.yaml below:

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-loki-custom
  namespace: monitoring
  labels:
    app.kubernetes.io/component: loki
    app.kubernetes.io/part-of: grafana-loki
    app.kubernetes.io/instance: grafana-loki
data:
  loki.yaml: |-
    auth_enabled: false

    server:
      http_listen_port: 3100

    distributor:
      ring:
        kvstore:
          store: memberlist

    memberlist:
      join_members:
        - grafana-loki-gossip-ring

    ingester:
      lifecycler:
        ring:
          kvstore:
            store: memberlist
          replication_factor: 1
      chunk_idle_period: 30m
      chunk_block_size: 262144
      chunk_encoding: snappy
      chunk_retain_period: 1m
      max_transfer_retries: 0
      wal:
        dir: /bitnami/grafana-loki/wal

    limits_config:
      enforce_metric_name: false
      reject_old_samples: true
      reject_old_samples_max_age: 168h
      max_cache_freshness_per_query: 10m
      split_queries_by_interval: 15m
      retention_period: 24h
      retention_stream:
        - selector: '{namespace="niftyswifty-app"}'
          priority: 1
          period: 24h

    schema_config:
      configs:
      - from: 2020-10-24
        store: boltdb-shipper
        object_store: filesystem
        schema: v11
        index:
          prefix: index_
          period: 24h

    storage_config:
      boltdb_shipper:
        shared_store: filesystem
        active_index_directory: /bitnami/grafana-loki/loki/index
        cache_location: /bitnami/grafana-loki/loki/cache
        cache_ttl: 168h
      filesystem:
        directory: /bitnami/grafana-loki/chunks
      index_queries_cache_config:
        memcached:
          batch_size: 100
          parallelism: 100
        memcached_client:
          consistent_hash: true
          addresses: dns+grafana-loki-memcachedindexqueries:11211
          service: http

    chunk_store_config:
      max_look_back_period: 0s
      chunk_cache_config:
        memcached:
          batch_size: 100
          parallelism: 100
        memcached_client:
          consistent_hash: true
          addresses: dns+grafana-loki-memcachedchunks:11211

    table_manager:
      retention_deletes_enabled: true
      retention_period: 672h

    query_range:
      align_queries_with_step: true
      max_retries: 5
      cache_results: true
      results_cache:
        cache:
          memcached_client:
            consistent_hash: true
            addresses: dns+grafana-loki-memcachedfrontend:11211
            max_idle_conns: 16
            timeout: 500ms
            update_interval: 1m

    frontend_worker:
      frontend_address: grafana-loki-query-frontend:9095

    frontend:
      log_queries_longer_than: 5s
      compress_responses: true
      tail_proxy_url: http://grafana-loki-querier:3100

    compactor:
      shared_store: filesystem

    ruler:
      storage:
        type: local
        local:
          directory: /bitnami/grafana-loki/conf/rules
      ring:
        kvstore:
          store: memberlist
      rule_path: /tmp/loki/scratch
      alertmanager_url: https://alertmanager.xx
      external_url: https://alertmanager.xx

---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: grafana-loki
  namespace: monitoring
spec:
  interval: 5m
  chart:
    spec:
      chart: grafana-loki
      version: "2.1.4"
      sourceRef:
        kind: HelmRepository
        name: bitnami
        namespace: flux-system
      interval: 1m
  values:
    loki:
      existingConfigmap: grafana-loki-custom
    tableManager:
      enabled: true
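
A note on the retention settings above, since the title mentions retention: with the boltdb-shipper index, the retention_period under limits_config is only enforced when retention is enabled on the compactor, while the table_manager retention settings apply to the older table-based stores. A minimal compactor sketch with retention turned on (the working_directory path is an assumption, not a value from the chart):

compactor:
  working_directory: /bitnami/grafana-loki/compactor   # assumed path, adjust to your volume
  shared_store: filesystem
  retention_enabled: true            # required for limits_config retention_period to take effect
  retention_delete_delay: 2h
  retention_delete_worker_count: 150

This is separate from the "only 1h queryable" symptom, but it keeps the deletion side of retention from silently doing nothing.
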
stale[bot] commented 2 years ago

Hi! This issue has been automatically marked as stale because it has not had any activity in the past 30 days.

We use a stalebot among other tools to help manage the state of issues in this project. A stalebot can be very useful in closing issues in a number of cases; the most common is closing issues or PRs where the original reporter has not responded.

Stalebots are also emotionless and cruel and can close issues which are still very relevant.

If this issue is important to you, please add a comment to keep it open. More importantly, please add a thumbs-up to the original issue entry.

We regularly review closed issues that have a stale label, sorted by thumbs-up.

We may also:

We are doing our best to respond, organize, and prioritize all issues, but it can be a challenging task; our sincere apologies if you find yourself at the mercy of the stalebot.

Hukha commented 1 year ago

I'm having this same problem. Any idea what could be causing this?

korenlev commented 1 year ago

This is a documented limitation (https://github.com/bitnami/charts/tree/main/bitnami/grafana-loki/#limitation), but it is a big letdown in the default deployment! I think there was a release that supported local storage. There are use cases where the local filesystem has more than enough storage and is faster than object storage, so supporting it would be desirable. For what it's worth, in my case this was a blocker for using Loki.
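
For anyone else hitting this with the distributed chart: because each pod only sees its own local filesystem, queriers cannot read chunks that ingesters have already flushed to their own local disks, so only data the ingesters still hold comes back, which would line up with only the most recent logs being visible. A common workaround is to point the index shipper and the chunk store at shared S3-compatible object storage instead. A minimal sketch, assuming an in-cluster MinIO; the endpoint, bucket, and credentials are placeholders, not values from the chart:

storage_config:
  boltdb_shipper:
    shared_store: s3
    active_index_directory: /bitnami/grafana-loki/loki/index
    cache_location: /bitnami/grafana-loki/loki/cache
  aws:                                     # replaces the filesystem block above
    endpoint: minio.monitoring.svc:9000    # placeholder MinIO service
    bucketnames: loki                      # placeholder bucket
    access_key_id: CHANGE_ME
    secret_access_key: CHANGE_ME
    s3forcepathstyle: true
    insecure: true

compactor:
  shared_store: s3

With every component reading and writing the same bucket, older chunks stay queryable up to the configured retention. Alternatively, a single-binary deployment keeps the filesystem option workable, since everything shares one volume.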

EvertonSA commented 11 months ago

@korenlev what if I'm using object storage and I still only have 1 hour of logs? Could that be possible?

EvertonSA commented 11 months ago

Never mind, upgrading to a newer version of Loki solved my problem. See here: https://github.com/grafana/loki/pull/10585
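
For completeness, with the Flux HelmRelease from the original report, the upgrade is just a chart version bump; the version below is a placeholder, so check which chart release actually ships a Loki build containing that fix:

spec:
  chart:
    spec:
      chart: grafana-loki
      version: "x.y.z"   # placeholder: pick a release bundling a fixed Loki version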