grafana / loki

Like Prometheus, but for logs.
https://grafana.com/loki

Ingester Clients Error when upgrading to v3.2.1 #14789

Open · L7RM opened 1 week ago

L7RM commented 1 week ago

Description:

When upgrading to Loki 3.2.1 I am getting this error message: `level=error ts=2024-11-06T13:00:09.7151177Z caller=ratestore.go:109 msg="error getting ingester clients" err="empty ring"`

Unsure if this is a bug, but this config worked in v3.0.1.
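For context, the "empty ring" error points at the ingester ring. Older single-binary example configs declare the ingester lifecycler ring explicitly instead of relying only on the `common` block; the block below is only a sketch (it is not part of my config further down) showing what that shape looks like with the same inmemory kvstore and single instance:

```yaml
# Sketch of an explicit ingester ring block (hypothetical, not in the config below)
ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
```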

To Reproduce

Steps to reproduce the behavior:

  1. Start Loki (v3.2.1)
  2. The service starts and runs successfully
  3. No logs are ingested because of the error message above

Expected behavior

Logs to be ingested.
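To sanity-check the ingestion path, a throwaway Promtail client pointed at the `http_listen_port` from the config below could be used; this is only a sketch with made-up paths and labels, not the actual client config from this deployment:

```yaml
# Hypothetical Promtail client used only to test ingestion; paths and labels are made up
server:
  http_listen_port: 9080
positions:
  filename: C:\promtail\positions.yaml
clients:
  - url: http://127.0.0.1:3100/loki/api/v1/push   # Loki's http_listen_port from the config below
scrape_configs:
  - job_name: ingest-test
    static_configs:
      - targets: [localhost]
        labels:
          job: ingest-test
          __path__: G:\logs\*.log
```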

Environment:

Screenshots, Promtail config, or terminal output

Config:

```yaml
auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096
  log_level: error
  grpc_server_max_concurrent_streams: 1000

common:
  instance_addr: 127.0.0.1
  path_prefix: C:\Program Files\Loki
  storage:
    filesystem:
      chunks_directory: G:\Loki\chunks
      rules_directory: G:\Loki\rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory

query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100

schema_config:
  configs:
    # Old TSDB schema below
    - from: 2024-02-28
      index:
        period: 24h
        prefix: index/
      object_store: filesystem
      schema: v12
      store: tsdb
    # Updated TSDB schema below
    - from: 2024-10-23
      index:
        period: 24h
        prefix: index/
      object_store: filesystem
      schema: v13
      store: tsdb

storage_config:
  tsdb_shipper:
    active_index_directory: G:\Loki\tsdb-index
    cache_location: G:\Loki\tsdb-cache
  filesystem:
    directory: G:\Loki\chunks

limits_config:
  # enforce_metric_name: false
  retention_period: 744h
  max_global_streams_per_user: 20000
  allow_structured_metadata: false

compactor:
  working_directory: C:\Program Files\Loki\compactor
  retention_enabled: true
  delete_request_store: filesystem
  delete_request_store_key_prefix: index/
  retention_delete_delay: 2h

query_scheduler:
  # the TSDB index dispatches many more, but each individually smaller, requests.
  # We increase the pending request queue sizes to compensate.
  max_outstanding_requests_per_tenant: 32768

querier:
  # Each querier component process runs a number of parallel workers to process queries simultaneously.
  # You may want to adjust this up or down depending on your resource usage
  # (more available cpu and memory can tolerate higher values and vice versa)
  max_concurrent: 16
  engine:
    max_look_back_period: 744h
```