vpetrushin opened 1 year ago
I think the problem may be with this line in your compactor config:
shared_store: aws
The documentation does not mention aws as a value for this parameter:
# The shared store used for storing boltdb files. Supported types: gcs, s3,
# azure, swift, filesystem, bos.
Either change it to s3, or remove it so that it defaults to what you have in common.storage.
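For example, the corrected compactor block could look something like this (paths here are illustrative, not taken from the reporter's config):

```yaml
compactor:
  working_directory: /loki/compactor
  shared_store: s3   # one of the documented values: gcs, s3, azure, swift, filesystem, bos
```

Alternatively, omit shared_store entirely and let it fall back to the store configured under common.storage.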
Same here. The compactor doesn't work with the following config:
compactor:
  working_directory: /loki/compactor
  shared_store: filesystem
  compaction_interval: 10m
  retention_enabled: true
  retention_delete_delay: 2h
  retention_delete_worker_count: 150
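As a side note on the settings above: retention_delete_delay controls how long a chunk marked for deletion survives before it is actually removed, while compaction runs happen every compaction_interval. A minimal sketch of that timing arithmetic (hypothetical timestamps; this is not Loki code, just the relationship between the two settings):

```python
from datetime import datetime, timedelta

# Values taken from the config above.
compaction_interval = timedelta(minutes=10)
retention_delete_delay = timedelta(hours=2)

# Suppose a compaction run marks a chunk for deletion at noon.
marked_at = datetime(2024, 1, 1, 12, 0)

# The chunk is only eligible for actual deletion after the delay has elapsed,
# i.e. roughly retention_delete_delay / compaction_interval runs later.
earliest_delete = marked_at + retention_delete_delay
runs_before_delete = retention_delete_delay // compaction_interval

print(earliest_delete)      # 2024-01-01 14:00:00
print(runs_before_delete)   # 12
```

So with these settings, deleted data lingers for up to two hours (about 12 compaction cycles) before it disappears from storage, which is worth remembering when verifying that retention is working.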
I am facing the same issue. I am using MinIO as storage and passing s3 in the "delete_request_store" parameter, which is supposed to configure the store for delete requests.
I have configured everything as per this document: https://grafana.com/docs/loki/latest/operations/storage/retention/
Below is my configuration:
config.yaml: |
  auth_enabled: false
  common:
    compactor_address: 'http://loki-backend:3100'
    path_prefix: /var/loki
    replication_factor: 3
    storage:
      s3:
        access_key_id: enterprise-logs
        bucketnames: chunks
        endpoint: loki-minio.monitoring.svc:9000
        insecure: true
        s3forcepathstyle: true
        secret_access_key: supersecret
  compactor:
    compaction_interval: 10m
    delete_request_store: s3
    retention_delete_delay: 5m
    retention_delete_worker_count: 10
    retention_enabled: true
    working_directory: /data/retention
  frontend:
    scheduler_address: ""
    tail_proxy_url: http://loki-querier.monitoring.svc.cluster.local:3100
  frontend_worker:
    scheduler_address: ""
  index_gateway:
    mode: simple
  ingester:
    chunk_encoding: snappy
  limits_config:
    max_cache_freshness_per_query: 10m
    query_timeout: 300s
    reject_old_samples: true
    reject_old_samples_max_age: 168h
    split_queries_by_interval: 15m
    volume_enabled: true
  memberlist:
    join_members:
    - loki-memberlist
  pattern_ingester:
    enabled: false
  querier:
    max_concurrent: 4
  query_range:
    align_queries_with_step: true
  ruler:
    storage:
      s3:
        bucketnames: ruler
      type: s3
  runtime_config:
    file: /etc/loki/runtime-config/runtime-config.yaml
  schema_config:
    configs:
    - from: "2024-04-01"
      index:
        period: 24h
        prefix: loki_index_
      object_store: s3
      schema: v13
      store: tsdb
  server:
    grpc_listen_port: 9095
    http_listen_port: 3100
    http_server_read_timeout: 600s
    http_server_write_timeout: 600s
  storage_config:
    boltdb_shipper:
      index_gateway_client:
        server_address: dns+loki-backend-headless.monitoring.svc.cluster.local:9095
    hedging:
      at: 250ms
      max_per_second: 20
      up_to: 3
    tsdb_shipper:
      index_gateway_client:
        server_address: dns+loki-backend-headless.monitoring.svc.cluster.local:9095
  tracing:
    enabled: true
But I don't see any hint of the compaction process in Grafana.
@sunidhi271 try setting the working directory to something like /var/loki/compactor
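Following that suggestion, the compactor section of the config above would look something like this (the path is just an example under the configured path_prefix; the other fields are unchanged from the original config):

```yaml
compactor:
  working_directory: /var/loki/compactor   # example path under path_prefix
  compaction_interval: 10m
  delete_request_store: s3
  retention_delete_delay: 5m
  retention_delete_worker_count: 10
  retention_enabled: true
```

This keeps the compactor's scratch space on the same persistent volume as the rest of the Loki data, rather than a separate /data mount that may not exist in the pod.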
I've tried to enable the compactor in a loki-distributed rollout. Every 10m the compactor crashes with a panic.
The compactor crashes with both retention_enabled: true and retention_enabled: false. Below is the config example with retention_enabled: true.
Config
Chart version: 0.69.16
Loki version: 2.8.2
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Compactor doing its job calmly w/o panic :)
Environment: