pyo-counting opened this issue 9 months ago
I tested with the values.yaml file below and checked whether compaction and retention were working.
```yaml
loki:
  auth_enabled: false
  limits_config:
    retention_period: 1d
  commonConfig:
    replication_factor: 2
  storage:
    bucketNames:
      chunks: kps-shr-tools-s3-loki-test
      ruler: kps-shr-tools-s3-loki-test
    s3:
      region: ap-northeast-2
  storage_config:
    boltdb_shipper:
      active_index_directory: /var/loki/data/index
      cache_location: /var/loki/data/boltdb-cache
      shared_store: s3
  compactor:
    working_directory: /var/loki/data/retention
    shared_store: s3
    retention_delete_delay: 30m
    compaction_interval: 10m
    retention_enabled: true
    retention_delete_worker_count: 150
serviceAccount:
  name: loki-sa
  imagePullSecrets: []
  annotations:
    eks.amazonaws.com/role-arn: (...skip...)
rules:
  enabled: false
  alerting: false
serviceMonitor:
  enabled: false
lokiCanary:
  enabled: false
write:
  replicas: 2
  persistence:
    storageClass: loki-sc
read:
  replicas: 2
  persistence:
    storageClass: loki-sc
backend:
  replicas: 2
  persistence:
    storageClass: loki-sc
gateway:
  enabled: false
extraObjects:
  - apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: loki-sc
    provisioner: efs.csi.aws.com
    parameters:
      provisioningMode: efs-ap
      fileSystemId: (...skip...)
      directoryPerms: "700"
      uid: '{{ .Values.loki.podSecurityContext.runAsUser }}'
      gid: '{{ .Values.loki.podSecurityContext.runAsGroup }}'
```
What did I miss? Please let me know.
Finally, I found the cause. The problem was that `-tsdb.shipper.shared-store.key-prefix` and `-compactor.shared-store.key-prefix` were set to different values.

I simply assumed the compactor used the `-compactor.shared-store.key-prefix` flag only for deletion, not for compaction and retention. But it doesn't: the compactor uses it for all of its work.

I hope this gets added to the official Loki documentation. Since there are separate options for the compactor and the writer, other people may share the same misconception I had.
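For reference, a minimal sketch of keeping the two settings aligned in Loki's own config (assuming Loki 2.x YAML keys, where `shared_store_key_prefix` is the YAML equivalent of the `key-prefix` CLI flags; `index/` is just an example value):

```yaml
# Both prefixes must point at the same object-storage path; if they differ,
# the compactor compacts/retains a tree that the shipper never writes to.
storage_config:
  tsdb_shipper:
    shared_store: s3
    shared_store_key_prefix: index/   # -tsdb.shipper.shared-store.key-prefix

compactor:
  shared_store: s3
  shared_store_key_prefix: index/     # -compactor.shared-store.key-prefix
  retention_enabled: true
```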
Hi, can you elaborate on this? Did you set each value separately?
@icanhazbeer That's right. I set the two runtime flags to different values:

- `-tsdb.shipper.shared-store.key-prefix`
- `-compactor.shared-store.key-prefix`

The result of the test: compaction and retention were performed only under the prefix given by `-compactor.shared-store.key-prefix`. (Before the test, I thought the compactor referred to `-tsdb.shipper.shared-store.key-prefix`.)

As a result, we can see that the two flags must always have the same value for the compactor to perform compaction and retention properly.
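To make the failure mode concrete: with mismatched prefixes, the bucket ends up holding two independent index trees (a hypothetical layout; `index/` and `loki-index/` are made-up example values):

```
s3://<bucket>/index/       # indexes uploaded by the shipper (-tsdb.shipper.shared-store.key-prefix)
s3://<bucket>/loki-index/  # tree the compactor compacts and retains (-compactor.shared-store.key-prefix)
```

The compactor never sees the indexes the shipper actually wrote, so retention appears to do nothing.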
**Describe the bug**
Compactor retention does not work with the TSDB shipper and AWS S3 object storage.

**To Reproduce**
Steps to reproduce the behavior:
1. Install the Helm chart with a custom values file (`ssd-values.yaml`)
2. Send logs with the labels `{environment="dev" ...}` (tenant id: `kurlypay`)
**Expected behavior**
Logs matching `{environment="dev"}` are not marked for deletion by global retention after 1d (retention_period) + 2h (retention_delete_delay): a log written at 2024-01-26 16:08:11.189+0900 was still present after 2024-01-27 18:08:11.189+0900 (retention period + retention delete delay).

**Environment:**
**Screenshots, Promtail config, or terminal output**
The logs were still queryable as of 2023-01-28 00:21+0900.
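The deletion-time arithmetic from the expected-behavior section above can be sketched as plain datetime math (timestamps taken from the report; this is not Loki code):

```python
from datetime import datetime, timedelta

# A chunk should become deletable roughly at:
#   write time + retention_period + retention_delete_delay
written = datetime.fromisoformat("2024-01-26T16:08:11.189+09:00")
retention_period = timedelta(days=1)         # retention_period: 1d
retention_delete_delay = timedelta(hours=2)  # retention_delete_delay: 2h

deletable_after = written + retention_period + retention_delete_delay
print(deletable_after.isoformat())  # 2024-01-27T18:08:11.189000+09:00
```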