Open vpotap opened 3 months ago
Hi, this is my `values.yaml`:
```yaml
loki:
  auth_enabled: false
  commonConfig:
    replication_factor: 1
  ingester:
    chunk_encoding: snappy
  limits_config:
    max_cache_freshness_per_query: 10m
    max_entries_limit_per_query: 5000
    query_timeout: 300s
    reject_old_samples: true
    reject_old_samples_max_age: 31d
    split_queries_by_interval: 15m
    volume_enabled: true
  querier:
    max_concurrent: 2
  schemaConfig:
    configs:
      - from: "2024-04-01"
        index:
          period: 24h
          prefix: loki_index_
        object_store: s3
        schema: v13
        store: tsdb
  storage:
    bucketNames:
      admin: loki
      chunks: loki
      ruler: loki
    type: s3
    s3:
      s3: loki
      region: eu-west-2
      secretAccessKey: secret
      accessKeyId: access
      s3ForcePathStyle: true
      insecure: false
  storage_config:
    aws:
      bucketnames: loki
      #s3: s3://eu-west-2
      s3: s3://access:secret@eu-west-2
    tsdb_shipper:
      active_index_directory: /loki/index
      cache_location: /loki/index_cache
  tracing:
    enabled: true
deploymentMode: SingleBinary
singleBinary:
  extraEnv:
    - name: GOMEMLIMIT
      value: 3750MiB
  replicas: 1
  resources: {}
  persistence:
    enabled: false
    storageClass: "gp2"
#serviceAccount:
#  create: false
#  name: loki-sa
chunksCache:
  # -- Specifies whether memcached based chunks-cache should be enabled
  enabled: false #true
  resources: {}
  writebackSizeLimit: 10MB
lokiCanary:
  enabled: false
resultsCache:
  enabled: false
test:
  enabled: false
backend:
  replicas: 0
bloomCompactor:
  replicas: 0
bloomGateway:
  replicas: 0
compactor:
  replicas: 0
distributor:
  replicas: 0
indexGateway:
  replicas: 0
ingester:
  replicas: 0
minio:
  enabled: false
querier:
  replicas: 0
queryFrontend:
  replicas: 0
queryScheduler:
  replicas: 0
read:
  replicas: 0
write:
  replicas: 0
```
If I disable persistence, I keep getting the following errors:
```
init compactor: mkdir /var/loki: read-only file system
error initialising module: compactor
github.com/grafana/dskit/modules.(*Manager).initModule
	/src/loki/vendor/github.com/grafana/dskit/modules/modules.go:138
github.com/grafana/dskit/modules.(*Manager).InitModuleServices
	/src/loki/vendor/github.com/grafana/dskit/modules/modules.go:108
github.com/grafana/loki/v3/pkg/loki.(*Loki).Run
	/src/loki/pkg/loki/loki.go:458
main.main
	/src/loki/cmd/loki/main.go:129
runtime.main
	/usr/local/go/src/runtime/proc.go:271
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1695
level=info ts=2024-09-04T17:58:41.669415031Z caller=main.go:126 msg="Starting Loki" version="(version=release-3.1.x-89fe788, branch=release-3.1.x, revision=89fe788d)"
level=info ts=2024-09-04T17:58:41.669444085Z caller=main.go:127 msg="Loading configuration file" filename=/etc/loki/config/config.yaml
level=info ts=2024-09-04T17:58:41.670815916Z caller=server.go:352 msg="server listening on addresses" http=:3100 grpc=:9095
level=info ts=2024-09-04T17:58:41.672152478Z caller=memberlist_client.go:435 msg="Using memberlist cluster label and node name" cluster_label= node=loki-0-0d0ff337
level=info ts=2024-09-04T17:58:41.672894546Z caller=memberlist_client.go:541 msg="memberlist fast-join starting" nodes_found=1 to_join=4
level=error ts=2024-09-04T17:58:41.672948394Z caller=log.go:216 msg="error running loki" err="init compactor: mkdir /var/loki: read-only file system\nerror initialising module: compactor\ngithub.com/grafana/dskit/modules.(*Manager).initModule\n\t/src/loki/vendor/github.com/grafana/dskit/modules/modules.go:138\ngithub.com/grafana/dskit/modules.(*Manager).InitModuleServices\n\t/src/loki/vendor/github.com/grafana/dskit/modules/modules.go:108\ngithub.com/grafana/loki/v3/pkg/loki.(*Loki).Run\n\t/src/loki/pkg/loki/loki.go:458\nmain.main\n\t/src/loki/cmd/loki/main.go:129\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:271\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1695"
```
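The error happens because the compactor tries to create `/var/loki` inside a read-only container filesystem once persistence is off. A possible workaround (a sketch only, not verified against every chart version; `loki-data` is a name I made up) is to mount a writable `emptyDir` at `/var/loki` so the compactor and WAL can still create their directories, at the cost of losing that data on pod restart:

```yaml
singleBinary:
  persistence:
    enabled: false
  # Writable scratch volume replacing the PVC; data here is
  # ephemeral and discarded when the pod is rescheduled.
  extraVolumes:
    - name: loki-data
      emptyDir: {}
  extraVolumeMounts:
    - name: loki-data
      mountPath: /var/loki
```

Alternatively, if your chart version exposes it, pointing `loki.compactor.working_directory` at an already-writable path achieves the same thing for the compactor specifically.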
@pen-pal, the initial question was about combining filesystem and S3 in the schema_config. Is it possible to use old logs from the filesystem while simultaneously storing new logs in S3? Currently, S3 works fine in my configuration without the filesystem setup, and the filesystem also works without the S3 setup.
Could you maybe share your values.yaml for reference, @vpotap?
@vpotap, can you share your config so I can use it as a reference for S3 without persistence?
Questions have a better chance of being answered if you ask them on the community forums.
Context: According to the Loki documentation, it seems possible to combine filesystem and s3 in the schema_config as shown in the example below:
Problem: I've tried multiple configurations based on the above setup, but after adding the S3 configuration, old logs stored in filesystem/boltdb-shipper are no longer visible in Grafana.
Question: Does this configuration work out of the box? Should I use a Python script or some other method to migrate the old logs from the filesystem to S3 first?
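For reference, the pattern the docs describe is a `schema_config` with multiple period entries: the old filesystem period stays in the list so queries over old time ranges still hit the local store, and a new period starting on a chosen date writes to S3. A sketch of that shape (dates and prefixes here are illustrative, not taken from the thread; `storage_config` must still define both the `filesystem` and `aws` sections):

```yaml
schema_config:
  configs:
    # Old data remains queryable from the local filesystem store.
    - from: "2023-01-01"
      store: boltdb-shipper
      object_store: filesystem
      schema: v12
      index:
        prefix: index_
        period: 24h
    # From this date onward, new chunks and index go to S3.
    - from: "2024-04-01"
      store: tsdb
      object_store: s3
      schema: v13
      index:
        prefix: loki_index_
        period: 24h
```

No migration is required for this split to work in principle, since Loki routes each query to the store that owned the queried time range; if old logs disappear after adding the S3 period, the usual culprit is the old period entry being removed or its `from` date changed.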