Open wolf666666 opened 6 months ago
Resolved by setting `-query-scheduler.use-scheduler-ring=false`. I printed the config with `-print-config-stderr` and found that the default value of `query-scheduler.use-scheduler-ring` is `true`; I don't know why.
Is disabling `-query-scheduler.use-scheduler-ring` really a resolution?
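For reference, the CLI flag above corresponds to this YAML (a minimal sketch of just the relevant block, to be merged into the full Loki config):

```yaml
# Equivalent YAML for -query-scheduler.use-scheduler-ring=false
query_scheduler:
  use_scheduler_ring: false
```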
It looks like all rings are empty, even though my memberlist config is:
```yaml
memberlist:
  node_name: ""
  randomize_node_name: false
  stream_timeout: 1m0s
  retransmit_factor: 4
  pull_push_interval: 2m0s
  gossip_interval: 200ms
  gossip_nodes: 3
  gossip_to_dead_nodes_time: 2m30s
  dead_node_reclaim_time: 5m0s
  compression_enabled: true
  advertise_addr: ""
  advertise_port: 7946
  cluster_label: ""
  cluster_label_verification_disabled: false
  join_members:
  - lokiread-k8s1:7946
  - lokiread-k8s2:7946
  - lokiread-k8s3:7946
```
and
```yaml
index_gateway:
  mode: ring
  ring:
    kvstore:
      store: memberlist
```
The ring page at http://lokiread-k8s1:3100/indexgateway/ring is empty. A config like this was just fine in 2.x.x.
The same seems to happen with the scheduler ring. Why is the solution to disable it?
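If the scheduler ring is the only one misbehaving, an alternative to disabling it is making sure it uses the same KV store as the other rings. A hedged sketch, assuming memberlist is the intended backend:

```yaml
# Sketch: keep the scheduler ring, but back it with memberlist
# like the other rings, instead of disabling it.
query_scheduler:
  use_scheduler_ring: true
  scheduler_ring:
    kvstore:
      store: memberlist
```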
Ah, it does not work with `target=read`.
I deployed Loki v3.0.0 on my k3s cluster and this problem occurs, but after a short while (maybe 15-30 seconds) it becomes healthy. Why?
Me too. I need help configuring Grafana Loki on an IPv6 EKS cluster.
```
{"caller":"ring_watcher.go:56","component":"querier-scheduler-worker","err":"empty ring","level":"error","msg":"error getting addresses from ring","ts":"2024-06-19T14:59:08.87584763Z"}
```
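That error means the querier's scheduler worker cannot find any scheduler instance in the ring. A hedged workaround is to bypass ring-based discovery and point workers at the scheduler directly; the address below is a hypothetical example, not from the original report:

```yaml
frontend_worker:
  # Hypothetical scheduler address; replace with your
  # query-scheduler's gRPC host:port.
  scheduler_address: query-scheduler.loki.svc:9095
```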
How I solved it with the simple scalable install:
```yaml
loki:
  auth_enabled: false
  commonConfig:
    ring:
      kvstore:
        # Backend storage to use for the ring. Supported values are: consul, etcd,
        # inmemory, memberlist, multi.
        # CLI flag: -common.storage.ring.store
        store: memberlist
```
Are you running it on Kubernetes / Docker / Swarm? For some reason I have a problem running SSD with memberlist in a Swarm environment :/
Describe the bug: After upgrading Loki (to 2.9.3 or 2.9.5), the loki-querier component constantly emits error logs:
```
Mar 05 10:57:16 seliius29524 loki-2.9.3[3950949]: level=error ts=2024-03-05T09:57:16.116271811Z caller=ring_watcher.go:56 component=querier-scheduler-worker msg="error getting addresses from ring" err="empty ring"
Mar 05 10:57:19 seliius29524 loki-2.9.3[3950949]: level=error ts=2024-03-05T09:57:19.115903431Z caller=ring_watcher.go:56 component=querier-scheduler-worker msg="error getting addresses from ring" err="empty ring"
```
I did not configure a query scheduler; the query-frontend is configured with `frontend.downstream-url`.
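If the frontend really forwards via `downstream_url`, queriers don't need a scheduler at all, so the scheduler ring can be disabled. A minimal sketch of the relevant blocks; the URL is a placeholder, not from the original report:

```yaml
frontend:
  # Placeholder downstream querier address.
  downstream_url: http://localhost:3100
query_scheduler:
  use_scheduler_ring: false
```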
To Reproduce: Steps to reproduce the behavior:
Expected behavior: no such error logs should appear.
Environment: