sharathfeb12 opened this issue 2 years ago
I have the same issue: when an ingester scales down, it doesn't leave the ring and is marked as unhealthy in the ring.
+1, we are facing the same issue.
level=warn ts=2023-09-25T17:13:55.410815291Z caller=logging.go:86 traceID=abc orgID=fake msg="POST /loki/api/v1/push (500) 204.393µs Response: \"at least 2 live replicas required, could only find 1 - unhealthy instances: 1.2.3.4:9095\\n\" ws: false; Content-Length: 6243; Content-Type: application/x-protobuf; User-Agent: promtail/2.8.4; X-B3-Parentspanid: addd; X-B3-Sampled: 0; X-B3-Spanid: 8cf855c24430fce7; X-B3-Traceid: sss; X-Envoy-Attempt-Count: 1; X-Envoy-External-Address: 136.147.62.8; X-Forwarded-Client-Cert: Hash=abc;Cert=\"-----BEGIN%20CERTIFICATE-----
+1 on this. I have also made the following changes. For context, we are running Loki OSS with 3 replicas and a replication factor of 2.
memberlist.rejoin_interval: 30s
wal.enabled: false
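In case it helps anyone else, here is a minimal sketch of where those two settings sit in the Loki YAML config; the `replication_factor` entry just documents our own 3-replica / RF 2 setup and is illustrative:

```yaml
memberlist:
  # Periodically re-resolve the join addresses and rejoin the memberlist cluster.
  rejoin_interval: 30s

ingester:
  wal:
    # Disable the write-ahead log (trade-off: unflushed in-memory data is
    # lost if an ingester crashes).
    enabled: false
  lifecycler:
    ring:
      # Our setup: 3 ingester replicas, replication factor 2.
      replication_factor: 2
```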
+1 on this. I am using loki-distributed. Is there a fix for this?
level=warn ts=2024-06-25T08:28:15.471738932Z caller=logging.go:123 traceID=49e7eefb1868e86a orgID=fake msg="POST /loki/api/v1/push (500) 401.433µs Response: \"at least 1 live replicas required, could only find 0 - unhealthy instances: 172.39.3.250:9095\\n\" ws: false; Accept: */*; Connection: close; Content-Length: 311; Content-Type: application/json; User-Agent: curl/7.81.0; "
Describe the bug
I have enabled autoforget_unhealthy for the ingesters. When an ingester pod starts up, its log confirms that the flag is enabled.
It then still complains about a problematic instance and asks me to clean it up manually via the /ring endpoint.
To Reproduce
Steps to reproduce the behavior: restart the ingesters after setting the autoforget_unhealthy flag to true.
Expected behavior
The unhealthy ingesters should be forgotten automatically.
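For reference, a minimal sketch of how the flag is enabled: autoforget_unhealthy sits directly under the ingester block. The memberlist-backed ring shown here is an assumption based on the other reports in this thread, not taken from the original config:

```yaml
ingester:
  # Automatically forget ring members whose heartbeat has gone stale,
  # instead of requiring a manual "Forget" on the /ring page.
  autoforget_unhealthy: true
  lifecycler:
    ring:
      kvstore:
        # Assumption: the ring is backed by memberlist.
        store: memberlist
```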
Environment:
Config: