someStrangerFromTheAbyss opened this issue 4 days ago
Deploy with the following: 3 read pods, 1 backend pod, 1 write pod, and 1 gateway pod.
Curious why you aren't following the recommendations from the documentation, which call for 3 read, 3 write, and 3 backend components? My suspicion is that having only a single Query Scheduler and a single Index Gateway may be part of the problem here, as those support the read components during querying.
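For reference, a minimal sketch of scaling the components to the documented counts with the grafana/loki Helm chart. The release name `loki` and the exact values keys are assumptions; check your chart version's `values.yaml` before applying:

```shell
# Assumed release name "loki" and the grafana/loki chart's Simple Scalable
# replica keys; verify against your chart version before running.
helm upgrade loki grafana/loki \
  --set read.replicas=3 \
  --set write.replicas=3 \
  --set backend.replicas=3
```

With 3 backend pods, the Query Scheduler and Index Gateway (which run inside the backend target) are no longer single points of failure during querying.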
Yeah, I just added that last Friday to our DEV/QA env and the bad read pod problems seem to disappear. I haven't updated the issue since I want client confirmation that the fix works.
However, shouldn't it work even with one backend pod? Also, it seems many users have the 499 bug mentioned in the other issues; I assume they have the same problem. If it's necessary for Loki to have more than 1 backend pod, shouldn't the app stop working if it has only 1 backend instance?
> If it's necessary for Loki to have more than 1 backend pod, shouldn't the app stop working if it has only 1 backend instance?
Not necessarily. From my understanding of Kubernetes (I'm a technical writer, not a developer or SysAdmin), you might be scaling up or down, upgrading, or pods might be restarting. The idea of having more than one pod is to allow for fluctuations in the number of running pods, so that the system stays up if one pod goes down for some reason. It also helps balance the load if there's more work than one pod can handle.
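To make the restart window concrete, here is a hedged sketch of how you could observe it. The resource name `loki-backend` and the component label are assumptions based on common grafana/loki chart conventions; substitute your actual names:

```shell
# Hypothetical resource/label names; adjust to your release.
# With a single backend replica, a rolling restart leaves a window
# in which zero backend pods are Ready, and queries can time out:
kubectl rollout restart statefulset/loki-backend
kubectl get pods -l app.kubernetes.io/component=backend -w
# With 3 replicas, pods restart one at a time, so at least two
# backends stay Ready throughout the rollout.
```

This is why a 1-replica backend can "work" most of the time yet still cause intermittent 504/499 errors around restarts or node drains.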
When deploying Grafana Loki in Simple Scalable mode with multiple read pods in a Kubernetes cluster, you sometimes end up with a Loki read pod that cannot execute any queries. This problem shows up in Grafana as a 504 Gateway Timeout, similar to this issue, and is also linked to the 499 nginx issue found here
Expected behavior: Grafana Loki should deploy without problems and should not end up with "tainted" read pods for no reason
Environment:
How to replicate: