Describe the bug
The loki read pod log shows:
level=info ts=2024-01-11T10:17:56.155840621Z caller=loki.go:505 msg="Loki started"
level=warn ts=2024-01-11T10:17:56.165650955Z caller=dns_resolver.go:225 msg="failed DNS A record lookup" err="lookup query-scheduler-discovery.moni-loki-agpl.svc.cluster.local. on 100.108.0.10:53: read udp 100.104.2.84:44871->100.108.0.10:53: i/o timeout"
level=info ts=2024-01-11T10:17:57.554422974Z caller=frontend.go:316 msg="not ready: number of schedulers this worker is connected to is 0"
level=warn ts=2024-01-11T10:18:06.157005143Z caller=dns_resolver.go:225 msg="failed DNS A record lookup" err="lookup query-scheduler-discovery.moni-loki-agpl.svc.cluster.local. on 100.108.0.10:53: read udp 100.104.2.84:33803->100.108.0.10:53: i/o timeout"
level=info ts=2024-01-11T10:18:07.555110388Z caller=frontend.go:316 msg="not ready: number of schedulers this worker is connected to is 0"
It looks like the loki read pod encounters a network issue: DNS lookups for the query-scheduler-discovery service time out, so the frontend worker never connects to a scheduler.
If I disable the network policy with networkPolicy.enabled=false, the issue doesn't occur.
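For context, UDP traffic to the cluster DNS resolver on port 53 timing out is the symptom you would expect if an egress NetworkPolicy does not permit DNS. A minimal sketch of the kind of egress rule that would allow it is below; the policy name and pod labels are illustrative assumptions, not taken from the chart:

```yaml
# Hypothetical policy allowing the read pods to reach cluster DNS on port 53.
# The pod selector labels are assumptions for illustration only.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: loki-allow-dns-egress
  namespace: moni-loki-agpl
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: loki
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {}   # any namespace, so the cluster DNS pods are reachable
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```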
P.S. another minor issue: after installing with Helm, the "Installed components" notes always say grafana-agent-operator is installed, even though I actually disabled the grafana agent in the values (see below).
To Reproduce
Steps to reproduce the behavior:
HELM values:
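The actual values used are not reproduced here; as a rough illustration, the two overrides described above would look something like the following. networkPolicy.enabled is the key named in the report; the keys used to disable the grafana agent are assumptions and may differ between chart versions:

```yaml
# Illustrative values fragment, not the reporter's actual file.
networkPolicy:
  enabled: true          # the DNS timeouts appear with this on; false avoids them

# Assumed location of the grafana agent toggles in the loki chart's values.
monitoring:
  selfMonitoring:
    enabled: false
    grafanaAgent:
      installOperator: false
```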
Expected behavior
A clear and concise description of what you expected to happen.
Environment:
Screenshots, Promtail config, or terminal output
If applicable, add any output to help explain your problem.