Open · den-is opened this issue 4 months ago
Ran into the same problem...
The ServiceMonitor scrapes all "http-metrics" ports, and the gateway's port has (mistakenly?) been named the same instead of {{ include "loki.gatewayFullname" . }}
(can't speak to the enterprise version, though).
I fixed it here
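For illustration, here is a rough sketch of the mismatch, assuming the chart's ServiceMonitor selects endpoints by port name; the names, labels and port numbers below are made up and not copied from the chart's actual templates:

```yaml
# Illustrative sketch only; the chart's real templates differ in detail.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: loki                      # hypothetical name
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: loki
  endpoints:
    - port: http-metrics          # matched by port *name* on every selected Service
      path: /metrics
---
apiVersion: v1
kind: Service
metadata:
  name: loki-gateway
  labels:
    app.kubernetes.io/name: loki
spec:
  ports:
    - name: http-metrics          # same name as the Loki components' metrics ports,
      port: 80                    # so the nginx gateway gets scraped too, although
      targetPort: http            # it serves no /metrics; renaming this port avoids that
```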
@strowi thanks! I'm not 100% sure this is exactly the right fix for my exact issue, just my IMHO. I don't like it when ports are not named after generic protocol names (http, grpc, etc., with whatever prefixes/suffixes).
But it 100% fits the original port-name issue https://github.com/grafana/loki/issues/12963, and your naming strategy can probably be discussed there.
@den-is totally agree! I didn't know about the original issue. I just had to quick-fix it in our deployment and used the service-name/var from the ingress definition ;)
I created a PR to fix this. As a temporary workaround you could just set this label on the service:
```yaml
gateway:
  service:
    labels:
      prometheus.io/service-monitor: "false"
```
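If I read the chart right, this works because the generated ServiceMonitor selects Services with a matchExpressions clause that skips anything labelled prometheus.io/service-monitor: "false". Roughly like the fragment below, which is a sketch and not the chart's exact template:

```yaml
# Sketch of the selector the label workaround relies on.
spec:
  selector:
    matchExpressions:
      - key: prometheus.io/service-monitor
        operator: NotIn
        values:
          - "false"
```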
We've just upgraded to the latest Loki version (it was a pain, by the way). The default ServiceMonitor still tries to scrape metrics from the non-existent metrics endpoint of the gateway.
Can you please provide the endpoint or update the ServiceMonitor?
This issue has been open since Jun 12. What is keeping you from fixing it?
**Describe the bug**
Loki's default ServiceMonitor makes Prometheus scrape loki-gateway's non-existent `/metrics` endpoint.

**To Reproduce**
Steps to reproduce the behavior (a minimal values sketch follows after this list):
1. Deploy the Loki Helm chart with the `gateway` enabled
2. Have `kube-prometheus-stack` installed
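A minimal values sketch for the reproduction above; the key names are assumed from the Loki chart and may vary by chart version, and kube-prometheus-stack is installed separately:

```yaml
# values.yaml (sketch): gateway enabled plus the chart-managed ServiceMonitor,
# scraped by an existing kube-prometheus-stack installation.
gateway:
  enabled: true
monitoring:
  serviceMonitor:
    enabled: true
```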
**Expected behavior**
- No red/down targets in Prometheus.
- No 404 errors in loki-gateway logs.
- Ignore loki-gateway in the ServiceMonitor, or fix the `/metrics` endpoint for loki-gateway.
**Environment:**

**Screenshots, Promtail config, or terminal output**

loki-gateway log message:

curl output from loki-gateway:
```
* Host localhost:8080 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
*   Trying [::1]:8080...
* Connected to localhost (::1) port 8080
< 404 Not Found
* Connection #0 to host localhost left intact
```
(screenshot: Prometheus failed targets)