Open mseiwald opened 1 year ago
When setting `--monitoring.metrics-interval` to a large value, e.g. `24h`, the metrics show up in the `/metrics` endpoint, but Prometheus does not accept them because they are too old:
```
ts=2022-09-22T07:10:12.622Z caller=scrape.go:1600 level=warn component="scrape manager" scrape_pool=serviceMonitor/logging-monitoring/prometheus-prometheus-stackdriver-exporter/0 target=http://10.140.5.111:9255/metrics msg="Error on ingesting samples that are too old or are too far into the future" num_dropped=113
```
You can try the following, then adjust with `offset` when drawing the graph:

```yaml
serviceMonitor:
  honorTimestamps: false
```
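For context, in a Helm values file for the exporter this setting might sit alongside the other ServiceMonitor options; the surrounding keys below are illustrative assumptions, not taken from this thread:

```yaml
# Illustrative values-file sketch; only honorTimestamps is from the thread,
# the other keys (enabled, interval) are common ServiceMonitor options.
serviceMonitor:
  enabled: true
  interval: 5m
  # Let Prometheus assign its own scrape timestamps instead of the
  # (possibly hours-old) timestamps exposed by the exporter.
  honorTimestamps: false
```

With `honorTimestamps: false`, the samples are ingested at scrape time, so you then compensate with `offset` in your queries when the underlying data point is older than it appears.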
A bit of an oldie, but I had the same issue today. Setting a larger interval does help, yes, but it is better to use `--monitoring.metrics-ingest-delay`. This flag reads the ingest delay from each metric's metadata and uses it as an offset when requesting metrics from your GCP project.

This is all due to BigQuery metrics (https://cloud.google.com/monitoring/api/metrics_gcp#gcp-bigquery) having a delay between sampling and visibility, and each metric has a different one. So if you don't set the offset manually (or use the ingest delay) for those BigQuery metrics, the exporter returns no data.
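The effect of the ingest delay on the query window can be sketched like this. This is a simplified illustration, not the exporter's actual code; the function and parameter names are hypothetical:

```python
from datetime import datetime, timedelta, timezone

def query_window(now: datetime, interval: timedelta,
                 ingest_delay: timedelta,
                 offset: timedelta = timedelta(0)) -> tuple[datetime, datetime]:
    """Compute a [start, end] window for a Cloud Monitoring time-series
    request, shifted back so samples have had time to become visible."""
    end = now - ingest_delay - offset
    start = end - interval
    return start, end

# Example: a BigQuery metric with a 3-hour ingest delay. Without the
# shift, the window [11:55, 12:00] would return no data yet.
now = datetime(2022, 9, 22, 12, 0, tzinfo=timezone.utc)
start, end = query_window(now, timedelta(minutes=5), timedelta(hours=3))
# end is 09:00 UTC, start is 08:55 UTC
```

The point is that the delay is per metric, so a single hand-tuned `offset` can't cover all BigQuery metrics at once, while the ingest-delay flag applies each metric's own delay.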
Hello,
I'm running the prometheus stackdriver exporter to collect Dataflow and BigQuery metrics. The strange thing is that Dataflow metrics (and others) work fine, but BigQuery metrics simply do not show up. No BigQuery metrics appear in the Prometheus search, and

```
curl localhost:9255/metrics | grep bigquery
```

on the stackdriver exporter pod also doesn't yield any results.

stackdriver exporter args:

stackdriver exporter logs (debug):
Any ideas?