I am trying to export a metric with labels created by the kubernetes_metadata filter.
I'm using the following filter and match:
<filter nginx.error.upstream.timeout>
  @type parser
  key_name log
  <parse>
    @type regexp
    expression upstream: \"http://(?<upstream>.*?)/.*?\"
  </parse>
</filter>
<match nginx.error.upstream.*>
  @type prometheus
  <metric>
    name nginx_upstream_error
    type counter
    desc The total number of Nginx upstream timeouts.
    <labels>
      error ${tag_parts[3]}
      upstream $.upstream
      origin-pod $.kubernetes.pod_name
      origin-namespace $.kubernetes.namespace_name
    </labels>
  </metric>
</match>
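For clarity, the $.-prefixed labels use Fluentd's record_accessor syntax, so with a record shaped like the illustrative one below (the values are placeholders, not my real data), I expect the labels to resolve like this:
# illustrative record (values are placeholders):
# {"upstream":"10.0.0.1:8080","kubernetes":{"pod_name":"my-pod","namespace_name":"my-namespace"}}
#
# expected label resolution:
#   $.upstream                  -> "10.0.0.1:8080"
#   $.kubernetes.pod_name       -> "my-pod"
#   $.kubernetes.namespace_name -> "my-namespace"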
Output in /metrics looks like this:
# TYPE nginx_upstream_error counter
# HELP nginx_upstream_error The total number of Nginx upstream timeouts.
nginx_upstream_error{error="timeout",upstream="<ip>:<port>",origin-pod="",origin-namespace=""} 3.0
For some reason, $.kubernetes.pod_name and $.kubernetes.namespace_name don't work and stay empty.
The source of this tag goes through the kubernetes_metadata plugin. I've validated that the kubernetes object with the specified fields is present in the records (my whole logging infrastructure is based on it).
What is even stranger is that the upstream label works!
It seems like the record accessor does not work on fields that come from the k8s enrichment, but it does work on fields produced by the preceding parser.
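(For reference, a temporary stdout filter is one way to confirm the record shape right before the prometheus output; this is just an illustrative sketch, not part of my actual pipeline:)
<filter nginx.error.upstream.*>
  # dump each record to Fluentd's own log so the kubernetes object can be inspected
  @type stdout
</filter>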