confluentinc / jmx-monitoring-stacks

📊 Monitoring examples for Confluent Cloud and Confluent Platform

Update/add dashboards for Prometheus JMX Exporter 1.0.x #230

Open dhoard opened 6 months ago

dhoard commented 6 months ago

The Prometheus JMX Exporter 1.0.x introduced some changes that require configuration and dashboard updates:

  1. Metrics are no longer served on the root (/) path. You will need to change the scrape URL to /metrics (see the scrape config sketch after this list).

  2. Some JVM metric names have changed to conform with the OpenMetrics specification.

Dashboards will need to be updated if they reference the renamed JVM metrics. See the migration guide:

https://prometheus.github.io/client_java/migration/simpleclient/#jvm-metrics

  3. MBeans that normalize to the same metric name will now contain a label named _objectname that references the MBean that provided the metric.

Example:

# HELP kafka_rest_jersey_metrics_v3_topics_partitions_reassignment_list_request The request count using a cumulative counter kafka.rest:name=null,type=jersey-metrics,attribute=v3.topics.partitions-reassignment.list.request-total
# TYPE kafka_rest_jersey_metrics_v3_topics_partitions_reassignment_list_request untyped
kafka_rest_jersey_metrics_v3_topics_partitions_reassignment_list_request{_objectname="kafka.rest<type=jersey-metrics><>v3.topics-partitions-reassignment.list.request-total"} 0.0
kafka_rest_jersey_metrics_v3_topics_partitions_reassignment_list_request{_objectname="kafka.rest<type=jersey-metrics><>v3.topics.partitions-reassignment.list.request-total"} 0.0
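A minimal Prometheus scrape config sketch for the new path, assuming a hypothetical job name and target (the port is whatever the javaagent is configured to listen on, not taken from this repo's configs):

```yaml
scrape_configs:
  - job_name: "kafka"                # hypothetical job name
    metrics_path: /metrics           # required with JMX Exporter 1.0.x; older versions served metrics at /
    static_configs:
      - targets: ["kafka1:7071"]     # hypothetical host:port of the javaagent listener
```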
tsuz commented 4 months ago

@dhoard

> Metrics are no longer served on the root (/) path. You will be required to change the scrape URL to /metrics

From my testing with jmx_prometheus_javaagent-1.0.1.jar, this is not true, at least for the Confluent components. I haven't tested with clients yet.


> Some JVM metric names have changed to conform with the OpenMetrics specification.

I noticed a small discrepancy, so I will report back if I find something.

> MBeans that normalize to the same metric name will now contain a label named _objectname that references the MBean that provided the metric.

I have not seen this yet, and I am not aware of any metrics that normalize to the same name.

dhoard commented 4 months ago

@tsuz

Metrics path

The metrics path (/metrics) was changed in the underlying Prometheus client_java library to be consistent with other exporters.



_objectname

Here is example output from 7.6.2 where the _objectname label is used.

# HELP kafka_rest_jersey_metrics_v3_topics_partitions_reassignment_list_request The request count using a cumulative counter kafka.rest:name=null,type=jersey-metrics,attribute=v3.topics.partitions-reassignment.list.request-total
# TYPE kafka_rest_jersey_metrics_v3_topics_partitions_reassignment_list_request untyped
kafka_rest_jersey_metrics_v3_topics_partitions_reassignment_list_request{_objectname="kafka.rest<type=jersey-metrics><>v3.topics-partitions-reassignment.list.request-total"} 0.0
kafka_rest_jersey_metrics_v3_topics_partitions_reassignment_list_request{_objectname="kafka.rest<type=jersey-metrics><>v3.topics.partitions-reassignment.list.request-total"} 0.0

The _objectname label is used when two different metrics are mapped to the same Prometheus name, resulting in a conflict. Previous versions of the exporter would throw away one of the metrics.

Notice the dash (-) versus period (.) between topics and partitions:

v3.topics-partitions-reassignment.list.request-total
v3.topics.partitions-reassignment.list.request-total

In this specific scenario, they are most likely the same metric, but coding that assumption into the exporter could lead to incorrect metrics.
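If a dashboard needs a single series despite the new label, one option is a recording rule that aggregates _objectname away. A sketch under that assumption (the group and rule names are made up for illustration; max avoids double counting if both object names expose the same underlying counter, while sum would be appropriate if they are truly distinct):

```yaml
groups:
  - name: jmx-exporter-objectname-compat   # hypothetical rule group name
    rules:
      # Collapse the per-MBean series into one value for dashboards.
      - record: kafka_rest:v3_topics_partitions_reassignment_list_request:max
        expr: max without (_objectname) (kafka_rest_jersey_metrics_v3_topics_partitions_reassignment_list_request)
```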

I created a Kafka bug: https://issues.apache.org/jira/browse/KAFKA-17161