confluentinc / schema-registry

Confluent Schema Registry for Kafka
https://docs.confluent.io/current/schema-registry/docs/index.html

schema-registry JMX metrics not working #2502

Open mgutha opened 1 year ago

mgutha commented 1 year ago

I'm not sure whether I have the right configuration in the deployment: Schema Registry is not exposing any metrics on the JMX port defined there. Below are the deployment file, the pod status, and logs from the pods.

Any advice would be appreciated.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: schemaregistry
spec:
  replicas: 3
  selector:
    matchLabels:
      app: schemaregistry
  template:
    metadata:
      labels:
        app: schemaregistry
    spec:
      restartPolicy: Always
      containers:
      - name: schemaregistry
        image: */schemaregistry:v2
        ports:
          - containerPort: 8081
        env:
          - name: SCHEMA_REGISTRY_HOST_NAME
            valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          - name: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS
            value: 'SASL_PLAINTEXT://X.X.X.X:9092'
          - name: SCHEMA_REGISTRY_LISTENERS
            value: 'http://0.0.0.0:8081'
          - name: SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL
            value: 'SASL_PLAINTEXT'
          - name: SCHEMA_REGISTRY_KAFKASTORE_SASL_MECHANISM
            value: 'SCRAM-SHA-256'
          - name: SCHEMA_REGISTRY_DEBUG
            value: 'false'
          - name: SCHEMA_REGISTRY_LOG4J_ROOT_LOGLEVEL
            value: INFO
          - name: SCHEMA_REGISTRY_INIT_TIMEOUT_MS
            value: '800000'
          - name: SCHEMA_REGISTRY_KAFKASTORE_SASL_JAAS_CONFIG
            value: 'org.apache.kafka.common.security.scram.ScramLoginModule required username="svc_schema_registry" password="xxxxx";'
          - name: JMX_PORT
            value: "5555"
          - name: JMX_ENABLED
            value: "true"

NAME                              READY   STATUS    RESTARTS   AGE
schemaregistry-66c4f89dc8-67p88   2/2     Running   0          16m
schemaregistry-66c4f89dc8-d78qk   2/2     Running   0          16m
schemaregistry-66c4f89dc8-km4vh   2/2     Running   0          17m
[appuser@schemaregistry-66c4f89dc8-67p88 ~]$ curl localhost:8081/subjects |python -m json.tool
[
    "Kafka-key",
    "Kafka-value",
    "my-cool-subject"
]
[appuser@schemaregistry-66c4f89dc8-67p88 ~]$ curl localhost:5555
curl: (52) Empty reply from server 

Logs from the pods:


[2023-01-19 03:52:05,069] INFO ConsumerConfig values:
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.offset.reset = earliest
    bootstrap.servers = [SASL_PLAINTEXT://X.X.X.X:9092]
    check.crcs = true
    client.dns.lookup = use_all_dns_ips
    client.id = KafkaStore-reader-_schemas
    client.rack =
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = schema-registry-10.213.0.153-8081
    group.instance.id = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    internal.throw.on.fetch.stable.offset.unsupported = false
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = [hidden]
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.connect.timeout.ms = null
    sasl.login.read.timeout.ms = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.login.retry.backoff.max.ms = 10000
    sasl.login.retry.backoff.ms = 100
    sasl.mechanism = SCRAM-SHA-256
    sasl.oauthbearer.clock.skew.seconds = 30
    sasl.oauthbearer.expected.audience = null
    sasl.oauthbearer.expected.issuer = null
    sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
    sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
    sasl.oauthbearer.jwks.endpoint.url = null
    sasl.oauthbearer.scope.claim.name = scope
    sasl.oauthbearer.sub.claim.name = sub
    sasl.oauthbearer.token.endpoint.url = null
    security.protocol = SASL_PLAINTEXT
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 45000
    socket.connection.setup.timeout.max.ms = 30000
    socket.connection.setup.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.certificate.chain = null
    ssl.keystore.key = null
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.certificates = null
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
 (org.apache.kafka.clients.consumer.ConsumerConfig)
OneCricketeer commented 1 year ago

JMX_PORT does not start an HTTP server, so it's unclear what you expected. Also, you only have one containerPort defined, 8081, not 5555...

This repo isn't responsible for anything related to Kubernetes, so perhaps you should ask on the Confluent forums or Stack Overflow? In any case, JMX is an RMI protocol, not HTTP, so you need a tool like jmxterm rather than curl...

Or perhaps you're assuming this repo comes with a JMX exporter, like the Helm charts do?
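For reference, a minimal sketch of what exposing remote JMX from the deployment above might look like. The `com.sun.management.jmxremote.*` properties are standard JVM flags, but the `SCHEMA_REGISTRY_JMX_OPTS` variable name is an assumption based on the Confluent startup scripts — verify how your custom image passes JVM options:

```yaml
# Hypothetical fragment of the container spec above (not verified against
# the custom image). Expose the JMX port and pass standard remote-JMX flags.
ports:
  - containerPort: 8081
  - containerPort: 5555   # JMX/RMI, queried with jmxterm/jconsole, not curl
env:
  - name: SCHEMA_REGISTRY_JMX_OPTS   # variable name is an assumption
    value: >-
      -Dcom.sun.management.jmxremote
      -Dcom.sun.management.jmxremote.port=5555
      -Dcom.sun.management.jmxremote.rmi.port=5555
      -Dcom.sun.management.jmxremote.local.only=false
      -Dcom.sun.management.jmxremote.authenticate=false
      -Dcom.sun.management.jmxremote.ssl=false
```

Pinning the RMI port to the same value as the JMX port matters in Kubernetes, since otherwise the JVM picks a random RMI port that isn't exposed by the pod.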

maixiaohai commented 1 year ago

You can attach the jmx_prometheus_javaagent JAR to the Schema Registry JVM at startup, which exposes the JMX metrics over plain HTTP; see https://github.com/prometheus/jmx_exporter
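A sketch of wiring that agent into the deployment above. The JAR path, the exporter config path, and the `EXTRA_ARGS` variable name are all assumptions — check your image's startup script for how extra JVM arguments are passed:

```yaml
# Hypothetical: run the Prometheus JMX exporter as a -javaagent inside the
# Schema Registry JVM, serving metrics over HTTP on port 5556.
env:
  - name: EXTRA_ARGS   # variable name is an assumption
    value: '-javaagent:/opt/jmx_prometheus_javaagent.jar=5556:/opt/jmx-config.yml'
ports:
  - containerPort: 8081
  - containerPort: 5556   # plain HTTP, so `curl localhost:5556/metrics` works
```

Unlike raw JMX, this endpoint speaks HTTP, so the `curl` check from the original report would succeed against it.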