Open Mitrofanov opened 5 years ago
Same issue here. It was working locally on Minikube but not on AWS EKS.
Experiencing the same issue running on an AWS EKS cluster. This is in schema-registry, though, using 5.2.1 or 5.2.2.
The issue appears when dropping the replicas (brokers) below 3; even overriding the replication factors to 1 produces the same error logs. The workaround is to keep 3 replicas. Note that anti-affinity is also baked into the StatefulSet.
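For reference, the workaround above can be expressed as a values override. This is only a sketch: the key paths are assumptions based on the cp-helm-charts layout, so verify them against the chart version you deploy.

```yaml
# Hypothetical values.yaml override for cp-helm-charts.
# Key paths are assumptions -- check the chart's own values.yaml.
cp-kafka:
  brokers: 3   # workaround: keep at least 3 brokers; dropping below 3
               # triggered the error logs even with replication factor 1
```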
Same issue here as well. Image: confluentinc/cp-kafka:5.2.1 Link: https://github.com/confluentinc/cp-helm-charts/issues/304
@Mitrofanov were you able to fix the issue?
I had that issue because one of the pods had a wrong storage class name, which created a stale entry/bad state that never got deleted properly.
I deleted all the pods and everything healed on its own once they came back up in the correct sequence: ZooKeeper, Kafka, and then the rest.
Same issue here. I found a mismatch between the deployment.yaml and the launch file in the docker image.
Changing the variable name in deployment.yaml to KAFKAREST_JMX_PORT (without an underscore between KAFKA and REST) fixed the problem for me, though I don't know if this is the right way to do it.
https://github.com/confluentinc/cp-docker-images/blob/5.3.1-post/debian/kafka-rest/include/etc/confluent/docker/launch#L33 uses JMX_PORT if KAFKAREST_JMX_PORT is not set, so it should work.
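The fallback described above can be sketched in plain shell. This is a simplified reconstruction of what the linked launch script does, not its exact code; only the variable names JMX_PORT and KAFKAREST_JMX_PORT are taken from the script.

```shell
# Simplified sketch of the launch script's fallback:
# when KAFKAREST_JMX_PORT is unset, fall back to JMX_PORT.
export JMX_PORT=5555
unset KAFKAREST_JMX_PORT

# ${VAR:-default} substitutes the fallback when VAR is unset or empty
export KAFKAREST_JMX_PORT="${KAFKAREST_JMX_PORT:-$JMX_PORT}"
echo "KAFKAREST_JMX_PORT=${KAFKAREST_JMX_PORT}"
```

So setting either variable in the deployment should end up on the JMX port, which matches the report that renaming the env var fixed it.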
@dadoeyad I'm facing the same issue; did you manage to fix it?
Hi guys.
Just deployed kafka-rest-proxy and found that the Prometheus JMX exporter does not work as expected.
After digging, I found the following:
In exporter's container log:
In kafka-rest startup logs:
I use the following versions:
I also checked the kafka-rest docs, and it looks like there is no jmx.port option anymore. Please advise.
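With jmx.port gone from the kafka-rest configuration, one direction is to enable remote JMX through JVM options on the container instead, so the exporter has something to scrape. A hedged sketch follows: the env var names KAFKAREST_JMX_OPTS and JMX_PORT are assumptions based on the Confluent docker launch script linked earlier in the thread, so double-check them for your image version.

```shell
# Assumed env vars (from the Confluent docker launch-script conventions):
#   JMX_PORT            -- port the JMX agent listens on
#   KAFKAREST_JMX_OPTS  -- extra JVM flags passed to kafka-rest
export JMX_PORT=5555

# Standard JVM remote-JMX flags; auth/SSL disabled here purely for a
# local sketch -- do not ship this unauthenticated in production.
export KAFKAREST_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=${JMX_PORT} \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"

echo "$KAFKAREST_JMX_OPTS"
```

The Prometheus JMX exporter sidecar would then point its jmxUrl at this port.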