confluentinc / cp-docker-images

[DEPRECATED] Docker images for Confluent Platform.
Apache License 2.0
1.14k stars 704 forks

ZooKeeper without SASL not supported in io.confluent.admin.utils.cli.KafkaReadyCommand #205

Open engrean opened 7 years ago

engrean commented 7 years ago

We've managed to get the Kafka brokers working with SASL_PLAINTEXT without SASL enabled on ZooKeeper. We did this by setting the system property zookeeper.sasl.client=false and the environment variable KAFKA_ZOOKEEPER_SET_ACL=false.

Neither of these settings worked in the Schema Registry docker images. After looking into it more, I found that KafkaReadyCommand has no logic for disabling SASL with ZooKeeper. The logic actually lives in io.confluent.admin.utils.ClusterStatus, which KafkaReadyCommand calls.

The logic is:

    boolean isSASLEnabled = false;
    if (System.getProperty(KAFKA_ZOOKEEPER_SET_ACL, null) != null) {
        isSASLEnabled = true;
        log.info("SASL is enabled. java.security.auth.login.config={}",
                System.getProperty(JAVA_SECURITY_AUTH_LOGIN_CONFIG));
    }

I think I saw similar logic elsewhere (sorry, I can't remember where, but that's where I found the references to zookeeper.sasl.client and KAFKA_ZOOKEEPER_SET_ACL) that uses both KAFKA_ZOOKEEPER_SET_ACL and another property to decide how to communicate with ZooKeeper.
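The subtle part of that check is that it only tests whether the property is set at all, never its value, so even setting it to "false" flips isSASLEnabled to true. A standalone sketch of the same decision (the class and constant names here are mine, and the literal property key is a stand-in; the real code resolves it from a constant in ClusterStatus):

```java
public class ZkSaslCheck {
    // Stand-in for the property key; the actual key used by ClusterStatus
    // is resolved from a constant and may differ.
    static final String SET_ACL_PROP = "KAFKA_ZOOKEEPER_SET_ACL";

    // Mirrors the snippet above: only the presence of the property is
    // tested, so setting it to "false" still reports SASL as enabled.
    static boolean isSaslEnabled() {
        return System.getProperty(SET_ACL_PROP, null) != null;
    }

    public static void main(String[] args) {
        System.out.println("unset          -> " + isSaslEnabled()); // false
        System.setProperty(SET_ACL_PROP, "false");
        System.out.println("set to \"false\" -> " + isSaslEnabled()); // true
    }
}
```

That would explain why only commenting the check out (rather than setting the variable to false) helped.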

Commenting this check out in include/etc/confluent/docker/ensure got the cluster up and running for me.

This is for Confluent 3.1.1.

samjhecht commented 7 years ago

Hi @engrean - you can set ZOOKEEPER_SASL_ENABLED to 'FALSE', and the KafkaReady command should work with ZK with no security enabled. This is how the behavior is defined: https://github.com/confluentinc/cp-docker-images/blob/f432909b3323ff033338c6f4bec36b555f8d3caa/debian/base/include/cub#L94-L102

And here is an example from the tests that you can refer to: https://github.com/confluentinc/cp-docker-images/blob/b94e748181e94f6a47577a403096cf7df2a37ed8/tests/fixtures/debian/kafka/cluster-bridged-sasl.yml#L97

Can you follow up and let us know if this ends up working for you?
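For anyone finding this later, that setting in a compose file would look roughly like the following (a sketch modeled on the linked fixture; the image tag, service name, and ZooKeeper URL are placeholders, not the fixture's actual values):

```yaml
schema-registry:
  image: confluentinc/cp-schema-registry:3.1.1
  environment:
    # Tell the cub readiness checks not to attempt SASL against ZooKeeper
    ZOOKEEPER_SASL_ENABLED: "FALSE"
    SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:2181
```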

DevonPeroutky commented 6 years ago

@samjhecht I think I am running into the same issue, trying to run Control Center as a Docker container on Kubernetes. I have a Kafka + ZooKeeper cluster with Kafka configured to use SASL_PLAINTEXT and ZooKeeper without any authentication. My configuration looks like:

        - name: confluent-control-center
          image: confluentinc/cp-enterprise-control-center:5.0.0
          ports:
            - name: control-port
              containerPort: 9021
              hostPort: 9021
          env:
            - name: CONTROL_CENTER_SASL_MECHANISM
              value: "PLAIN"
            - name: CONTROL_CENTER_CONSUMER_SASL_MECHANISM
              value: "PLAIN"
            - name: CONTROL_CENTER_SECURITY_PROTOCOL
              value: SASL_PLAINTEXT
            - name: CONTROL_CENTER_CONSUMER_SECURITY_PROTOCOL
              value: SASL_PLAINTEXT
            - name: CONTROL_CENTER_SASL_JAAS_CONFIG
              value: org.apache.kafka.common.security.plain.PlainLoginModule required username="$(KAFKA_SASL_USERNAME)" password="$(KAFKA_SASL_PASSWORD)";
            - name: CONTROL_CENTER_CONSUMER_SASL_JAAS_CONFIG
              value: org.apache.kafka.common.security.plain.PlainLoginModule required username="$(KAFKA_SASL_USERNAME)" password="$(KAFKA_SASL_PASSWORD)";
            - name: CONTROL_CENTER_ZOOKEEPER_CONNECT
              value: kafka-cluster-3-zk-0:2181
            - name: CONTROL_CENTER_BOOTSTRAP_SERVERS
              value: kafka-cluster-3-kafka-0:9092
            - name: CONTROL_CENTER_REPLICATION_FACTOR
              value: "1"
            - name: CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS
              value: "1"
            - name: CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS
              value: "1"
            - name: CONFLUENT_METRICS_TOPIC_REPLICATION
              value: "1"
            - name: CONTROL_CENTER_PORT
              value: "9021"
            - name: CONTROL_CENTER_CONNECT_CLUSTER
              value: kafka-connect:8083
            - name: CONTROL_CENTER_ZOOKEEPER_SASL_ENABLED
              value: "FALSE"
            - name: ZOOKEEPER_SASL_ENABLED
              value: "FALSE"

I am able to connect to the cluster with Producer/Consumer clients. However, the Control Center container fails its preflight check with the following output:

echo "===> Running preflight checks ... "
+ echo '===> Running preflight checks ... '
/etc/confluent/docker/ensure
+ /etc/confluent/docker/ensure
===> Check if Kafka is healthy ...

echo "===> Check if Kafka is healthy ..."
+ echo '===> Check if Kafka is healthy ...'

cub kafka-ready "${CONTROL_CENTER_REPLICATION_FACTOR}" \
  "${CONTROL_CENTER_CUB_KAFKA_TIMEOUT:-40}" \
  -b "${CONTROL_CENTER_BOOTSTRAP_SERVERS}" \
  --config "${CONTROL_CENTER_CONFIG_DIR}/admin.properties"
+ cub kafka-ready 1 40 -b kafka-cluster-3-kafka-0:9092 --config /etc/confluent-control-center/admin.properties
[main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: 
    bootstrap.servers = [kafka-cluster-3-kafka-0:9092]
    client.id = 
    connections.max.idle.ms = 300000
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 120000
    retries = 5
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS

[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.0.0-cpNone
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : ca8d91be74ec83ed
[kafka-admin-client-thread | adminclient-1] INFO org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=adminclient-1] Metadata update failed
org.apache.kafka.common.errors.DisconnectException: Cancelled fetchMetadata request with correlation id 17 due to node -1 being disconnected

Any idea what could be going on? Is this related to this issue, or is something else going on?
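One clue in the log above: the AdminClient that cub kafka-ready builds reports security.protocol = PLAINTEXT and sasl.mechanism = GSSAPI, i.e. the generated admin.properties apparently did not pick up the SASL settings, so the readiness check speaks plaintext to a SASL_PLAINTEXT listener and gets disconnected. A client config that would match the broker listener looks roughly like this (values are illustrative placeholders, not a verified fix):

```properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<user>" \
  password="<pass>";
```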

samzx commented 5 years ago

If you're using kubernetes, passing in

          - ZOOKEEPER_SASL_ENABLED=FALSE

to the args: worked for me. It will skip broker-to-ZooKeeper authentication. Without it, it would try to connect to ZooKeeper and time out (because SASL is not configured).
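In a Kubernetes pod spec the equivalent is usually an env entry rather than args (a minimal sketch; the surrounding container definition is omitted):

```yaml
env:
  - name: ZOOKEEPER_SASL_ENABLED
    value: "FALSE"
```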

csuryac commented 4 years ago

I am getting this same issue. What is the solution for this?

    [main-SendThread(cle-cp-zookeeper-perf:2181)] WARN org.apache.zookeeper.ClientCnxn - SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/etc/kafka/secrets/kafka_server_jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it.
    [main-SendThread(cle-cp-zookeeper-perf:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server cle-cp-zookeeper-perf/10.110.111.246:2181
    [main] ERROR io.confluent.admin.utils.ClusterStatus - Error occurred while connecting to Zookeeper server[cle-cp-zookeeper-perf:2181]. Authentication failed.
    [main-SendThread(cle-cp-zookeeper-perf:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to cle-cp-zookeeper-perf/10.110.111.246:2181, initiating session
    [main-SendThread(cle-cp-zookeeper-perf:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server cle-cp-zookeeper-perf/10.110.111.246:2181, sessionid = 0x1000f5f10ac0003, negotiated timeout = 40000
    [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x1000f5f10ac0003 closed
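The warning here says the JVM found the JAAS file at /etc/kafka/secrets/kafka_server_jaas.conf but it contains no Client section, which is the section a ZooKeeper client reads. Two usual ways out: set ZOOKEEPER_SASL_ENABLED=FALSE as discussed above, or add a Client block to that JAAS file, roughly like this (the login module and credentials are illustrative and must match what the ZooKeeper server expects):

```
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="zkclient"
    password="zkclient-secret";
};
```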

suraj2410 commented 4 years ago

Getting the same issue as @csuryac. Any leads?