strimzi / strimzi-kafka-operator

Apache Kafka® running on Kubernetes
https://strimzi.io/
Apache License 2.0

Issue connecting to listener using SCRAM-SHA-512 #4103

Closed alex-ionescu-qualitance closed 3 years ago

alex-ionescu-qualitance commented 3 years ago

Hello everyone,

We have a Kafka cluster which needs to be secured, and we have configured 2 listeners using the Strimzi Operator 0.20.0. The one with OAUTHBEARER works correctly, but we are unable to connect to the one using SCRAM-SHA-512.

Our Kafka configuration for the listeners looks like this:

spec:
  entityOperator:
    topicOperator: {}
    userOperator: {}
  kafka:
    authorization:
      clientId: kafka-sso
      delegateToKafkaAcls: false
      disableTlsHostnameVerification: false
      grantsRefreshPeriodSeconds: 60
      grantsRefreshPoolSize: 5
      tlsTrustedCertificates:
        - certificate: ca.crt
          secretName: oauth-server-cert
      tokenEndpointUri: >-
        https://.....
      type: keycloak
    config:
      log.message.format.version: '2.6'
      offsets.topic.replication.factor: 3
      transaction.state.log.min.isr: 2
      transaction.state.log.replication.factor: 3
    listeners:
      - authentication:
          type: scram-sha-512
        name: scram
        port: 9092
        tls: false
        type: internal
      - authentication:
          disableTlsHostnameVerification: true
          userNameClaim: preferred_username
          clientId: kafka-sso
          validIssuerUri: 'https://.....'
          maxSecondsWithoutReauthentication: 3600
          tlsTrustedCertificates:
            - certificate: ca.crt
              secretName: oauth-server-cert
          type: oauth
          clientSecret:
            key: secret
            secretName: bridge-oauth-secret
          introspectionEndpointUri: >-
            https://.....
        name: tls
        port: 9093
        tls: false
        type: internal
    logging:
      loggers:
        kafka.root.logger.level: TRACE
      type: inline
    replicas: 3
    storage:
      type: ephemeral
    version: 2.6.0
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral

We are testing using a console consumer:

cat > /tmp/client.properties <<EOF
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="my-user" password="tR6uauWQw74u";
ssl.endpoint.identification.algorithm=
EOF

bin/kafka-console-consumer.sh --bootstrap-server copo-test-kafka-bootstrap:9092 --topic ro.btrl.out.copo.service.nomenclator.findLabels.v1 --from-beginning --consumer.config=/tmp/client.properties
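
For SCRAM-SHA-512, the User Operator generates the password itself and stores it in a Secret named after the KafkaUser, so the password in client.properties has to be the generated one rather than a chosen value. A minimal sketch of the KafkaUser we would expect here, assuming the cluster is named copo-test (inferred from the copo-test-kafka-bootstrap address above):

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user                    # must match the username in sasl.jaas.config
  labels:
    strimzi.io/cluster: copo-test  # must match the name of the Kafka resource
spec:
  authentication:
    type: scram-sha-512            # User Operator generates SCRAM credentials
```

The generated password can then be read from the Secret of the same name, e.g. `kubectl get secret my-user -o jsonpath='{.data.password}' | base64 --decode`.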

The error message that we get is:

[2020-12-14 12:06:10,762] WARN [Consumer clientId=consumer-console-consumer-63952-1, groupId=console-consumer-63952] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)

If we try to connect with a different mechanism we get the correct error: org.apache.kafka.common.errors.UnsupportedSaslMechanismException: Client SASL mechanism 'OAUTHBEARER' not enabled in the server, enabled mechanisms are [SCRAM-SHA-512]

Any idea how to troubleshoot this, or something that we can change? We need this authentication type for AKHQ (which neither supports OAUTHBEARER nor has OIDC flows correctly implemented).

scholzj commented 3 years ago

Why do you start a discussion on Slack and then open an issue as well? Wouldn't just one place be sufficient?

Can you share the full log from the client? It is not really clear how far it got in the connection process. I also don't think the SCRAM-SHA-512 authentication will play well with the Keycloak authorization. Even if you authenticate and connect, you will not be authorized to do anything, will you?

alex-ionescu-qualitance commented 3 years ago

Sorry for asking in 2 places. This seemed like the right place to have this information, since other people might have the same question. I spent the weekend browsing through open and closed issues to find out if someone had the same issue as me.

From what I understand, if I can get past the authentication, I should be able to give permissions to the user I have created using the KafkaUser resource. I do not have any other logs for the client since I use a console consumer inside the pods with Kafka.

scholzj commented 3 years ago

I do not have any other logs for the client since I use a console consumer inside the pods with Kafka.

I'm not sure the disconnected message alone tells us whether it got far enough to do the authentication. So I think you need to check the logs of the broker and of the cluster operator to figure out whether the 9092 listener has been applied and what else might be the problem.

From what I understand, if I can get past the authentication, I should be able to give permissions to the user I have created using the KafkaUser resource.

I'm not sure this would work. I do not remember how well the Keycloak authorizer falls back to regular ACLs and whether they can be managed by the User Operator in that setup. So I guess you would need to try it. Unfortunately, Kafka supports only one authorizer for all listeners.
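
If the Keycloak fallback turns out not to work, one thing to try is switching the Kafka CR to `authorization: type: simple` and letting the User Operator manage ACLs through the KafkaUser resource. A hedged sketch, reusing the user and topic names from this thread (the console consumer additionally needs Read on its consumer group; the cluster name copo-test is an assumption inferred from the bootstrap address):

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: copo-test
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
      # allow reading the topic used in the console-consumer test above
      - resource:
          type: topic
          name: ro.btrl.out.copo.service.nomenclator.findLabels.v1
        operation: Read
      # console consumers generate group ids like console-consumer-NNNNN
      - resource:
          type: group
          name: console-consumer-
          patternType: prefix
        operation: Read
```

This only takes effect if the Kafka resource itself uses simple authorization; with `type: keycloak` and `delegateToKafkaAcls: false`, these ACLs would not be consulted.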