strimzi / strimzi-kafka-oauth

OAuth2 support for Apache Kafka® to work with many OAuth2 authorization servers
Apache License 2.0

What is the user id known by kafka when using oauth? #22

Closed jrivers96 closed 4 years ago

jrivers96 commented 4 years ago

I can't figure out which user id Kafka sees when using the OAuth plugin. What principal does the broker use for authorization when a client authenticates via OAuth?

Also, is there any paid support/consulting for strimzi clusters?

Below are my current settings.

Keycloak realm

{
  "clientId": "data2kafka-client",
  "enabled": true,
  "clientAuthenticatorType": "client-secret",
  "secret": "REDACTED",
  "publicClient": false,
  "bearerOnly": false,
  "standardFlowEnabled": false,
  "implicitFlowEnabled": false,
  "directAccessGrantsEnabled": true,
  "serviceAccountsEnabled": true,
  "consentRequired": false,
  "fullScopeAllowed": false,
  "attributes": {
    "access.token.lifespan": "32140800"
  }
}
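As a side note on the realm settings: `access.token.lifespan` is expressed in seconds, which is why the credential in the logs further down stays valid for roughly a year. A quick check of the arithmetic:

```shell
# access.token.lifespan is in seconds; 32140800 s converted to days:
echo $((32140800 / 86400))   # prints 372
```

That 372-day window matches the "valid from Tue Dec 03 ... 2019 to Wed Dec 09 ... 2020" line in the refresh-login log below.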

Setting ACLS:

export NAMESPACE_NAME=strimzi-pro
kubectl -n ${NAMESPACE_NAME} exec -ti kafka-cluster-kafka-0 -- bin/kafka-acls.sh --group '*' --topic control --operation All --authorizer-properties zookeeper.connect=127.0.0.1:2181 --add --allow-principal User:data2kafka-client
kubectl -n ${NAMESPACE_NAME} exec -ti kafka-cluster-kafka-0 -- bin/kafka-acls.sh --group '*' --topic control --operation All --authorizer-properties zookeeper.connect=127.0.0.1:2181 --add --allow-principal User:service-account-data2kafka-client
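For reference on why the second `--allow-principal` is the one that matters: Keycloak exposes a confidential client's service account as a user named `service-account-<clientId>`, so with `userNameClaim: preferred_username` the broker derives the principal from that name. A minimal sketch, assuming that Keycloak convention:

```shell
# Keycloak names a client's service-account user "service-account-<clientId>".
# With userNameClaim=preferred_username, the broker sees this principal:
CLIENT_ID=data2kafka-client
echo "User:service-account-${CLIENT_ID}"   # prints User:service-account-data2kafka-client
```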

Broker configs

        type: loadbalancer
        tls: true
        authentication:
          type: oauth
          clientId: broker
          clientSecret:
            key: secret
            secretName: broker-oauth-secret
          disableTlsHostnameVerification: false
          jwksEndpointUri: REDACTED
          validIssuerUri: REDACTED
          userNameClaim: preferred_username
          tlsTrustedCertificates:
          - secretName: ca-truststore
            certificate: ca.crt
        overrides:
          bootstrap:
            dnsAnnotations:
              external-dns.alpha.kubernetes.io/hostname: REDACTED
              external-dns.alpha.kubernetes.io/ttl: "60"
              address: REDACTED
...
...
    rack:
      topologyKey: failure-domain.beta.kubernetes.io/zone
    authorization:
      type: simple

Consumer/producer log

2019-12-02 21:36:10.016 DEBUG 7 --- [main] .o.c.JaasClientOauthLoginCallbackHandler : Configured JaasClientOauthLoginCallbackHandler:
    token: null
    refreshToken: null
    tokenEndpointUri: REDACTED
    clientId: data2kafka-client
    clientSecret: a*********
    usernameClaim: preferred_username
2019-12-02 21:36:10.081 DEBUG 7 --- [main] i.s.k.oauth.common.OAuthAuthenticator    : loginWithClientSecret() - tokenEndpointUrl: REDACTED, clientId: data2kafka-client, clientSecret: a*********
...
...
2019-12-03 17:21:09.905  INFO 7 --- [           main] .o.i.e.ExpiringCredentialRefreshingLogin : Successfully logged in.
2019-12-03 17:21:09.964  INFO 7 --- [up2kafka-client] .o.i.e.ExpiringCredentialRefreshingLogin : [Principal=:service-account-data2kafka-client]: Expiring credential re-login thread started.
2019-12-03 17:21:09.967  INFO 7 --- [up2kafka-client] .o.i.e.ExpiringCredentialRefreshingLogin : [Principal=service-account-data2kafka-client]: Expiring credential valid from Tue Dec 03 17:21:09 GMT 2019 to Wed Dec 09 17:21:09 GMT 2020
2019-12-03 17:21:09.968  INFO 7 --- [up2kafka-client] .o.i.e.ExpiringCredentialRefreshingLogin : [Principal=:service-account-data2kafka-client]: Expiring credential re-login sleeping until: Tue Oct 13 20:19:56 GMT 2020

Error from the consumer/producer app

2019-12-03 17:04:16.415  INFO 7 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka version: 2.3.0
2019-12-03 17:04:16.415  INFO 7 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId: fc1aaa116b661c8a
2019-12-03 17:04:16.415  INFO 7 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka startTimeMs: 1575392656415
2019-12-03 17:04:16.416  INFO 7 --- [           main] c.nasdaq.fq.etl.kafka.KafkaAvroConsumer  : [Consumer clientId=consumer-1, groupId=DataStreamControl-26b77f95-1817-4344-bfee-2d386fc77bce] Subscribed to topic(s): control
2019-12-03 17:04:16.532  WARN 7 --- [           main] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-1, groupId=DataStreamControl-26b77f95-1817-4344-bfee-2d386fc77bce] Error while fetching metadata with correlation id 2 : {control=TOPIC_AUTHORIZATION_FAILED}
2019-12-03 17:04:16.534 ERROR 7 --- [           main] org.apache.kafka.clients.Metadata        : [Consumer clientId=consumer-1, groupId=DataStreamControl-26b77f95-1817-4344-bfee-2d386fc77bce] Topic authorization failed for topics [control]
jrivers96 commented 4 years ago

I figured this out. I'm using the User Operator and didn't realize that it removes ACLs created this way:

kubectl -n ${NAMESPACE_NAME} exec -ti kafka-cluster-kafka-0 -- bin/kafka-acls.sh --group '*' --topic control --operation All --authorizer-properties zookeeper.connect=127.0.0.1:2181 --add --allow-principal User:service-account-data2kafka-client

I applied something like below to my namespace.

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: service-account-data2kafka-client
  labels:
    strimzi.io/cluster: kafka-cluster
spec:
  authentication:
    type: scram-sha-512
  authorization:
    type: simple
    acls:
     - resource:
         type: topic
         name: control
         patternType: literal
       operation: All
     - resource:
         type: group
         name: 'examplegroup'
         patternType: literal
       operation: All
scholzj commented 4 years ago

Yeah, using the KafkaUser resources is one way. Or, if you don't want to, you can just disable the whole User Operator in the Kafka CR and manage the users yourself. Up to you.