factorhouse / kpow

Kpow for Apache Kafka
https://factorhouse.io/kpow

AWS MSK IAM AUTH not working #3

Closed rampaldheeraj closed 2 years ago

rampaldheeraj commented 2 years ago

Hi, I am trying to evaluate kpow locally from the docker image. I have a Kafka cluster set up in AWS with only IAM authentication enabled. I am following the instructions given here to set up my connection fields. I am using the following command to spin up the container:

docker run -p 3000:3000 --env-file ./config.env operatr/kpow:latest

I am setting the connection fields in the config.env file, along with the license and broker details.

I am getting the following exception:

An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: Failed to find AWS IAM Credentials [Caused by com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [software.amazon.msk.auth.iam.internals.EnhancedProfileCredentialsProvider@4528e3d: Profile file contained no credentials for profile 'xxxxx-xxxxxx': ProfileFile(profiles=[]), com.amazonaws.auth.AWSCredentialsProviderChain@105bf791: Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)), SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey), WebIdentityTokenCredentialsProvider: You must specify a value for roleArn and roleSessionName, software.amazon.msk.auth.iam.internals.EnhancedProfileCredentialsProvider@6dea8cc6: Profile file contained no credentials for profile 'default': ProfileFile(profiles=[]), com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper@2bb4f04: Failed to connect to service endpoint: ]]]) occurred when evaluating SASL token received from the Kafka Broker. Kafka Client will go to AUTHENTICATION_FAILED state.

04:27:15.850 ERROR [main] operatr.kafka – Error fetching cluster id
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.SaslAuthenticationException: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: Failed to find AWS IAM Credentials [Caused by com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [software.amazon.msk.auth.iam.internals.EnhancedProfileCredentialsProvider@4528e3d: Profile file contained no credentials for profile 'xxxx-xxxxxx': ProfileFile(profiles=[]), com.amazonaws.auth.AWSCredentialsProviderChain@105bf791: Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)), SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey), WebIdentityTokenCredentialsProvider: You must specify a value for roleArn and roleSessionName, software.amazon.msk.auth.iam.internals.EnhancedProfileCredentialsProvider@6dea8cc6: Profile file contained no credentials for profile 'default': ProfileFile(profiles=[]), com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper@2bb4f04: Failed to connect to service endpoint: ]]]) occurred when evaluating SASL token received from the Kafka Broker. Kafka Client will go to AUTHENTICATION_FAILED state.

I have my AWS profile xxxxx-xxxxxx (name masked) configured properly in my credentials file. Also, I am able to connect to the Kafka cluster from my local machine using the Kafka CLI, which also requires setting up the JAAS config in a similar manner.

Can you please help me resolve this issue?

Regards, Dheeraj

d-t-w commented 2 years ago

Hi @rampaldheeraj thanks for the report.

I think there are a couple of factors at play here: what the kPow container can see on your local machine (not very much), and how to make the required files visible to it (either with volume mounts or additional environment variables).

By default the kPow container has no access to the file system of the host machine when you run it as you have:

docker run -p 3000:3000 --env-file ./config.env operatr/kpow:latest

That is important, because you have most likely configured an SSL_TRUSTSTORE_LOCATION in your config.env. That truststore is on your local machine, and we need to provide the physical truststore file to the kPow container as well.

There are a few ways to manage this for production setups, but in your case on a local machine the easiest thing to do is mount a volume that contains the truststore, mapping from your local machine to a path within the container.

Here's a command we use to provide a volume mount from a local directory (containing the truststore), relative to where the docker command is run on macOS, to a fixed '/ssl' path within the container:

docker run --volume="$PWD/ssl:/ssl" --env-file ./config.env operatr/kpow:latest

Then configure your truststore in config.env to be located at the mounted volume path:

SSL_TRUSTSTORE_LOCATION=/ssl/your-truststore.jks
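
Putting that together, a config.env for an MSK IAM connection looks roughly like the sketch below. Treat it as a sketch only: the variable names follow kPow's convention of upper-casing the equivalent Kafka client properties, the bootstrap value and port are placeholders (9098 is MSK's usual in-VPC IAM port), and the awsProfileName JAAS parameter is optional - check the exact keys against the kPow documentation.

BOOTSTRAP=your-msk-bootstrap-broker-1:9098,your-msk-bootstrap-broker-2:9098
SECURITY_PROTOCOL=SASL_SSL
SASL_MECHANISM=AWS_MSK_IAM
SASL_JAAS_CONFIG=software.amazon.msk.auth.iam.IAMLoginModule required awsProfileName="xxxxx-xxxxxx";
SASL_CLIENT_CALLBACK_HANDLER_CLASS=software.amazon.msk.auth.iam.IAMClientCallbackHandler
SSL_TRUSTSTORE_LOCATION=/ssl/your-truststore.jks
SSL_TRUSTSTORE_PASSWORD=your-truststore-password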

The error output in your example is related to the same local filesystem / volume-mount issue. In your specific case, the container itself has no concept of what AWS account or credentials are in use.

I have my AWS profile xxxxx-xxxxxx (name masked) configured properly in my credentials file. Also, I am able to connect to the Kafka cluster from my local machine using the Kafka CLI, which also requires setting up the JAAS config in a similar manner.

It's most likely that the kafka-cli on your local machine is using the default credential profiles file, typically located at ~/.aws/credentials (location can vary per platform) and shared by many of the AWS SDKs and the AWS CLI.
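
That file typically has one section per named profile, something like this (values are placeholders):

[xxxxx-xxxxxx]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY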

As with the truststore, this credentials file is not available to the container unless we provide it as a volume mount. You will need to either mount that credentials file or provide the credentials via another method. AWS looks in a number of places for credentials; see the default provider chain details here: https://github.com/aws/aws-msk-iam-auth#configuring-a-kafka-client-to-use-aws-iam
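
If you take the volume-mount route, something like the following should work. One caveat: the in-container path depends on the user the kPow process runs as - this sketch assumes root, so the SDK resolves ~/.aws to /root/.aws, and the profile is selected by the awsProfileName JAAS parameter (or an AWS_PROFILE entry in config.env).

docker run -p 3000:3000 --volume="$HOME/.aws:/root/.aws:ro" --volume="$PWD/ssl:/ssl" --env-file ./config.env operatr/kpow:latest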

You could, for example, set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in your config.env for testing purposes.
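
For example, adding two lines to config.env (placeholders only - don't commit real keys):

AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY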

Note that in a normal deployment of kPow to ECS/Fargate, EKS, EC2, or similar, these credentials are already in place and available to kPow - it's only when running from a local machine that you need to do slightly more work.

If you need any more help just pop an email over to support@operatr.io

Best, Derek