aws / aws-msk-iam-sasl-signer-js


Re-authentication fails with OAUTHBEARER when a role ARN is used in the default credential provider chain #19

Open BDeus opened 8 months ago

BDeus commented 8 months ago


Describe the bug

Hi, I think this is the same issue the Java library had before it was fixed in 2.0.2:

SASL OAUTHBEARER authentication failed: Cannot change principals during re-authentication from IAM.arn:aws:sts::xxxx:assumed-role/xxxx/276eccf3-xxxx-role: IAM.arn:aws:sts::xxxx:assumed-role/xxxx/b489ee67-xxxx-role

Could we apply the same logic here (refresh the credentials before OAuth token generation)?

aws-msk-iam-sasl-signer-js library version used

1.0.0

Which Node.js version is this issue in?

20.10.0

Operating System and version

linux

Reproduction Steps

Using the default credential provider chain:

import { generateAuthToken } from "aws-msk-iam-sasl-signer-js";

async function oauthBearerTokenProvider(region: string) {
    // Uses the AWS default credentials provider chain to fetch credentials
    const authTokenResponse = await generateAuthToken({ region });
    return {
        value: authTokenResponse.token,
    };
}

Observed Behavior

With kafkajs and aws-msk-iam-sasl-signer-js, re-authentication fails with:

SASL OAUTHBEARER authentication failed: Cannot change principals during re-authentication from IAM.arn:aws:sts::xxxx:assumed-role/xxxx/276eccf3-xxxx-role: IAM.arn:aws:sts::xxxx:assumed-role/xxxx/b489ee67-xxxx-role

Expected Behavior

Avoid KafkaJSNonRetriableError when refreshing credentials

Possible Solution

Refresh the credentials before OAuth token generation, as the Java library does since 2.0.2.
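
A caller-side sketch of the same idea, assuming the library's generateAuthTokenFromCredentialsProvider export and the AWS SDK v3 fromTemporaryCredentials helper; the role ARN and session name below are placeholders, not values from this issue:

import { generateAuthTokenFromCredentialsProvider } from "aws-msk-iam-sasl-signer-js";
import { fromTemporaryCredentials } from "@aws-sdk/credential-providers";

// Assume the role with a fixed RoleSessionName so every generated token
// belongs to the same assumed-role principal.
const credentialsProvider = fromTemporaryCredentials({
    params: {
        RoleArn: "arn:aws:iam::123456789012:role/my-msk-role", // placeholder
        RoleSessionName: "msk-signer-session",                 // fixed, never changes
    },
});

async function oauthBearerTokenProvider(region: string) {
    const authTokenResponse = await generateAuthTokenFromCredentialsProvider({
        region,
        awsCredentialsProvider: credentialsProvider,
    });
    return { value: authTokenResponse.token };
}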

Additional Information/Context

No response

agarwal1510 commented 8 months ago

Thank you for raising this. We will take a look and get back.

jeevchiran commented 7 months ago

Any further updates on this issue, or a possible workaround?

sankalpbhatia commented 7 months ago

Do we know which credentials provider is being used here via the default credentials chain? Are these the same roles but with different "assumed role arns", or a different role?

You can try using the awsDebugCreds feature we provide in this library to figure this out. The workaround should be to ensure in some way that the ARN of the caller identity is the same in both cases.

Nevertheless, for us to investigate further, it would be helpful if you can share which credentials provider is being used here to generate the tokens.
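
For illustration, a minimal sketch of the awsDebugCreds suggestion, assuming it is passed as an option to generateAuthToken and that the caller identity is only emitted when debug-level logging is enabled:

import { generateAuthToken } from "aws-msk-iam-sasl-signer-js";

async function oauthBearerTokenProvider(region: string) {
    // With awsDebugCreds enabled, the signer logs the caller identity used to
    // sign the token, which shows which provider in the default chain (and
    // which assumed-role session) supplied the credentials.
    const authTokenResponse = await generateAuthToken({
        region,
        awsDebugCreds: true, // the flag referenced above; assumed to be an option here
    });
    return { value: authTokenResponse.token };
}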

BDeus commented 7 months ago

Hi, in my case the same role is being used (same role ARN), but the RoleSessionName changes between tokens. Example:
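
For reference, taking the two principals from the error message above, the role portion of the assumed-role ARN is identical and only the session segment differs:

IAM.arn:aws:sts::xxxx:assumed-role/xxxx/276eccf3-xxxx-role
IAM.arn:aws:sts::xxxx:assumed-role/xxxx/b489ee67-xxxx-role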

sankalpbhatia commented 7 months ago

Would it be possible to share debug logs of the Kafka client? One way to work around this is to ensure the "role session name" you mention does not change across tokens. We are interested in knowing which credentials provider in the chain is being used to fetch the credentials; once we know that, we can investigate further.
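
A minimal sketch of how such client-level debug logs can be captured with kafkajs; the client id, broker address and region below are placeholders:

import { Kafka, logLevel } from "kafkajs";
import { generateAuthToken } from "aws-msk-iam-sasl-signer-js";

const region = "us-east-1"; // placeholder

const kafka = new Kafka({
    clientId: "msk-debug-client",                                // placeholder
    brokers: ["b-1.example.kafka.us-east-1.amazonaws.com:9098"], // placeholder
    ssl: true,
    logLevel: logLevel.DEBUG, // surfaces connection and SASL re-authentication details
    sasl: {
        mechanism: "oauthbearer",
        oauthBearerProvider: async () => {
            const authTokenResponse = await generateAuthToken({ region });
            return { value: authTokenResponse.token };
        },
    },
});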

sujayvenaik commented 2 months ago

@sankalpbhatia is there any update on the fix here? We are also facing the same issue.

sujayvenaik commented 2 months ago

@BDeus, have you solved this issue at your end or found a temporary workaround?

jeevchiran commented 2 months ago

@sujayvenaik We tried one workaround: setting the environment variable AWS_ROLE_SESSION_NAME to an arbitrary fixed value, which solves the problem (the session name then stays constant across re-authentications).
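
A sketch of that workaround applied from inside the Node process, assuming the credentials come from web-identity role assumption (e.g. IRSA on Kubernetes), where AWS_ROLE_SESSION_NAME controls the STS session name; the value below is a placeholder, and setting it in the container or instance environment works the same way:

import { generateAuthToken } from "aws-msk-iam-sasl-signer-js";

// Must run before the first token is generated; any fixed value works, the
// point is that the STS session name no longer changes between tokens.
process.env.AWS_ROLE_SESSION_NAME = "msk-signer-session"; // placeholder value

async function oauthBearerTokenProvider(region: string) {
    const authTokenResponse = await generateAuthToken({ region });
    return { value: authTokenResponse.token };
}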

sujayvenaik commented 2 months ago

@jeevchiran Hey! I think this is working for us on Kubernetes-based compute, but it is still failing when we use it on Elastic Beanstalk (EC2 instances).

What kind of compute were you using?

sankalpbhatia commented 1 month ago

@sujayvenaik apologies for not responding earlier. Would it be possible to share client level debug logs? If you can also share the steps required to repro this on our end, that would be helpful too.

Thanks