Open · BDeus opened this issue 8 months ago
Thank you for raising this. We will take a look and get back to you.
Any further updates on this issue, or a possible workaround?
Do we know which credentials provider is being used here via the default credentials chain? Are these the same roles but with different assumed-role ARNs, or a different role?
You can try using the awsDebugCreds feature we provide in this library to figure this out. The workaround should be to ensure that the ARN of the caller identity is the same in both cases.
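For reference, a minimal sketch of enabling this flag, following the token-generation pattern from this library's README (the region value is a placeholder):

```js
import { generateAuthToken } from "aws-msk-iam-sasl-signer-js";

// With awsDebugCreds enabled, the signer logs the caller identity
// (sts:GetCallerIdentity) used to sign the token, which shows which
// credentials provider in the chain supplied the credentials.
const authTokenResponse = await generateAuthToken({
  region: "us-east-1", // placeholder: use your MSK cluster's region
  awsDebugCreds: true,
});
console.log(authTokenResponse.token);
```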
Nevertheless, for us to investigate further, it would be helpful if you could share which credentials provider is being used here to generate the tokens.
Hi, in my case it's the same role being used (same role ARN), but the RoleSessionName changes. For example:
IAM.arn:aws:sts::#ACCOUNT#:assumed-role/my-custom-role/339bc935-my-custom-role
IAM.arn:aws:sts::#ACCOUNT#:assumed-role/my-custom-role/c0a362f1-my-custom-role
Would it be possible to share debug logs of the Kafka client? One way to work around this is to ensure the "role session name" you mention doesn't change across tokens. We are interested in knowing which credentials provider in the credentials provider chain is being used to fetch the credentials; once we know that, we can investigate further.
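In case it helps with gathering those logs, client-level debug output can be enabled through kafkajs's built-in logLevel option (client id and broker address below are placeholders):

```js
import { Kafka, logLevel } from "kafkajs";

const kafka = new Kafka({
  clientId: "msk-debug-client",                // placeholder
  brokers: ["b-1.example.amazonaws.com:9098"], // placeholder bootstrap broker
  logLevel: logLevel.DEBUG,                    // surfaces the SASL/re-authentication flow
});
```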
@sankalpbhatia is there any update on the fix here? we are also facing the same issue.
@BDeus, have you solved this issue at your end or found a temporary workaround?
@sujayvenaik We tried one workaround: setting the env variable AWS_ROLE_SESSION_NAME to any random value, which solves the problem.
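A sketch of that workaround, assuming the variable is set before the AWS SDK resolves any credentials (the session name value is arbitrary; it just has to stay constant):

```js
// Pin the role session name so every AssumeRole call produces the same
// assumed-role ARN, keeping the Kafka principal stable across token
// refreshes. This must run before the first credential lookup, or be set
// in the environment that launches the process (e.g. pod/instance config).
process.env.AWS_ROLE_SESSION_NAME ??= "msk-client-fixed-session";
```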
@jeevchiran Hey! I think this is working for us on Kubernetes-based compute, but it is still failing when we use this on Elastic Beanstalk (EC2 instances).
What kind of compute were you using?
@sujayvenaik apologies for not responding earlier. Would it be possible to share client level debug logs? If you can also share the steps required to repro this on our end, that would be helpful too.
Thanks
Describe the bug
Hi, I think this is the same issue as the Java one (aws-msk-iam-auth) before it was fixed in 2.0.2:
SASL OAUTHBEARER authentication failed: Cannot change principals during re-authentication from IAM.arn:aws:sts::xxxx:assumed-role/xxxx/276eccf3-xxxx-role: IAM.arn:aws:sts::xxxx:assumed-role/xxxx/b489ee67-xxxx-role
Could we apply the same logic (refreshing credentials before OAuth token generation)? See the sketch under Possible Solution below.
aws-msk-iam-sasl-signer-js library version used
1.0.0
Which Node.js version is this issue in?
20.10.0
Operating System and version
linux
Reproduction Steps
Using the default credential provider chain.
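A rough repro sketch, wiring the signer into kafkajs's oauthbearer mechanism as in the signer's README; client id, broker address, and region are placeholders:

```js
import { Kafka } from "kafkajs";
import { generateAuthToken } from "aws-msk-iam-sasl-signer-js";

const kafka = new Kafka({
  clientId: "repro-client",                    // placeholder
  brokers: ["b-1.example.amazonaws.com:9098"], // placeholder
  ssl: true,
  sasl: {
    mechanism: "oauthbearer",
    // Tokens come from the default credential provider chain; on SASL
    // re-authentication a fresh AssumeRole session (new RoleSessionName)
    // yields a different principal, triggering the error shown under
    // Observed Behavior.
    oauthBearerProvider: async () => {
      const authTokenResponse = await generateAuthToken({ region: "eu-west-1" });
      return { value: authTokenResponse.token };
    },
  },
});
```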
Observed Behavior
With kafkajs and aws-msk-iam-sasl-signer-js:
SASL OAUTHBEARER authentication failed: Cannot change principals during re-authentication from IAM.arn:aws:sts::xxxx:assumed-role/xxxx/276eccf3-xxxx-role: IAM.arn:aws:sts::xxxx:assumed-role/xxxx/b489ee67-xxxx-role
Expected Behavior
Avoid KafkaJSNonRetriableError when refreshing credentials
Possible Solution
Refresh credentials before OAuth token generation.
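This is not the library's fix, just a hedged sketch of one way to keep the principal stable in the meantime, assuming the generateAuthTokenFromRole helper from the README (role ARN, region, and session name are placeholders):

```js
import { generateAuthTokenFromRole } from "aws-msk-iam-sasl-signer-js";

// Assume the role explicitly with a fixed session name so every refreshed
// token carries the same assumed-role ARN, and re-authentication keeps the
// same principal.
const oauthBearerProvider = async () => {
  const authTokenResponse = await generateAuthTokenFromRole({
    region: "eu-west-1",                                         // placeholder
    awsRoleArn: "arn:aws:iam::123456789012:role/my-custom-role", // placeholder
    awsRoleSessionName: "msk-client-fixed-session",              // constant on purpose
  });
  return { value: authTokenResponse.token };
};
```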
Additional Information/Context
No response