mjmottram closed this issue 1 month ago
Leaving this open for now, but I think we've diagnosed the issue as being caused by an update to pulumi-eks, and have filed an issue here.
I'm going to close this as a duplicate of https://github.com/pulumi/pulumi-eks/issues/1426. But we can always re-open if it ends up being specific to the action.
What happened?
Running either `pulumi/actions@v5` or `pulumi/actions@v4`, our CI deployment has started failing with warnings when attempting to connect to EKS. I don't think we've made any local changes that could have caused this, and deployments were successful up until a few days ago.
We use an S3 bucket in our main AWS account to self-host our Pulumi state, with separate accounts for staging and production. A deployment therefore uses an AWS access key for both the main account and the deployment environment (the latter set as either the `staging` or `production` AWS profile), which is then set in our `Pulumi.staging.yaml` config, with a CI action along the lines of the sketch below.

Checking the AWS EKS logs, I can see access requests from both the staging AND the root-account GitHub CI robot user, but Pulumi should only be using the staging account credentials during the deployment.
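For context, here is a rough sketch of that setup. It assumes the S3 backend is configured in the project file and the deployment profile in the stack config; the project name, bucket name, region, profile names, secret names, and stack name are placeholders, not our actual values.

```yaml
# Pulumi.yaml -- project file; the state backend lives in an S3 bucket in the main account.
name: infra
runtime: nodejs
backend:
  url: s3://example-pulumi-state-bucket
```

```yaml
# Pulumi.staging.yaml -- stack config; the deployment itself should only use the
# staging account credentials, selected via the named AWS profile.
config:
  aws:profile: staging
  aws:region: eu-west-1
```

The CI workflow is roughly as follows (step layout and secret names are assumptions):

```yaml
# .github/workflows/deploy.yml -- illustrative sketch, not our actual workflow.
name: deploy-staging
on: push

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Make both credential sets available as named AWS profiles: "main" for the
      # S3 state backend, "staging" for the deployment target account.
      - name: Configure AWS profiles
        env:
          MAIN_KEY_ID: ${{ secrets.MAIN_AWS_ACCESS_KEY_ID }}
          MAIN_SECRET: ${{ secrets.MAIN_AWS_SECRET_ACCESS_KEY }}
          STAGING_KEY_ID: ${{ secrets.STAGING_AWS_ACCESS_KEY_ID }}
          STAGING_SECRET: ${{ secrets.STAGING_AWS_SECRET_ACCESS_KEY }}
        run: |
          aws configure set aws_access_key_id "$MAIN_KEY_ID" --profile main
          aws configure set aws_secret_access_key "$MAIN_SECRET" --profile main
          aws configure set aws_access_key_id "$STAGING_KEY_ID" --profile staging
          aws configure set aws_secret_access_key "$STAGING_SECRET" --profile staging

      - uses: pulumi/actions@v5
        with:
          command: up
          stack-name: staging
          # pulumi-version: 3.130.0   # we also tried pinning the CLI version here
        env:
          # The state backend uses the main account; the stack's aws:profile is
          # expected to select the staging credentials for the deployment itself.
          AWS_PROFILE: main
```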
I can't see why the main account credentials would be used (older deployments certainly only used the staging credentials), unless this is a fallback after some other error I'm not aware of.
Example
Per the config above, you need to create an EKS cluster in one AWS account with the Pulumi state hosted in a separate AWS account.
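A minimal repro would look something like the following, assuming a Pulumi YAML program that uses the eks provider (the cluster properties and bucket name are illustrative), combined with the split-account backend/profile config sketched above:

```yaml
# Pulumi.yaml -- minimal Pulumi YAML program that creates an EKS cluster in the
# account selected by aws:profile, while the state backend bucket lives in the
# main account. Values are illustrative.
name: eks-repro
runtime: yaml
backend:
  url: s3://example-pulumi-state-bucket
resources:
  cluster:
    type: eks:Cluster
    properties:
      instanceType: t3.medium
      desiredCapacity: 2
      minSize: 1
      maxSize: 2
outputs:
  kubeconfig: ${cluster.kubeconfig}
```

The repro also needs both AWS profiles configured, either locally or in CI as in the workflow sketch above.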
Output of `pulumi about`
Run locally, since we cannot run this in CI.
We've tried running the default Pulumi version (`v3.133.0`) from the action, as well as specifying `v3.130.0` and `v3.135.0` in the action.

Additional context
No response
Contributing
Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).