mjmottram opened 1 month ago
I've just seen the change that caused this. It's not clear to me whether it's a bug that Pulumi is not picking up the expected profile from `aws:profile` in our stack config. The comments in the linked change suggest we should set the AWS_PROFILE env var, but that would interfere with our self-hosted Pulumi backend on S3 in another account.
@mjmottram I think you should be able to use the `getKubeconfig({ profileName: 'my-profile' })` method, which lets you specify which profile to use.
Does this work for your use case?
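For reference, a minimal sketch of that suggestion, assuming the Node.js SDK's `getKubeconfig` method and a placeholder `staging` profile name:

```typescript
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

const cluster = new eks.Cluster("my-cluster");

// Generate a kubeconfig that pins the AWS profile explicitly, instead of
// relying on whatever AWS_PROFILE happens to be set in the pipeline env.
const kubeconfig = cluster.getKubeconfig({ profileName: "staging" });

// Use it for an explicit Kubernetes provider so later deployments talk to
// the staging account's EKS cluster rather than the default account's.
const provider = new k8s.Provider("staging-k8s", { kubeconfig });
```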
What happened?
Possibly related to https://github.com/pulumi/pulumi-eks/issues/1038, running with

and a deployment containing

Our cluster config contains the AWS profile that was used when running `pulumi up`, via the `aws:profile` parameter in our Pulumi stack config. Running with
The AWS profile is missing from the environment. Since our setup includes a main AWS account, where we self-host our Pulumi state on S3, and separate AWS accounts for staging and production, this causes our subsequent deployment pipelines to attempt to access EKS using the default (i.e. main) AWS account.
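A hedged sketch of one way to pin the profile at cluster creation rather than through the environment, assuming the installed pulumi-eks version supports `providerCredentialOpts` and that `aws:profile` is set in the stack config as described above:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as eks from "@pulumi/eks";

// Read the same profile that aws:profile sets in the stack config.
const profile = new pulumi.Config("aws").require("profile");

const cluster = new eks.Cluster("my-cluster", {
    // Ask pulumi-eks to reference this AWS profile in the kubeconfig it
    // generates, so kubectl / the Kubernetes provider don't fall back to
    // the default (main) account when AWS_PROFILE isn't exported.
    providerCredentialOpts: { profileName: profile },
});

// The exported kubeconfig then carries the profile.
export const kubeconfig = cluster.kubeconfig;
```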
Example
See description
Output of `pulumi about`
See description
Additional context
No response
Contributing
Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).