pulumi / actions


Pulumi action unable to access AWS EKS resources #1295

Closed: mjmottram closed this 1 month ago

mjmottram commented 1 month ago

What happened?

With either pulumi/actions@v5 or pulumi/actions@v4, our CI deployment has started failing when attempting to connect to EKS, with warnings like:

warning: configured Kubernetes cluster is unreachable: unable to load schema information from the API server: the server has asked for the client to provide credentials

I don't think we've made any local changes that could have caused this, and deployments were successful up until a few days ago.

We use an S3 bucket in a main AWS account to self-host our Pulumi state backend, with separate accounts for staging and production. The deployments therefore use AWS access keys for both the main account and the deployment environment (the latter set as either the staging or production AWS profile), and then:

config:
  aws:profile: staging

set in our Pulumi.staging.yaml config, with a CI action that looks like this:

jobs:
  job-1:
    ...

    steps:
      ...

      - name: Set root AWS profile
        run: |
          aws configure set aws_access_key_id ${{ secrets.... }}
          aws configure set aws_secret_access_key ${{ secrets.... }}
          aws configure set region eu-west-2
          aws configure set output json

      - name: Set staging AWS profile
        run: |
          aws configure set aws_access_key_id ${{ secrets.... }} --profile staging
          aws configure set aws_secret_access_key ${{ secrets.... }} --profile staging
          aws configure set region eu-west-2 --profile staging
          aws configure set output json --profile staging

      - name: pulumi action
        uses: pulumi/actions@v5
        with:
          command: up
          refresh: true
          stack-name: staging
          cloud-url: s3-bucket-address
          work-dir: ...
        env:
          PULUMI_CONFIG_PASSPHRASE: ...
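For context on how the two credential sets are meant to interact: the S3 state backend is reached with the runner's default credentials (the root profile configured above), while the resources themselves should go through the staging profile picked up from Pulumi.staging.yaml. A minimal sketch of the program side, with hypothetical resource names and an explicit provider purely for illustration:

```typescript
import * as aws from "@pulumi/aws";

// The default AWS provider picks up `aws:profile: staging` from
// Pulumi.staging.yaml; declaring an explicit provider with the same
// profile just makes that intent visible. (Names are illustrative.)
const staging = new aws.Provider("staging", {
    profile: "staging",   // same value as aws:profile in the stack config
    region: "eu-west-2",
});

// Resources created with this provider should only ever use the staging
// account's credentials, regardless of the runner's default profile.
const bucket = new aws.s3.Bucket("example-bucket", {}, { provider: staging });

export const bucketName = bucket.id;
```

The state backend itself (the cloud-url above) does not go through this provider; it uses the ambient AWS credentials, which is why both sets of keys are configured in the workflow.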

Checking the AWS EKS logs, I can see access requests from the GitHub CI robot users of both the staging AND the root account, but Pulumi should only be using the staging account credentials during the deployment.

I can't see why the main account credentials would be used (older deployments certainly only used the staging credentials), unless this is a fallback after some other error I'm not aware of.
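One mechanism worth keeping in mind when reasoning about a possible fallback: access to the EKS API server is driven by the kubeconfig handed to the Kubernetes provider, and EKS kubeconfigs normally authenticate via an exec call to `aws eks get-token`, which uses whatever ambient AWS credentials it finds unless a profile is pinned. A sketch (not the original program; the cluster details are placeholders) of pinning the profile in a hand-built kubeconfig:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Placeholder values; in a real program these would come from the EKS
// cluster resource's outputs.
const clusterName = "staging-cluster";
const clusterEndpoint = "https://EXAMPLE.gr7.eu-west-2.eks.amazonaws.com";
const clusterCaData = "BASE64-ENCODED-CA-DATA";

// The exec section below is what actually obtains credentials for the
// Kubernetes API. Without AWS_PROFILE (or --profile), `aws eks get-token`
// falls back to the default credential chain, which on this runner would
// be the root profile configured in the first workflow step.
const kubeconfig = JSON.stringify({
    apiVersion: "v1",
    kind: "Config",
    clusters: [{
        name: clusterName,
        cluster: { server: clusterEndpoint, "certificate-authority-data": clusterCaData },
    }],
    contexts: [{ name: "staging", context: { cluster: clusterName, user: "staging" } }],
    "current-context": "staging",
    users: [{
        name: "staging",
        user: {
            exec: {
                apiVersion: "client.authentication.k8s.io/v1beta1",
                command: "aws",
                args: ["eks", "get-token", "--cluster-name", clusterName],
                env: [{ name: "AWS_PROFILE", value: "staging" }],
            },
        },
    }],
});

// Kubernetes resources created with this provider then reach the cluster
// using the staging profile.
const k8sProvider = new k8s.Provider("staging-k8s", { kubeconfig });
```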

Example

Per the config above: create an EKS cluster in one AWS account, with the Pulumi state backend hosted in a separate AWS account.
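Since the later comments point at pulumi-eks, here is a minimal repro sketch assuming the cluster is created with @pulumi/eks (that package is not listed in the pulumi about output below, so treat this as an assumption); providerCredentialOpts is the option that controls which AWS profile the generated kubeconfig authenticates with:

```typescript
import * as eks from "@pulumi/eks";

// Hypothetical minimal cluster in the staging account. The stack's
// `aws:profile: staging` selects the account the cluster is created in;
// providerCredentialOpts controls which profile the generated kubeconfig
// uses for authentication against the cluster afterwards.
const cluster = new eks.Cluster("staging-cluster", {
    providerCredentialOpts: {
        profileName: "staging",
    },
});

// Downstream Kubernetes resources built from this kubeconfig should then
// authenticate with the staging profile rather than the runner's default
// credentials.
export const kubeconfig = cluster.kubeconfig;
```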

Output of pulumi about

From a local machine, since we cannot run this on CI:

CLI          
Version      3.128.0
Go Version   go1.22.5
Go Compiler  gc

Plugins
KIND      NAME    VERSION
language  nodejs  unknown

Host     
OS       darwin
Version  14.0
Arch     arm64

This project is written in nodejs: executable='/Users/matt/.nvm/versions/node/v20.10.0/bin/node' version='v20.10.0'

Dependencies:
NAME                VERSION
@pulumi/aws         6.51.1
@pulumi/kubernetes  4.18.1
@pulumi/pulumi      3.132.0
@pulumi/random      4.16.3
typescript          5.3.3

We've tried running the default Pulumi version (v3.133.0) from the action, as well as specifying v3.130.0 and v3.135.0 in the action.

Additional context

No response

Contributing

Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

mjmottram commented 1 month ago

Leaving this open for now, but I think we've diagnosed the issue as being caused by an update to pulumi-eks, and have filed an issue in the pulumi-eks repo.

justinvp commented 1 month ago

I'm going to close this as a duplicate of https://github.com/pulumi/pulumi-eks/issues/1426. But we can always re-open if it ends up being specific to the action.