pulumi / pulumi-eks

A Pulumi component for easily creating and managing an Amazon EKS Cluster
https://www.pulumi.com/registry/packages/eks/
Apache License 2.0

The kubeconfig generated is missing the region argument #1038

Open bryantbiggs opened 8 months ago

bryantbiggs commented 8 months ago

What happened?

I am trying to construct a Kubernetes provider that's suitable for deploying Helm charts onto an EKS cluster. However, I am getting a "cluster not found" error because the kubeconfig generated by the Pulumi EKS provider does not contain the --region flag.

Example

Use this project's https://github.com/pulumi/pulumi-eks/tree/master/examples/aws-go-eks-helloworld example and inspect the generated kubeconfig - you will see it does not contain the --region <region> argument.
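For illustration, the failing pattern looks roughly like the following TypeScript sketch (the cluster name, chart, and repo URL are placeholders rather than values from the linked Go example):

import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

// A cluster created in a specific region (via stack config or an explicit AWS provider).
const cluster = new eks.Cluster("example-cluster");

// Kubernetes provider built from the generated kubeconfig. Because the
// kubeconfig's `aws eks get-token` exec block carries no --region argument,
// the AWS CLI falls back to whatever region the local environment resolves,
// and token retrieval fails with "cluster not found" when that differs from
// the cluster's actual region.
const provider = new k8s.Provider("eks-k8s", {
    kubeconfig: cluster.kubeconfig.apply(JSON.stringify),
});

// Any Helm chart deployed through this provider hits the same lookup problem.
const nginx = new k8s.helm.v3.Chart("nginx", {
    chart: "nginx",
    fetchOpts: { repo: "https://charts.bitnami.com/bitnami" },
}, { provider });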

Output of pulumi about

x

Additional context

This is what the aws eks update-kubeconfig --name <name> command generates:

# truncated for brevity
exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - <region>
      - eks
      - get-token
      - --cluster-name
      - <name>
      - --output
      - json
      command: aws

And this is what the kubeconfig generated by Pulumi contains:

# truncated for brevity
exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - eks
      - get-token
      - --cluster-name
      - <name>
      command: aws

Contributing

Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

bryantbiggs commented 8 months ago

potentially related to https://github.com/pulumi/pulumi-eks/issues/896#issue-1823014975 - there are no good examples that show how to generate a provider that can be passed to Helm or Kubernetes resources correctly

mjeffryes commented 8 months ago

Thanks for reporting this @bryantbiggs. I suspect we're not exporting the region because we pick it up from the environment or config so a Pulumi K8s program doesn't need the region in the kubeconfig, but it does seem like a meaningful omission if the user intends to use the config with other tools!

To your second comment: It's true that there's not a good in-repo example for generating a k8s provider from an eks cluster resource, but you can find some instructions in our docs (e.g. the third code block in this section: https://www.pulumi.com/docs/clouds/aws/guides/eks/#provisioning-a-new-eks-cluster). There are also examples of this in our pulumi/examples repo (https://github.com/pulumi/examples/blob/master/aws-ts-eks-distro/index.ts). We're thinking of renaming the examples folder in the provider repos, since those are really used for e2e testing rather than serving as examples.
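For reference, the pattern from that docs section boils down to something like this sketch (resource names are illustrative):

import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

const cluster = new eks.Cluster("my-cluster");

// Build an explicit Kubernetes provider from the cluster's kubeconfig and
// pass it to downstream resources via the `provider` option.
const k8sProvider = new k8s.Provider("my-cluster-k8s", {
    kubeconfig: cluster.kubeconfig.apply(JSON.stringify),
});

const appsNamespace = new k8s.core.v1.Namespace("apps", {}, { provider: k8sProvider });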

bryantbiggs commented 8 months ago

because we pick it up from the environment or config so a Pulumi K8s program doesn't need the region in the kubeconfig, but it does seem like a meaningful omission if the user intends to use the config with other tools!

I don't think this is quite accurate. The kubeconfig delegates token retrieval to the awscli, so either you explicitly tell the CLI command which region to query for the cluster, or you leave that to the normal awscli lookup rules. It's this second part that is worrisome from an IaC perspective, because the behavior needs to be reproducible across a number of different contexts (running pulumi up locally, from within a CI process, etc.). My default credentials profile might use us-east-1 as the default region, but then how do I tell Pulumi to connect to the cluster it created in us-west-2? Similarly, I may have AWS_DEFAULT_REGION=eu-west-1 set for some odd reason. I think passing the region from Pulumi down into the kubeconfig is the only way users can ensure the right cluster is queried. There are several ways this could be accomplished; the key point is a direct, explicit relationship between the Pulumi context and the command arguments. One possible shape for that is sketched below.
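A minimal sketch of one option, assuming aws:region is set explicitly on the stack and post-processing the generated kubeconfig (this is not what the provider currently emits; names are illustrative):

import * as eks from "@pulumi/eks";
import * as pulumi from "@pulumi/pulumi";

// Assumes the region is pinned on the stack: pulumi config set aws:region us-west-2
const region = new pulumi.Config("aws").require("region");

const cluster = new eks.Cluster("my-cluster");

// Append an explicit --region to the `aws eks get-token` exec invocation so the
// CLI queries the same region the stack deployed the cluster into, regardless
// of AWS_DEFAULT_REGION or the local profile's default.
const kubeconfigWithRegion = cluster.kubeconfig.apply((config) => {
    const copy = JSON.parse(JSON.stringify(config));
    copy.users[0].user.exec.args.push("--region", region);
    return JSON.stringify(copy);
});

A Kubernetes provider constructed from kubeconfigWithRegion then targets the intended cluster regardless of the local AWS CLI defaults.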

gunzy83 commented 6 months ago

@bryantbiggs we ran into this problem early on, when I started using discrete AWS provider objects (inside our Pulumi project) that point to a region-less profile, with the region configured via our standard config variables so that multi-region becomes a breeze. I read through the code and found exactly what you describe: the region is not passed through, and the JSON kubeconfig object does not have enough information to generate credentials when deploying Kubernetes resources after cluster creation (e.g. namespaces, cluster roles, bindings and, in our case, a Teleport deployment; all further deploys go through that).

Our solution uses transformations (the code is a little sloppy, but since we want to get rid of it eventually we are leaving it as is while it works):

import * as pulumi from '@pulumi/pulumi'

// The region comes from our standard config variables (shown here as aws:region in the stack config).
const awsRegion = new pulumi.Config('aws').require('region')

export const ensureKubeConfigHasAwsRegion: pulumi.ResourceTransformation = (
  args: pulumi.ResourceTransformationArgs
): pulumi.ResourceTransformationResult | undefined => {
  // Patch both the Kubernetes provider and the VpcCni resource, each of which carries a kubeconfig.
  if (args.type === 'pulumi:providers:kubernetes' || args.type === 'eks:index:VpcCni') {
    // eslint-disable-next-line @typescript-eslint/no-explicit-any
    const kubeConfig: pulumi.Output<any> = args.props['kubeconfig']
    args.props['kubeconfig'] = addRegionToKubeConfig(kubeConfig)
    return {
      props: args.props,
      opts: args.opts,
    }
  }
  return undefined
}

// Inject AWS_REGION into the exec credential plugin's environment so the awscli
// fetches the cluster token from the region the stack is configured for.
// eslint-disable-next-line @typescript-eslint/no-explicit-any
export const addRegionToKubeConfig = (kubeConfig: pulumi.Output<any>) => {
  return pulumi.all([kubeConfig, awsRegion]).apply(([contents, region]) => {
    // The kubeconfig may arrive as either an object or a JSON string.
    // eslint-disable-next-line @typescript-eslint/no-explicit-any
    let configObj: any
    if (typeof contents === 'object') {
      configObj = JSON.parse(JSON.stringify(contents))
    } else {
      configObj = JSON.parse(contents)
    }
    const envs = configObj['users'][0]['user']['exec']['env'] ?? []
    envs.push({
      name: 'AWS_REGION',
      value: region,
    })
    configObj['users'][0]['user']['exec']['env'] = envs
    return JSON.stringify(configObj)
  })
}

Then apply it where required:

// clusterConfig and awsProvider are defined elsewhere; `this` is the enclosing component resource.
const eksCluster = new eks.Cluster(
  'eks-cluster',
  {
    ...clusterConfig,
  },
  { parent: this, provider: awsProvider, transformations: [ensureKubeConfigHasAwsRegion] }
)

A separate function call is needed for the stack output (we are deprecating this in our stack):

export = {
  'kube-config': addRegionToKubeConfig(eksCluster.kubeconfig),
}

We are likely going to remove the pulumi/eks parent resources and just manage the underlying pulumi/aws and k8s resources ourselves going forward (this issue is only one of many reasons why). Hope this helps.
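For anyone heading the same direction, here is a rough sketch of building the kubeconfig by hand from a raw aws.eks.Cluster, with the region passed explicitly to aws eks get-token (the helper name and the aws:region config key are our own choices, not anything from pulumi/eks):

import * as aws from "@pulumi/aws";
import * as pulumi from "@pulumi/pulumi";

// Build a kubeconfig string for a raw aws.eks.Cluster with the region baked
// into the exec credential plugin, so token retrieval always targets the
// region the stack deployed into.
function kubeconfigFor(cluster: aws.eks.Cluster, region: string): pulumi.Output<string> {
  return pulumi
    .all([cluster.name, cluster.endpoint, cluster.certificateAuthority])
    .apply(([name, endpoint, ca]) =>
      JSON.stringify({
        apiVersion: "v1",
        kind: "Config",
        clusters: [{ name, cluster: { server: endpoint, "certificate-authority-data": ca.data } }],
        contexts: [{ name, context: { cluster: name, user: name } }],
        "current-context": name,
        users: [{
          name,
          user: {
            exec: {
              apiVersion: "client.authentication.k8s.io/v1beta1",
              command: "aws",
              args: ["--region", region, "eks", "get-token", "--cluster-name", name, "--output", "json"],
            },
          },
        }],
      })
    );
}

// Usage: the cluster itself (IAM role, VPC config, node groups, etc.) is managed
// with @pulumi/aws resources elsewhere in the program, e.g.:
// const k8sProvider = new k8s.Provider("eks", {
//   kubeconfig: kubeconfigFor(rawCluster, new pulumi.Config("aws").require("region")),
// });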