pulumi / pulumi-eks

A Pulumi component for easily creating and managing an Amazon EKS Cluster
https://www.pulumi.com/registry/packages/eks/
Apache License 2.0

Pulumi EKS v3.0.0 Release #1425

Open flostadler opened 6 days ago

flostadler commented 6 days ago

AWS recently announced the deprecation of two features used by default in Pulumi EKS: the aws-auth ConfigMap and the AL2 operating system. Pulumi EKS v3 addresses these deprecations, enhances the maintainability of the provider, and aligns it with EKS best practices. This release delivers significant improvements in flexibility and security, introducing new features to enhance your Kubernetes experience on AWS.

Current release version: v3.0.0-beta.1

New Features and Improvements

Support for Amazon Linux 2023 and Bottlerocket

We've expanded the operating system options for node groups in EKS v3 to address the upcoming deprecation of Amazon Linux 2 (AL2). You can now choose between Amazon Linux 2 (deprecated), Amazon Linux 2023, and Bottlerocket for your EKS nodes. This flexibility lets you select the OS that best fits your workloads, security requirements, and compliance needs, while ensuring you're on a supported and actively maintained operating system. We've introduced a new operatingSystem property on node groups to facilitate this choice.
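A minimal sketch of the new property (the "AL2023"/"Bottlerocket" values come from this thread; the cluster name, instance types, and sizing below are illustrative assumptions, and a node IAM role is typically also required):

```typescript
import * as eks from "@pulumi/eks";

// Sketch only: create a cluster and attach a managed node group running
// Amazon Linux 2023. Swap in "Bottlerocket" to use that OS instead.
const cluster = new eks.Cluster("demo-cluster");

const nodes = new eks.ManagedNodeGroup("al2023-nodes", {
    cluster: cluster,
    operatingSystem: "AL2023", // AL2 is still accepted but deprecated
    instanceTypes: ["t3.medium"],
    scalingConfig: { minSize: 1, maxSize: 3, desiredSize: 2 },
});
```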

Access Entries for IAM Integration

AWS has introduced Access Entries as a new method for granting IAM principals access to Kubernetes resources. This approach relies solely on AWS resources for managing Kubernetes auth, replacing the deprecated aws-auth ConfigMap. You can now leverage Access Entries by setting the authenticationMode to API in your cluster configuration.
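A hedged sketch of opting into Access Entries (the accessEntries shape, policy ARN, and role ARN below are assumptions for illustration; consult the v3 API docs):

```typescript
import * as eks from "@pulumi/eks";

// Sketch: switch the cluster to API authentication and grant an IAM role
// cluster-admin via an access entry instead of the aws-auth ConfigMap.
const cluster = new eks.Cluster("api-auth-cluster", {
    authenticationMode: eks.AuthenticationMode.Api,
    accessEntries: {
        admins: {
            // Hypothetical role ARN for illustration:
            principalArn: "arn:aws:iam::111122223333:role/eks-admins",
            accessPolicies: {
                admin: {
                    policyArn: "arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy",
                    accessScope: { type: "cluster" },
                },
            },
        },
    },
});
```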

EKS Managed Addons

The EKS cluster components vpc-cni, coredns, and kube-proxy are now configured as EKS managed addons. This change simplifies management, especially for clusters with private API endpoints, and ensures that these critical components stay up to date automatically. Additionally, it removes the dependency on kubectl, allowing Pulumi-native management of clusters.

Cluster Autoscaler Integration

Pulumi EKS v3 introduces better support for the Kubernetes Cluster Autoscaler. A new ignoreScalingChanges parameter for node groups allows Pulumi to ignore external scaling changes, facilitating seamless integration with dynamic scaling solutions.
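A sketch of the new parameter in use (cluster setup and sizing are illustrative assumptions):

```typescript
import * as eks from "@pulumi/eks";

// Sketch: ignoreScalingChanges tells Pulumi not to revert desired-size
// changes made externally, e.g. by the Cluster Autoscaler.
const cluster = new eks.Cluster("autoscaled-cluster");

const nodes = new eks.ManagedNodeGroup("autoscaled-nodes", {
    cluster: cluster,
    ignoreScalingChanges: true,
    scalingConfig: { minSize: 1, maxSize: 10, desiredSize: 1 },
});
```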

EKS Security Groups for Pods and Network Policies

We've added support for EKS security groups for pods and EKS Network Policies, providing more granular control over pod-to-pod and pod-to-external network communication within your EKS clusters.
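A heavily hedged sketch of enabling both features via the VPC CNI options (the option names below are assumptions based on the VPC CNI settings these features map to, ENABLE_POD_ENI and the network policy agent; verify against the v3 docs):

```typescript
import * as eks from "@pulumi/eks";

// Sketch: enable security groups for pods and EKS Network Policies.
// Flag names are assumptions; check the v3 VpcCniOptions reference.
const cluster = new eks.Cluster("netpol-cluster", {
    vpcCniOptions: {
        enablePodEni: true,        // security groups for pods
        enableNetworkPolicy: true, // EKS Network Policies
    },
});
```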

NodeGroup component deprecation

The NodeGroup component uses the deprecated AWS Launch Configuration (see AWS docs). Launch Configurations do not support instance types released after December 31, 2022, and starting on October 1, 2024, new AWS accounts will not be able to create them. Its successor, the NodeGroupV2 component, is functionally equivalent and easier to operate because it does not use CloudFormation templates under the hood like NodeGroup did.

The default node group of the Cluster component has been updated to use the NodeGroupV2 component as well. Updates to the default node group are performed by first creating the replacement nodes and then shutting down the old ones, which moves pods onto the new nodes. If you need to perform the update gracefully, please have a look at Gracefully upgrading node groups.
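A sketch of the swap (exact argument parity between the two components is an assumption; see the migration guide for any differences):

```typescript
import * as eks from "@pulumi/eks";

const cluster = new eks.Cluster("my-cluster");

// Before (deprecated): new eks.NodeGroup("workers", { ... });
// After: same shape of arguments against NodeGroupV2.
const workers = new eks.NodeGroupV2("workers", {
    cluster: cluster,
    instanceType: "t3.medium",
    minSize: 1,
    maxSize: 4,
    desiredCapacity: 2,
});
```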

Migration Guide

To help you transition smoothly, we've prepared a migration guide with these key steps:

  1. Update node groups to use AL2023 or explicitly configure AL2 if needed.
  2. Replace the deprecated NodeGroup component with NodeGroupV2.
  3. Update your code to handle new output types for certain properties.
  4. Review and update your use of default security groups, which can now be disabled.

Please refer to our EKS v3 Migration Documentation for a detailed guide.
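As a hedged illustration of step 4, opting out of the default security groups might look like this (the skipDefaultSecurityGroups flag name is an assumption; verify it against the v3 migration documentation):

```typescript
import * as eks from "@pulumi/eks";

// Sketch: create a cluster without the default cluster/node security
// groups, supplying your own instead. Flag name is an assumption.
const cluster = new eks.Cluster("lean-cluster", {
    skipDefaultSecurityGroups: true,
});
```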

Makeshift commented 4 hours ago

Unsure if this is the correct place to post bug reports, but:

I'm trialling the new v3.0.0 beta with a Bottlerocket node group. I've found that adding a taint to a ManagedNodeGroup together with any bottlerocketSettings forces the provider to generate custom userdata. The aws:eks/nodeGroup:NodeGroup resource requires the taint effect to be one of NO_SCHEDULE, NO_EXECUTE, or PREFER_NO_SCHEDULE:

  aws:eks:NodeGroup (cluster-services):
    error: aws:eks/nodeGroup:NodeGroup resource 'cluster-services' has a problem: expected effect to be one of ["NO_SCHEDULE" "NO_EXECUTE" "PREFER_NO_SCHEDULE"], got NoSchedule. Examine values at 'cluster-services.taints'.

The effect value isn't converted back to the Kubernetes spelling before the Bottlerocket userdata is generated, resulting in this in the userdata:

    [settings.kubernetes.node-taints]
    "test-taint" = "cluster-services:NO_SCHEDULE"

which is invalid.

My current workaround is a transform that decodes the base64 userdata and fixes it - not the nicest, but it works:

new eks.ManagedNodeGroup('cluster-services', {
  taints: [{
    key: 'node.kubernetes.io/role',
    value: 'cluster-services',
    effect: 'NO_SCHEDULE'
  }],
  bottlerocketSettings: {
    settings: {
      'host-containers': {
        admin: {
          enabled: true
        }
      }
    }
  },
  operatingSystem: 'Bottlerocket',
  clusterName
}, {
  transforms: [(args) => {
    if (args.type === 'aws:ec2/launchTemplate:LaunchTemplate' && args.props.userData) {
      args.props.userData = Buffer.from(
        Buffer
          .from(args.props.userData as string, 'base64')
          .toString('utf-8')
          .replaceAll('NO_SCHEDULE', 'NoSchedule')
          .replaceAll('PREFER_NO_SCHEDULE', 'PreferNoSchedule')
          .replaceAll('NO_EXECUTE', 'NoExecute')
      ).toString('base64')
    }
    return args
  }]
})