pulumi / pulumi-eks

A Pulumi component for easily creating and managing an Amazon EKS Cluster
https://www.pulumi.com/registry/packages/eks/
Apache License 2.0

Add support for EBS CSI Driver #833

Open mresetar opened 1 year ago

mresetar commented 1 year ago

What happened?

While deploying a "hello-world" EKS cluster following https://www.pulumi.com/blog/crosswalk-for-aws-1-0/:

import * as pulumi from "@pulumi/pulumi";
import * as awsx from "@pulumi/awsx";
import * as eks from "@pulumi/eks";

// Grab some values from the Pulumi configuration (or use default values)
const config = new pulumi.Config();
const minClusterSize = config.getNumber("minClusterSize") || 3;
const maxClusterSize = config.getNumber("maxClusterSize") || 6;
const desiredClusterSize = config.getNumber("desiredClusterSize") || 3;
const eksNodeInstanceType = config.get("eksNodeInstanceType") || "t2.medium";
const vpcNetworkCidr = config.get("vpcNetworkCidr") || "10.0.0.0/16";

// Create a new VPC
const eksVpc = new awsx.ec2.Vpc("eks-vpc", {
    enableDnsHostnames: true,
    cidrBlock: vpcNetworkCidr,
});

// Create the EKS cluster
const eksCluster = new eks.Cluster("eks-cluster", {
    // Put the cluster in the new VPC created earlier
    vpcId: eksVpc.vpcId,
    // Public subnets will be used for load balancers
    publicSubnetIds: eksVpc.publicSubnetIds,
    // Private subnets will be used for cluster nodes
    privateSubnetIds: eksVpc.privateSubnetIds,
    // Change configuration values to change any of the following settings
    instanceType: eksNodeInstanceType,
    desiredCapacity: desiredClusterSize,
    minSize: minClusterSize,
    maxSize: maxClusterSize,
    // Do not give the worker nodes public IP addresses
    nodeAssociatePublicIpAddress: false,
    // Uncomment the next two lines for a private cluster (VPN access required)
    // endpointPrivateAccess: true,
    // endpointPublicAccess: false
});

// Export some values for use elsewhere
export const kubeconfig = eksCluster.kubeconfig;
export const vpcId = eksVpc.vpcId;

I've noticed that the gp2 storage class (the default one) is created, but the CSI driver is not installed.

Maybe this is out of scope for Pulumi EKS support, but it would be nice to have the CSI driver set up automatically (as much as possible) for EKS.

More information on the EBS CSI driver is available in the AWS docs.

I've manually installed the CSI driver following https://github.com/kubernetes-sigs/aws-ebs-csi-driver, and after that was done PVCs are successfully bound.

Steps to reproduce

Run pulumi up with the sample EKS configuration from above, then create a deployment with a PVC. The PVC will not be bound.
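
For illustration, a minimal, hypothetical PVC like the following (not part of the original report) reproduces the symptom: without the EBS CSI driver installed, the claim is never bound and stays Pending.

import * as k8s from "@pulumi/kubernetes";

// Hypothetical PVC for reproducing the problem: without the EBS CSI
// driver installed, this claim stays Pending and never binds.
const testPvc = new k8s.core.v1.PersistentVolumeClaim("test-pvc", {
    spec: {
        accessModes: ["ReadWriteOnce"],
        storageClassName: "gp2",
        resources: { requests: { storage: "1Gi" } },
    },
});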

Expected Behavior

EBS volumes (gp2) are created and the PVCs are bound in EKS.

Actual Behavior

PVCs are not bound.

Output of pulumi about

CLI
Version      3.48.0
Go Version   go1.19.2
Go Compiler  gc

Plugins
NAME        VERSION
aws         5.21.1
aws         5.10.0
awsx        1.0.0
command     0.6.0
docker      3.6.1
eks         1.0.0
kubernetes  3.20.2
nodejs      unknown

Host
OS       ubuntu
Version  20.04
Arch     x86_64

This project is written in nodejs: executable='/usr/local/bin/node' version='v16.17.1'

Additional context

No response

Contributing

Vote on this issue by adding a 👍 reaction. To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

roothorp commented 1 year ago

Hi @mresetar, thanks for the issue! I think you're correct: the switch away from in-tree storage plugins happened in 1.23, and we should provide the functionality within the EKS component to install and configure the EBS CSI driver. I've edited the title and added a link to the EKS docs; hopefully we can use this ticket to track the implementation. Hope that's ok!

Hopefully your manual setup of the CSI driver is working for you in the meantime, but you could alternatively make use of the Kubernetes provider to create it within Pulumi; it looks like AWS provide both a Kustomize directory and a Helm chart, both of which can be used in Pulumi Kubernetes. Hope this helps!
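
For example, a minimal sketch of the Helm route could look like this (untested; it assumes `eksCluster` is the eks.Cluster from the issue description and installs the upstream aws-ebs-csi-driver chart):

import * as k8s from "@pulumi/kubernetes";

// Point a Kubernetes provider at the new cluster's kubeconfig.
const k8sProvider = new k8s.Provider("eks-k8s", {
    kubeconfig: eksCluster.kubeconfig.apply(JSON.stringify),
});

// Install the EBS CSI driver from its upstream Helm repository.
const ebsCsiDriver = new k8s.helm.v3.Release("aws-ebs-csi-driver", {
    chart: "aws-ebs-csi-driver",
    namespace: "kube-system",
    repositoryOpts: {
        repo: "https://kubernetes-sigs.github.io/aws-ebs-csi-driver",
    },
}, { provider: k8sProvider });

Note that the driver's controller also needs IAM permissions to manage EBS volumes (for example the AmazonEBSCSIDriverPolicy managed policy attached to the node role or to an IRSA role), as shown in a later comment below.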

mchristen commented 1 year ago

Any update on when this will be implemented? Piecing together how to install this from the AWS documentation was a chore, and this would save a lot of time for developers.

jhamman commented 1 year ago

@roothorp - While this feature is considered, I wonder if adding an example of using "the Kubernetes provider to create it within Pulumi; it looks like AWS provide both a Kustomize directory and a Helm chart, both of which can be used in Pulumi Kubernetes" to the docs would be useful for folks. It took me quite a while to find this issue but when I did, it really helped unblock me!

klis commented 1 year ago

@mresetar you can use https://www.pulumi.com/registry/packages/aws/api-docs/eks/addon/ to install the CSI driver with Pulumi. Sample code:

import * as aws from "@pulumi/aws";

// `cluster` is the eks.Cluster created earlier
new aws.eks.Addon("eksAwsEbsCsiDriver", {
    addonName: "aws-ebs-csi-driver",
    addonVersion: "v1.16.0-eksbuild.1",
    clusterName: cluster.core.cluster.name,
});

mresetar commented 1 year ago

Thanks, klis. I'm currently not managing the EKS cluster, but if I come back to it I'll be sure to remember this. The EBS CSI Driver add-on doc is located at https://aws-quickstart.github.io/cdk-eks-blueprints/addons/ebs-csi-driver/.

aws eks describe-addon-versions --addon-name aws-ebs-csi-driver --kubernetes-version 1.23 --query "addons[].addonVersions[].[addonVersion, compatibilities[].defaultVersion]" --output text

This returns the available add-on versions for the given Kubernetes version. Currently, this would be v1.16.1-eksbuild.1 for 1.23.
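
The same lookup can also be done from Pulumi so the version is not hardcoded; here is a small sketch using the aws.eks.getAddonVersion data source (it assumes `cluster` is the eks.Cluster from the snippet above):

import * as aws from "@pulumi/aws";

// Look up the most recent aws-ebs-csi-driver add-on version compatible
// with the cluster's Kubernetes version.
const ebsCsiVersion = aws.eks.getAddonVersion({
    addonName: "aws-ebs-csi-driver",
    kubernetesVersion: "1.23",
    mostRecent: true,
});

new aws.eks.Addon("eksAwsEbsCsiDriver", {
    addonName: "aws-ebs-csi-driver",
    addonVersion: ebsCsiVersion.then(v => v.version),
    clusterName: cluster.core.cluster.name,
});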

LockedThread commented 1 year ago

Please review @roothorp

jkodroff commented 1 year ago

While the PR is in progress, I can confirm that the following code successfully deploys the Airflow Helm chart (which did not work before, because the CSI driver is no longer present on more recent versions of EKS). This is for Kubernetes version 1.27:

import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";
import * as eks from "@pulumi/eks";

const vpc = new awsx.ec2.Vpc("eks-airflow", {
  enableDnsHostnames: true,
});

// We need to explicitly specify this role until
// https://github.com/pulumi/pulumi-eks/issues/833 is resolved:
const instanceRole = new aws.iam.Role("instance-role", {
  assumeRolePolicy: JSON.stringify({
    Version: "2012-10-17",
    Statement: [
      {
        Action: "sts:AssumeRole",
        Principal: { Service: "ec2.amazonaws.com" },
        Effect: "Allow",
      },
    ],
  }),
});

// Attach the standard worker node policies plus AmazonEBSCSIDriverPolicy,
// which the EBS CSI driver needs in order to create and attach EBS volumes.
const policyArns = [
  "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
  "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
  "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
  "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy",
];

policyArns.forEach((value, index) => new aws.iam.RolePolicyAttachment(`instance-role-policy-${index + 1}`, {
  policyArn: value,
  role: instanceRole.name,
}));

const cluster = new eks.Cluster(
  "eks-airflow", {
  vpcId: vpc.vpcId,
  publicSubnetIds: vpc.publicSubnetIds,
  privateSubnetIds: vpc.privateSubnetIds,
  desiredCapacity: 3,
  instanceType: "t3.medium",
  minSize: 3,
  maxSize: 6,
  nodeAssociatePublicIpAddress: false,
  instanceRole: instanceRole,
});

new aws.eks.Addon("ebs-csi-driver", {
  addonName: "aws-ebs-csi-driver",
  addonVersion: "v1.19.0-eksbuild.2",
  clusterName: cluster.core.cluster.name
});
matheusgr commented 1 year ago

@jkodroff, thank you for sharing your solution!

If anyone else is encountering the same issue and has difficulties with v1.19, consider updating the driver version ("v1.22.0-eksbuild.2" works for me).