aws / aws-cdk

The AWS Cloud Development Kit is a framework for defining cloud infrastructure in code
https://aws.amazon.com/cdk
Apache License 2.0

[aws-eks] Patterns Module #4955

Open arhea opened 4 years ago

arhea commented 4 years ago

Similar to ECS Patterns, I am interested in contributing EKS Patterns to make deploying and configuring EKS clusters simple. This includes deploying K8s native features such as Cluster Autoscaler and the Kubernetes Dashboard. The goal is to offer a similar experience to eksctl.

I propose the addition of a new module called eks-patterns that defines common patterns. Initially, I recommend we start with the modules that can be extracted from the AWS Documentation.
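
For illustration, here is a rough sketch of how such a pattern might be consumed. The module path, construct name, and props below are hypothetical, purely to show the shape of the API this proposal is aiming for:

    import * as cdk from '@aws-cdk/core';
    import * as ec2 from '@aws-cdk/aws-ec2';
    // Hypothetical module name following the ecs-patterns convention
    import * as eksPatterns from '@aws-cdk/aws-eks-patterns';

    const app = new cdk.App();
    const stack = new cdk.Stack(app, 'EksPatternStack');

    // Hypothetical pattern: a cluster with common add-ons wired up,
    // similar to what eksctl configures out of the box.
    new eksPatterns.ClusterWithAddons(stack, 'Cluster', {
      vpc: new ec2.Vpc(stack, 'Vpc'),
      clusterAutoscaler: true,     // deploy the Cluster Autoscaler
      kubernetesDashboard: true,   // deploy the Kubernetes Dashboard
    });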

Use Case

eksctl is a powerful tool for provisioning and managing EKS clusters; however, it abstracts away a lot of the complexity and makes it difficult for administrators to audit or adjust the code. By using the CDK, we can create reproducible and auditable CloudFormation stacks that can be version controlled, for customers looking for more fine-grained visibility. Long term, I am looking to develop "compliant" abstractions for compliance frameworks such as FedRAMP, PCI, etc.

Proposed Solution

I have implemented two references here:

Other

N/A


This is a :rocket: Feature Request

eladb commented 4 years ago

This is something we would be very interested in. Both of these I think can go into the core eks module. Happy to take in those contributions. They look very high quality. Feel free to submit PRs!

arhea commented 4 years ago

Awesome, working on automated tests. Then I'll submit a PR. Thanks for the feedback!

dodtsair commented 3 years ago

One other EKS pattern that would be important is the setup of the ingress controller (not the ingress itself). You can see in #10347 that I had to put a lot of work into figuring out what eksctl was doing in order to reproduce it in the CDK. Once I had a full solution, it generally involved pulling in the https://github.com/kubernetes-sigs/aws-alb-ingress-controller/tree/master/docs/examples repo as a dependency. Then I used files like alb-ingress-controller.yaml, iam-policy.json, and rbac-role.yaml. I didn't really need to change these files, so I didn't want to copy them into my codebase.

Where changes were needed, for example putting the cluster name into a Kubernetes YAML file, I loaded the YAML into a JavaScript object and manipulated that object.

Particularly hard was getting the integration between kubernetes and IAM so that kubernetes could have the ALB provisioned.

This is a lot of work that I imagine would be needed for most EKS clusters, and the work is fairly straightforward, with little reason for per-customer customization. I think it would be a great fit for EKS patterns.

I'll share some code snippets of what I was doing.

dodtsair commented 3 years ago

Some of the basic steps I went through to get the ALB working:

Start with a cluster

const cluster = new eks.FargateCluster(this, 'cluster', {...})

We'll need the clusterId, which isn't directly exposed. Extract it from the OIDC issuer URL:

        // Example URL: https://oidc.eks.us-west-2.amazonaws.com/id/B01EF2EC7AC85DCEED81633BDA4ED90A
        // The cluster id is the last path segment (index 4 after splitting on '/')
        const clusterId = Fn.select(4, Fn.split('/', cluster.clusterOpenIdConnectIssuerUrl))

For the ingress controller to create the ALB resources in AWS, it needs to assume an IAM role via OIDC integration. Define the federated principal for that role (this follows the pattern documented by eksctl):

        const federatedPrincipal = new iam.FederatedPrincipal(
            cluster.openIdConnectProvider.openIdConnectProviderArn,
            {
                StringEquals: new CfnJson(this, "FederatedPrincipalCondition", {
                    value: {
                        [`oidc.eks.${vpc.env.region}.amazonaws.com/id/${clusterId}:aud`]: "sts.amazonaws.com",
                        [`oidc.eks.${vpc.env.region}.amazonaws.com/id/${clusterId}:sub`]: `system:serviceaccount:kube-system:${serviceAccountName}`
                    }
                })
            }, "sts:AssumeRoleWithWebIdentity")

A few notes: 1) serviceAccountName should be the same as the service account name in the Kubernetes cluster; it is defined in rbac-role.yaml as alb-ingress-controller. 2) CfnJson is needed because we have CDK tokens in the keys of the JavaScript object.

Now associate the principal to an IAM Role:

        const iamRole = new iam.Role(this, 'iam-role', {
            assumedBy: federatedPrincipal
        })

At this point the role has no permissions, so it cannot do anything. The alb-ingress-controller repo has an example file that documents all the permissions Kubernetes will need in order to create the ALB; rather than reproducing that file, we'll pull it in via require.

Include the ALB ingress controller repo as a devDependency to gain access to its documentation files, which contain the standard YAML manifests for Kubernetes:

  "devDependencies": {
     "aws-alb-ingress-controller": "git://github.com/kubernetes-sigs/aws-alb-ingress-controller.git#v1.1.8",
     ...
   }

Now pull in the iam-policy.json and use it to create a new IAM Managed Policy

        const policyDocument = iam.PolicyDocument.fromJson(require('aws-alb-ingress-controller/docs/examples/iam-policy.json'))
        const managedPolicy = new iam.ManagedPolicy(this, 'managed-policy', {
            document: policyDocument
        })

Now associate the managed policy with the IAM role so that Kubernetes has the permissions needed to create the ALB:

iamRole.addManagedPolicy(managedPolicy)

We are going to start loading Kubernetes manifests into the EKS cluster, but some of them require slight modification. Again, we could copy the sample code into our repo, or we could make the one change after we load it. I'll do the latter. I'll use this function to load a file and make the changes needed:

        // Assumes: const { readFileSync } = require('fs') and const yaml = require('js-yaml')
        let load = function (path, filter) {
            const absolutePath = require.resolve(path);
            const yamlText = readFileSync(absolutePath, 'utf8')
            // A single file can contain multiple YAML documents, so this yields an array of manifests
            const configs = yaml.safeLoadAll(yamlText);
            return filter ? filter(configs) : configs
        };

First up is alb-ingress-controller.yaml, which works mostly as-is. In my case I was using a Fargate cluster, which is more limited, so I need to provide the cluster name, VPC ID, and region when creating the ingress controller in Kubernetes.

I know the configuration file contains only one config, and that there is one and only one entry in spec.template.spec.containers. However, I still use forEach and map because I don't like assuming there is only one entry and using [0]. The key point is that we need to add to container.args to pass in the additional parameters; everything else stays the same.

        let ingressController = load('aws-alb-ingress-controller/docs/examples/alb-ingress-controller.yaml', function (configs) {
            configs.forEach((config) => {
                config.spec.template.spec.containers = config.spec.template.spec.containers.map((container) => {
                    return Object.assign({}, container, {
                        args: [
                            '--cluster-name=' + cluster.clusterName,
                            '--aws-vpc-id=' + vpc.vpcId,
                            '--aws-region=' + vpc.env.region,
                            ...container.args]
                    })
                })
            })
            return configs;

        });

Next we use the sample's rbac-role.yaml. This will create the service account, role, and role binding in Kubernetes. Most of this works fine, except the service account needs to be created with the ARN of the role we created above. This file actually contains several manifests; we want the one of kind ServiceAccount, and we'll update its annotations with the ARN of the role.

        let rbac = load('aws-alb-ingress-controller/docs/examples/rbac-role.yaml', (configs) => {
            const serviceAccount = configs.find(config => config.kind === 'ServiceAccount')
            serviceAccount.metadata.annotations = Object.assign({}, serviceAccount.metadata.annotations, {
                'eks.amazonaws.com/role-arn': iamRole.roleArn
            })
            return configs;
        });

Then you dump the manifests into your cluster:

        new eks.KubernetesManifest(this, 'rbac-manifest', {
            cluster,
            manifest: rbac
        });
        new eks.KubernetesManifest(this, 'ingress-controller-manifest', {
            cluster,
            manifest: ingressController
        })

Hopefully this will help someone set up EKS + ingress in the CDK. If I have time later I'll create some repos like @arhea.

vsetka commented 3 years ago

Is there any progress on this front? If no construct/pattern will be provided, a full working example of how to accomplish this in CDK would be greatly appreciated.

iliapolo commented 3 years ago

@vsetka We are not actively working on this. What specific example are you looking for?

vsetka commented 3 years ago

@iliapolo I'd personally be interested in a production-grade setup (with appropriate service roles and IAM mapping) with the AWS load balancer ingress controller and external DNS (with Route53). I think this is something a lot of people need in one form or another.
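
For reference, this is roughly the shape I'm after, using the existing aws-eks escape hatches. This is only a sketch of what I mean: the chart coordinates, values, and IAM wiring are my best guess and unverified, not a working example.

    // Sketch only -- chart names, repos, and values are unverified assumptions.
    const sa = cluster.addServiceAccount('AlbControllerSA', {
      name: 'aws-load-balancer-controller',
      namespace: 'kube-system',
    });
    // The controller's IAM permissions would still need to be attached to sa.role.

    cluster.addHelmChart('AlbController', {
      chart: 'aws-load-balancer-controller',
      repository: 'https://aws.github.io/eks-charts',
      namespace: 'kube-system',
      values: {
        clusterName: cluster.clusterName,
        serviceAccount: { create: false, name: sa.serviceAccountName },
      },
    });

    cluster.addHelmChart('ExternalDns', {
      chart: 'external-dns',
      repository: 'https://charts.bitnami.com/bitnami',
      namespace: 'kube-system',
      values: { provider: 'aws' },
    });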

iliapolo commented 3 years ago

@pahud Wondering if you can help out here?

adambro commented 2 years ago

I've discovered an AWS quickstart that covers some of the cases asked about here. Actually it covers quite a lot, the whole EKS setup with the necessary add-ons, so check it out: https://github.com/aws-quickstart/quickstart-eks-cdk-python