aws / aws-cdk

The AWS Cloud Development Kit is a framework for defining cloud infrastructure in code
https://aws.amazon.com/cdk
Apache License 2.0

[aws-eks] Can't log into fresh EKS cluster with SAML mastersRole #6982

Closed · dr3s closed 2 years ago

dr3s commented 4 years ago

I used the CDK to create an EKS cluster with an assumed role, and I cannot log in even though the role I set as mastersRole is one I can assume. Unlike https://github.com/aws/aws-cdk/issues/3752, I did set the mastersRole.

I followed the example here: https://docs.aws.amazon.com/cdk/api/latest/docs/aws-eks-readme.html

Reproduction Steps

Initially I thought setting the mastersRole should be enough:

// admin role
const clusterAdmin = iam.Role.fromRoleArn(this, 'AdminRole',
  "arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team");

const cluster = new eks.Cluster(this, 'KubeFlowCluster', {
  defaultCapacity: 3,
  defaultCapacityInstance: new ec2.InstanceType('t3.large'),
  mastersRole: clusterAdmin,
  vpc: vpc,
  vpcSubnets: [{ subnets: vpc.privateSubnets }],
});

I thought that would also set up the aws-auth mapping in EKS, but I have since added the following, which also didn't help:

cluster.awsAuth.addMastersRole(clusterAdmin)

In fact this wasn't necessary and just added a duplicate masters role entry, but I wanted to illustrate what I tried.

Error Log

(base) ➜ kubeflow-eks git:(master) ✗ eksctl get cluster
NAME                                                       REGION
KubeFlowCluster6318BD13-370645a8943946f49942987f1352f2c3   eu-west-1

(base) ➜ kubeflow-eks git:(master) ✗ eksctl get iamidentitymapping --cluster KubeFlowCluster6318BD13-370645a8943946f49942987f1352f2c3
Error: getting auth ConfigMap: Unauthorized

Environment

Other

This is the CloudFormation template section generated by the CDK for the aws-auth ConfigMap:


"KubeFlowClusterAwsAuthmanifest4ABE9919": {
      "Type": "Custom::AWSCDK-EKS-KubernetesResource",
      "Properties": {
        "ServiceToken": {
          "Fn::GetAtt": [
            "awscdkawseksKubectlProviderNestedStackawscdkawseksKubectlProviderNestedStackResourceA7AEBA6B",
            "Outputs.KubeflowEksDevawscdkawseksKubectlProviderframeworkonEventA20B6922Arn"
          ]
        },
        "Manifest": {
          "Fn::Join": [
            "",
            [
              "[{\"apiVersion\":\"v1\",\"kind\":\"ConfigMap\",\"metadata\":{\"name\":\"aws-auth\",\"namespace\":\"kube-system\"},\"data\":{\"mapRoles\":\"[{\\\"rolearn\\\":\\\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\\\",\\\"username\\\":\\\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\\\",\\\"groups\\\":[\\\"system:masters\\\"]},{\\\"rolearn\\\":\\\"",
              {
                "Fn::GetAtt": [
                  "KubeFlowClusterDefaultCapacityInstanceRoleE883FDD5",
                  "Arn"
                ]
              },
              "\\\",\\\"username\\\":\\\"system:node:{{EC2PrivateDNSName}}\\\",\\\"groups\\\":[\\\"system:bootstrappers\\\",\\\"system:nodes\\\"]},{\\\"rolearn\\\":\\\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\\\",\\\"username\\\":\\\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\\\",\\\"groups\\\":[\\\"system:masters\\\"]}]\",\"mapUsers\":\"[]\",\"mapAccounts\":\"[]\"}}]"
            ]
          ]
        },

It may not be clear, but it seems the config map isn't correct: the mapRoles array appears to be an array embedded in a string instead of an array object.

apiVersion: v1
data:
  mapAccounts: '[]'
  mapRoles: '[{"rolearn":"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team","username":"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team","groups":["system:masters"]},{"rolearn":"arn:aws:iam::674300753731:role/KubeflowEks-Dev-KubeFlowClusterDefaultCapacityInst-1SBZV2PTF6QIH","username":"system:node:{{EC2PrivateDNSName}}","groups":["system:bootstrappers","system:nodes"]},{"rolearn":"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team","username":"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team","groups":["system:masters"]}]'
  mapUsers: '[]'
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"mapAccounts":"[]","mapRoles":"[{\"rolearn\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"username\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"groups\":[\"system:masters\"]},{\"rolearn\":\"arn:aws:iam::674300753731:role/KubeflowEks-Dev-KubeFlowClusterDefaultCapacityInst-1SBZV2PTF6QIH\",\"username\":\"system:node:{{EC2PrivateDNSName}}\",\"groups\":[\"system:bootstrappers\",\"system:nodes\"]},{\"rolearn\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"username\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"groups\":[\"system:masters\"]}]","mapUsers":"[]"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"aws-auth","namespace":"kube-system"}}
  creationTimestamp: "2020-03-08T14:19:08Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "4538"
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
  uid: c65c4c0b-6147-11ea-a6b1-02aa720c17c2

This is a :bug: Bug Report

eladb commented 4 years ago

Needs a repro

dr3s commented 4 years ago

kubeflow-eks.zip

dr3s commented 4 years ago

Also see Case ID 6860089261

eladb commented 4 years ago

mapRoles is expected to be an array encoded inside a string.
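
For illustration (an assumption about the reason, not the CDK's actual code): Kubernetes ConfigMap `data` values must be strings, so the roles array gets JSON-serialized into one. A minimal TypeScript sketch:

// Illustrative only: ConfigMap `data` values must be strings, so aws-auth
// stores the roles array JSON-encoded inside a string.
const mapRoles = [{
  rolearn: 'arn:aws:iam::111122223333:role/MyMastersRole', // hypothetical ARN
  username: 'arn:aws:iam::111122223333:role/MyMastersRole',
  groups: ['system:masters'],
}];

const awsAuthData = {
  mapRoles: JSON.stringify(mapRoles), // the "array encoded inside a string"
  mapUsers: '[]',
  mapAccounts: '[]',
};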

I am unable to reproduce this:

  1. Created a new IAM role with a trust policy that allowed me to assume it (e.g. trust the current account).
  2. Reference this role as mastersRole: Role.fromRoleArn(...) (a minimal sketch follows this list).
  3. Deploy the cluster.
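
A minimal sketch of these steps, assuming CDK v1-style imports, an illustrative ARN, and code living inside a Stack class (this is not the exact repro code):

import * as eks from '@aws-cdk/aws-eks';
import * as iam from '@aws-cdk/aws-iam';

// Step 1 happens outside the CDK: an IAM role whose trust policy lets the
// current account assume it. Step 2 imports that role by ARN:
const mastersRole = iam.Role.fromRoleArn(
  this,
  'ImportedMastersRole',
  'arn:aws:iam::111122223333:role/MyMastersRole' // hypothetical ARN
);

// Step 3: deploy a cluster that maps this role to system:masters.
const cluster = new eks.Cluster(this, 'Cluster', {
  mastersRole: mastersRole,
});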

Then, execute the following command to update the k8s configuration:

aws eks update-kubeconfig --name <CLUSTER-NAME> --region us-east-2 --role-arn <ROLE-ARN>

Then:

kubectl get configmap/aws-auth -n kube-system -o yaml

Returns the expected aws-auth configuration.

I am closing for now. Reopen when you have additional information.

dr3s commented 4 years ago

The string vs array comment was from Amazon premium support. They have since said they were mistaken.

Have you looked at the code I attached and its output of cdk synth?

The steps I have to follow are different, and maybe that's related:

  1. I am using SAML roles and STS. I cannot use an IAM user; I must log in with SSO and get an assumed role.
  2. It is this assumed role that I am setting as the masters role.
  3. I cannot do a ConfigMap get despite setting up the k8s context with the same role. The only way I am able to use kubectl or eksctl is to assume the role created with the cluster by the CDK.

AWS said it was a problem with the config map but now they have recanted. They instructed me to open this issue. I really have no idea but the code I attached is pretty simple and does not work for the flow I described.

eladb commented 4 years ago

What is the output you are getting when you run kubectl get all?

dr3s commented 4 years ago

It's in the support case, but effectively it's access denied.

eladb commented 4 years ago

Can you paste the aws eks update-kubeconfig command you are executing?

dr3s commented 4 years ago

https://console.aws.amazon.com/support/home?region=eu-west-1#/case/?displayId=6860089261&language=en

(base) ➜ kubeflow-eks git:(master) ✗ kubectl get configmap aws-auth -n kube-system -o yaml

This returns the same aws-auth ConfigMap quoted in the issue description above.

The CloudFormation template section generated by the CDK for the aws-auth ConfigMap is likewise identical to the one quoted in the issue description.

(base) ➜ kubeflow-eks git:(master) ✗ aws sts get-caller-identity
{
    "Account": "674300753731",
    "UserId": "AROAIXSWYIIDLDMHO5GPO:amarch",
    "Arn": "arn:aws:sts::674300753731:assumed-role/aws-vbumodelscoring-management-team/amarch"
}

(base) ➜ kubeflow-eks git:(master) ✗ aws eks update-kubeconfig --name KubeFlowCluster6318BD13-370645a8943946f49942987f1352f2c3 --region eu-west-1 --role-arn arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team --profile vbumodelscoring-admin
Updated context arn:aws:eks:eu-west-1:674300753731:cluster/KubeFlowCluster6318BD13-370645a8943946f49942987f1352f2c3 in /Users/amarch/.kube/config

(base) ➜ kubeflow-eks git:(master) ✗ aws-iam-authenticator token -i KubeFlowCluster6318BD13-370645a8943946f49942987f1352f2c3
{"kind":"ExecCredential","apiVersion":"client.authentication.k8s.io/v1alpha1","spec":{},"status":{"expirationTimestamp":"2020-03-08T15:13:27Z","token":"k8s-aws-v1.blahblahblah"}}

Verification of the token works, yet I cannot log into EKS:

(base) ➜ kubeflow-eks git:(master) ✗ kubectl get nodes
error: You must be logged in to the server (Unauthorized)

(base) ➜ kubeflow-eks git:(master) ✗ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
error: You must be logged in to the server (Unauthorized)

(base) ➜ kubeflow-eks git:(master) ✗ kubectl cluster-info dump
error: You must be logged in to the server (Unauthorized)

(base) ➜ kubeflow-eks git:(master) ✗ eksctl get cluster
NAME                                                       REGION
KubeFlowCluster6318BD13-370645a8943946f49942987f1352f2c3   eu-west-1

(base) ➜ kubeflow-eks git:(master) ✗ eksctl get iamidentitymapping --cluster KubeFlowCluster6318BD13-370645a8943946f49942987f1352f2c3
Error: getting auth ConfigMap: Unauthorized

(base) ➜ kubeflow-eks git:(master) ✗ eksctl get fargateprofile --cluster KubeFlowCluster6318BD13-370645a8943946f49942987f1352f2c3
NAME                                                              SELECTOR_NAMESPACE  SELECTOR_LABELS  POD_EXECUTION_ROLE_ARN                                                                             SUBNETS
KubeFlowClusterfargateprofileD-e2c227b8dbf1453db48021da16e9ebb4   default                              arn:aws:iam::674300753731:role/KubeflowEks-Dev-KubeFlowClusterfargateprofileDefau-1JAXROWG84BPR   subnet-cdd02aaa,subnet-97f731de,subnet-a33137fb

farshadniayeshpour commented 4 years ago

I had the same issue. I deleted the cluster and redeployed and I could log into the cluster with kubectl.

dr3s commented 4 years ago

If you have an example of a CDK stack that works with assumed roles from SAML as described in the flow here, I would be very grateful: https://github.com/aws/aws-cdk/issues/6982#issuecomment-612616173

I have tried a lot of variations and haven't found a solution other than assuming the role the stack creates for the cluster.

farshadniayeshpour commented 4 years ago

@dr3s I can email you the script

eladb commented 4 years ago

@FarshadNiayesh Would be great if you can share some details for future generations...

farshadniayeshpour commented 4 years ago

@dr3s @eladb So this is the code I am using:

import os

from aws_cdk import aws_ec2 as ec2
from aws_cdk import aws_eks as eks
from aws_cdk import aws_iam as iam


# Inside the stack class:

def eks_iam_roles(self):
    # Role that principals in this account can assume to administer the cluster.
    cluster_admin_role = iam.Role(self, f"cluster-admin-role-{self.ENVIRONMENT}",
                                  role_name=f"KubernetesAdmin-{self.ENVIRONMENT}",
                                  assumed_by=iam.AccountRootPrincipal())

    # Managed policy granting sts:AssumeRole on the admin role.
    admin_policy_statement = iam.PolicyStatement(resources=[cluster_admin_role.role_arn],
                                                 actions=["sts:AssumeRole"],
                                                 effect=iam.Effect.ALLOW)

    assume_eks_admin_role = iam.ManagedPolicy(self, f"assume-eks-admin-role-{self.ENVIRONMENT}",
                                              managed_policy_name=f"assume-KubernetesAdmin-role-{self.ENVIRONMENT}")
    assume_eks_admin_role.add_statements(admin_policy_statement)

    # Service role for the EKS control plane.
    eks_cluster_role = iam.Role(self, f"eks-role-{self.ENVIRONMENT}",
                                assumed_by=iam.ServicePrincipal("eks.amazonaws.com"),
                                managed_policies=[iam.ManagedPolicy.from_aws_managed_policy_name("AmazonEKSServicePolicy"),
                                                  iam.ManagedPolicy.from_aws_managed_policy_name("AmazonEKSClusterPolicy")])

    # Role that gets mapped to system:masters via masters_role.
    eks_master_role = iam.Role(self, f"eks-cluster-admin-{self.ENVIRONMENT}",
                               assumed_by=iam.AccountRootPrincipal())

    return cluster_admin_role, eks_cluster_role, eks_master_role


def eks_cluster(self, cluster_name, eks_master_role, eks_cluster_role, vpc=None, subnets=None,
                security_group=None, default_capacity=0, default_capacity_instance="r5.large"):
    if vpc:
        eks_cluster = eks.Cluster(self, f"{os.environ['APP_NAME']}-cluster-{self.ENVIRONMENT}",
                                  default_capacity=default_capacity,
                                  # default_capacity_instance=ec2.InstanceType(default_capacity_instance),
                                  kubectl_enabled=True,
                                  cluster_name=f"{cluster_name}-{self.ENVIRONMENT}",
                                  masters_role=eks_master_role,
                                  role=eks_cluster_role,
                                  # security_group=eks_security_group,
                                  vpc=vpc,
                                  output_cluster_name=True,
                                  output_masters_role_arn=True,
                                  vpc_subnets=[ec2.SubnetSelection(subnets=subnets)])
    else:
        eks_cluster = eks.Cluster(self, f"{os.environ['APP_NAME']}-cluster-{self.ENVIRONMENT}",
                                  default_capacity=default_capacity,
                                  default_capacity_instance=ec2.InstanceType(default_capacity_instance),
                                  default_capacity_type=eks.DefaultCapacityType.NODEGROUP,
                                  masters_role=eks_master_role,
                                  output_cluster_name=True,
                                  output_config_command=True,
                                  output_masters_role_arn=True,
                                  role=eks_cluster_role,
                                  # security_group=eks_security_group,
                                  # If you want to create public load balancers, this must include public subnets:
                                  # vpc_subnets=[ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE)]
                                  )

    return eks_cluster


# In the stack's constructor:
cluster_admin_role, eks_cluster_role, eks_master_role = self.eks_iam_roles()

# Create the Kubernetes cluster.
eks_cluster = self.eks_cluster(cluster_name="rapid-prototyping-tool-cluster",
                               eks_master_role=eks_master_role,
                               eks_cluster_role=eks_cluster_role,
                               # vpc=rpt_vpc,
                               # subnets=private_subnets,
                               # security_group=eks_sg
                               )

# Add a managed nodegroup to this Amazon EKS cluster.
# This creates a new managed nodegroup and adds it to the capacity.
eks_cluster.add_nodegroup(
    id="managed-nodegroup",
    desired_size=int(os.environ["APP_DESIRED_CAPACITY"]),
    disk_size=int(os.environ["APP_DISK_SIZE"]),
    instance_type=ec2.InstanceType(os.environ["APP_INSTANCE_TYPE"]),
    max_size=int(os.environ["APP_MAX_CAPACITY"]),
    min_size=int(os.environ["APP_MIN_CAPACITY"]),
    nodegroup_name=f'eks-{os.environ["APP_NAME"]}-nodegroup',
    remote_access=eks.NodegroupRemoteAccess(ssh_key_name=f"rpt-production-key-{self.ENVIRONMENT}",
                                            # source_security_groups=[eks_sg]
                                            ),
    subnets=ec2.SubnetSelection(subnets=eks_cluster.vpc.private_subnets),
)

# Map the admin role to system:masters in the aws-auth ConfigMap.
aws_auth = eks.AwsAuth(self, "awsAuthId", cluster=eks_cluster)
aws_auth.add_masters_role(cluster_admin_role, username=f"k8s-cluster-admin-user-{self.ENVIRONMENT}")

Is this something you were looking for?

After the stack is deployed, I just use the aws eks update-kubeconfig command with the --role-arn option set to the proper role.

dr3s commented 4 years ago

Thanks @FarshadNiayesh. I don't know how yours is different from what I wrote above, except for the role you are using.

My example is specifically with using an assumed role via SAML that is already created. I'm loading it in the cdk via its ARN.

You seem to be creating a role in CDK for the cluster. This should be similar to the role that's created by default and assigned to the cluster nodes. I don't have any issue assuming this role and managing the cluster, so I wouldn't expect that I would have an issue with your stack.

I'll give it a try but I don't think it addresses my root issue.

dr3s commented 4 years ago

Got it narrowed down. This works:

const clusterAdmin = new iam.Role(this, `eks-cluster-admin-${id}`, {
  assumedBy: new iam.AccountRootPrincipal(),
});

const cluster = new eks.Cluster(this, "KubeFlowCluster", {
  defaultCapacity: 3,
  defaultCapacityInstance: new ec2.InstanceType("t3.large"),
  mastersRole: clusterAdmin,
  vpc: vpc,
  vpcSubnets: [{ subnets: vpc.privateSubnets }],
});

This doesn't work:

const clusterAdmin = iam.Role.fromRoleArn(
  this,
  `adminRole-${id}`,
  "arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team"
);

const cluster = new eks.Cluster(this, "KubeFlowCluster", {
  defaultCapacity: 3,
  defaultCapacityInstance: new ec2.InstanceType("t3.large"),
  mastersRole: clusterAdmin,
  vpc: vpc,
  vpcSubnets: [{ subnets: vpc.privateSubnets }],
});

I think it has to do with the role being a SAML role. I don't know why the role's trusted entities would make a difference. I'll update the title of the issue to be more specific, but I'm at a loss. It's possible that this has more to do with EKS than the CDK.
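
For contrast, a hedged sketch of the two kinds of trust policy in play (the ARNs and conditions below are illustrative, not the actual ones from this account). One possibly relevant constraint from the EKS documentation, not established in this thread: role ARNs in aws-auth mapRoles entries must not include a path, and the imported role here carries the path CimpressADFS/vistaprint/.

import * as iam from '@aws-cdk/aws-iam';

// Works: any principal in the account may assume this role, including the
// STS session that kubectl/aws-iam-authenticator ends up using.
const workingRole = new iam.Role(this, 'WorkingMastersRole', {
  assumedBy: new iam.AccountRootPrincipal(),
});

// Fails here: the imported role trusts only federated SAML sign-ins
// (an illustrative trust policy, not the account's actual one).
const samlStyleRole = new iam.Role(this, 'SamlStyleMastersRole', {
  assumedBy: new iam.FederatedPrincipal(
    'arn:aws:iam::111122223333:saml-provider/ExampleADFS', // hypothetical
    { StringEquals: { 'SAML:aud': 'https://signin.aws.amazon.com/saml' } },
    'sts:AssumeRoleWithSAML'
  ),
});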

dr3s commented 4 years ago

Based upon experimentation, I have found it works if I do two things:

const clusterAdmin = new iam.Role(this, `eks-cluster-admin-${id}`, {
  assumedBy: new iam.AccountRootPrincipal(),
});

const cluster = new eks.Cluster(this, "FeastCluster", {
  defaultCapacity: 0,
  mastersRole: clusterAdmin,
  vpc: vpc,
  vpcSubnets: [{ subnets: vpc.privateSubnets }],
});

cluster.awsAuth.addMastersRole(clusterAdmin);

cluster.addNodegroup("NGDefault", {
  instanceType: new ec2.InstanceType("t3.large"),
  diskSize: 100,
  minSize: 3,
  maxSize: 6,
});
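
As a follow-up convenience (an assumption, not part of the original comment): the generated role's ARN can be surfaced as a stack output so operators know which role to pass to aws eks update-kubeconfig --role-arn; the cluster's own outputMastersRoleArn: true property, used in the Python example above, achieves much the same.

import * as cdk from '@aws-cdk/core';

// Assumed convenience sketch: export the admin role's ARN after deployment so
// it can be fed to `aws eks update-kubeconfig --role-arn ...`.
new cdk.CfnOutput(this, 'ClusterAdminRoleArn', {
  value: clusterAdmin.roleArn,
});
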
otaviomacedo commented 2 years ago

@dr3s this seems like an EKS problem. Can you provide us with the EKS cluster ARN, so that the EKS team can investigate this further?

github-actions[bot] commented 2 years ago

This issue has not received a response in a while. If you want to keep this issue open, please leave a comment below and auto-close will be canceled.

boscowitch commented 4 months ago

This is still a bug, and I think it is in the CDK, since simply setting up the cluster with Terraform using the access key tokens etc. from my SAML SSO user's assumed role just works, and I have console UI access to the cluster's resources without any additional role switching or assuming. (By the way, Terraform has taken me a week less work so far, while the CDK version is still not resolved or equivalent to it; the CDK also seems unable to tag existing subnets, but that's another story.)

Using a newly created role (with the assume policy etc. from above) works:

const clusterAdmin = new Role(this, `eks-cluster-admin-${id}`, {
  assumedBy: new iam.AccountRootPrincipal(),
});
...

However, that's more of a hacky workaround for SSO logins, since role switching in the console UI and CLI is a bit of a nuisance, as is handling the long ARNs.

Especially since it's clearly possible with Terraform and its EKS module (or a manual setup of EKS).

Why can't the CDK simply assume the SSO role correctly, or just use the tokens I provide directly? This would make using CDK/EKS with SSO far simpler.

Especially since our SSO role was already created for the purpose of EKS admin: arn:aws:iam::XXXXXXXXXXXX:role/aws-reserved/sso.amazonaws.com/eu-central-1/AWSReservedSSO_EKSAdmin_XXXXXXXXXXXXXXXX

PS: Companies increasingly employ SSO for AWS accounts, along with centralized management of roles, subnets, etc. That makes this even more important, since dev/deployment time increases immensely with this support lacking.