kubernetes-sigs / aws-iam-authenticator

A tool to use AWS IAM credentials to authenticate to a Kubernetes cluster
Apache License 2.0

error: You must be logged in to the server (Unauthorized) -- same IAM user created cluster #174

Closed mrichman closed 3 years ago

mrichman commented 5 years ago

My AWS CLI credentials are set to the same IAM user which I used to create my EKS cluster. So why would kubectl cluster-info dump give me error: You must be logged in to the server (Unauthorized)?

kubectl config view is as follows:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://64859043D67EB498AA6D274A99C73C58.yl4.us-east-2.eks.amazonaws.com
  name: arn:aws:eks:us-east-2:629054125090:cluster/EKSDeepDive
contexts:
- context:
    cluster: arn:aws:eks:us-east-2:629054125090:cluster/EKSDeepDive
    user: arn:aws:eks:us-east-2:629054125090:cluster/EKSDeepDive
  name: arn:aws:eks:us-east-2:629054125090:cluster/EKSDeepDive
current-context: arn:aws:eks:us-east-2:629054125090:cluster/EKSDeepDive
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-2:629054125090:cluster/EKSDeepDive
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - EKSDeepDive
      command: aws-iam-authenticator
      env: null
aws sts get-caller-identity
{
    "UserId": "AIDAJAKDBFFCB4EVPCQ6E",
    "Account": "629054125090",
    "Arn": "arn:aws:iam::629054125090:user/mrichman"
}
sonicintrusion commented 5 years ago

that env: null might be causing you a problem. have you tried removing it?

ciribob commented 5 years ago

I'm having the exact same issue.

Created a new cluster from scratch using a non-root account.

The env: null only appears when kubectl config view is run - it's not in the main file.

mrichman commented 5 years ago

I ended up blowing away the cluster and creating a new one. I never had the issue again on any other cluster. I wish I had better information to share.

pavel-khritonenko commented 5 years ago

Have the same issue; the token could be verified using aws-iam-authenticator just fine.

VojtechVitek commented 5 years ago

same issue here.. is there any debugging information I could provide?

ciribob commented 5 years ago

Found the issue with help from AWS support - it appears the aws-iam-authenticator wasn't picking up the credentials properly from the path

Manually running

export AWS_ACCESS_KEY_ID=KEY
export AWS_SECRET_ACCESS_KEY=SECRET-KEY
aws-iam-authenticator token -i cluster-name

Then pulling out the token and running

aws-iam-authenticator verify -t k8s-aws-v1.really_long_token -i cluster-name

to make sure it's all working.

Oddly the aws-iam-authenticator did give me a token - I have no idea for what...
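(If you're wondering which identity a token corresponds to - a minimal check, assuming the authenticator resolves the same credential chain as the AWS CLI:)

# the token is generated for whatever identity STS reports here
aws sts get-caller-identity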

VojtechVitek commented 5 years ago

So we were hitting this issue with IAM users that didn't initially create the EKS cluster; they always got error: You must be logged in to the server (Unauthorized) when using kubectl (even though aws-iam-authenticator gave them some token).

We had to explicitly grant our IAM users access to the EKS cluster in our Terraform code.

sonicintrusion commented 5 years ago

can you elaborate on that ^^ ? do you have to grant access to the EKS cluster specifically?

kenerwin88 commented 5 years ago

I stumbled upon this same issue ;). Did you find a fix @sonicintrusion?

whereisaaron commented 5 years ago

You need to map IAM users or roles into the cluster using the aws-auth ConfigMap. This is done automatically for the user who creates the cluster.

https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html

Here is a script example for adding a role:

https://eksworkshop.com/codepipeline/configmap/
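(For reference, a minimal sketch of that ConfigMap granting an extra IAM user cluster admin - the account ID and user name below are placeholders:)

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/alice
      username: alice
      groups:
        - system:masters

(Apply it with kubectl apply -f aws-auth.yaml, or edit in place with kubectl edit -n kube-system configmap/aws-auth, taking care not to disturb the existing mapRoles entries for the node role.)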

kenerwin88 commented 5 years ago

Thank you very much!


kumarifet commented 5 years ago

Found the issue with help from AWS support - it appears the aws-iam-authenticator wasn't picking up the credentials properly from the path

Manually running

export AWS_ACCESS_KEY_ID=KEY
export AWS_SECRET_ACCESS_KEY=SECRET-KEY
aws-iam-authenticator token -i cluster-name

Then pulling out the token and running

aws-iam-authenticator verify -t k8s-aws-v1.really_long_token -i cluster-name

to make sure its all working

Oddly the aws-iam-authenticator did give me a token - I have no idea to what...

Thank you, it's working.

jaygorrell commented 5 years ago

env: null

This was it for me. That was actually in the file and was overriding my real env section with a specific profile to use.

aaronrryan commented 5 years ago

I'm not sure which user is given permissions when creating the EKS cluster through the web console, so I ended up building the cluster using "eksctl", and then I was able to access the cluster with kubectl from the CLI.

https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html

f-ld commented 5 years ago

Short story: this happens on badly created clusters too.

Long story: came here after a first attempt to create a cluster with eksctl. Actually, the creation failed for the following reason on some nodes:

Aug  2 12:58:17 ip-10-13-1-111 cloud-init[558]: Cloud-init v. 0.7.9 running 'modules:final' at Fri, 02 Aug 2019 12:58:17 +0000. Up 11.08 seconds.
Aug  2 12:58:17 ip-10-13-1-111 cloud-init[558]: 2019-08-02 12:58:17,458 - util.py[WARNING]: Failed running /var/lib/cloud/scripts/per-instance/bootstrap.al2.sh [1]

So this caused the cluster creation to fail with a 25m timeout waiting for nodes to be ready. And when I tried kubectl get nodes, I got the error mentioned in this issue.

mkamrani commented 4 years ago

AWS_SECRET_ACCESS_KEY

You can add them as environment variables in the config file as well:

env:
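(Presumably the env block continues along these lines - placeholder values, not from the original comment:)

env:
- name: AWS_ACCESS_KEY_ID
  value: <your-access-key-id>
- name: AWS_SECRET_ACCESS_KEY
  value: <your-secret-access-key>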

NapalmCodes commented 4 years ago

I figured this out too: the user you create the cluster with (whether console or CLI) is the only user that can execute k8s API calls via kubectl. I find this kind of strange, as we use deployment users to do this work and they would not be administering the cluster via kubectl. Is there a way to assign API rights to a user other than the deployment account?

michael-burt commented 4 years ago

@napalm684 use this guide: https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html

With that said, it is not working for me when I try to add an IAM user

philborlin commented 4 years ago

In my case I created the cluster with a role and then neglected to use the profile switch when using the update-kubeconfig command. When I modified the command to be aws eks update-kubeconfig --name my-cluster --profile my-profile a correct config was written and kubectl started authenticating correctly.

What this did was modify my env to:

env:
- name: AWS_PROFILE
  value: my-profile

nitrogear commented 4 years ago

the solution from @whereisaaron helped me. thanks a lot!

smaser-talend commented 4 years ago

I resolved this issue by checking/updating the date/time on my client machine.
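(That helps because the token is a time-limited presigned STS request, so significant clock skew makes it invalid. A quick sanity check on a systemd-based Linux machine, for example:)

# look for "System clock synchronized: yes" in the output
timedatectl status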

0foo commented 4 years ago

Just wanted to add that you can add your credentials profile in your ~/.kube/config file. If your kubectl config view shows env: null this might be the issue.

 user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - <some name>
      command: aws-iam-authenticator
      env:
        - name: AWS_PROFILE
          value: "<profile in your ~/.aws/credentials file>"

https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html

I don't like environment variables personally, and this is another option if you have an AWS credentials file.

onprema commented 4 years ago

What about the use case where you have an "EKS admin" and users can create their own clusters? As an admin, I don't want to be locked out of the clusters, and I don't want to have to tell each user to update the aws-auth configmap as @whereisaaron suggests. Is there a way I can incorporate the ability for admin users to have access to the cluster by default? (btw, users will create their clusters by passing in a yaml config, i.e.: eksctl create cluster foo -f config.yaml)
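(One approach - a sketch assuming a recent eksctl and a shared admin role, neither of which is from this thread - is to have each creator add an identity mapping right after creation:)

eksctl create iamidentitymapping \
  --cluster foo \
  --arn arn:aws:iam::111122223333:role/eks-admins \
  --group system:masters \
  --username admin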

OscarCode9 commented 4 years ago

I had the same issue, and I solved it by setting the aws_access_key_id and aws_secret_access_key of the user who created the cluster on AWS (in my case, the root user), but I put them in a new profile in .aws/credentials.

for example new profile:

[oscarcode]
aws_access_key_id = XXXX
aws_secret_access_key = XXXX
region = us-east-2

So my kubernetes config has:

exec:
  apiVersion: client.authentication.k8s.io/v1alpha1
  args:

kflavin commented 4 years ago

So in the event that you are not the cluster creator, you are out of luck getting access?

michaelday008 commented 4 years ago

You need to map IAM users or roles into the cluster using the aws-auth ConfigMap. This is done automatically for the user who creates the cluster.

https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html

Here is a script example for adding a role:

https://eksworkshop.com/codepipeline/configmap/

link does not work:

<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>C3E07E0A4D665FA7</RequestId>
<HostId>
9FUa2BJaw75T3fcUKA8OVv85u/tqknezUsI/AiUQ/YFkNhXNKsnQAMR41MfO19wWNEqfq6w/2ug=
</HostId>
</Error>
tabern commented 4 years ago

There are instructions for fixing this issue in the EKS docs and the customer support blog as well.

Vinay-Venkatesh commented 4 years ago

This happens when the cluster is created by user A and you try accessing the cluster using user B's credentials. @OscarCode9 has explained it perfectly.

tobisanya commented 4 years ago

You need to map IAM users or roles into the cluster using the aws-auth ConfigMap. This is done automatically for the user who creates the cluster.

https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html

Here is a script example for adding a role:

https://eksworkshop.com/codepipeline/configmap/

Looks like the link to the script example has been updated to https://eksworkshop.com/intermediate/220_codepipeline/configmap/

Pranjal-sopho commented 4 years ago

Found the issue with help from AWS support - it appears the aws-iam-authenticator wasn't picking up the credentials properly from the path

Manually running

export AWS_ACCESS_KEY_ID=KEY
export AWS_SECRET_ACCESS_KEY=SECRET-KEY
aws-iam-authenticator token -i cluster-name

Then pulling out the token and running

aws-iam-authenticator verify -t k8s-aws-v1.really_long_token -i cluster-name

to make sure its all working

Oddly the aws-iam-authenticator did give me a token - I have no idea to what...

worked for me...thanks a lot

lihonosov commented 4 years ago

How do I provide access to other users and roles after cluster creation? https://www.youtube.com/watch?time_continue=3&v=97n9vWV3VcU

aedcparnie commented 4 years ago

Unauthorized Error in Kubectl after modifying aws-auth configMap

I am not sure, but I think I messed up the aws-auth configmap. After modifying it, I cannot find a way to authenticate again. Has anyone encountered the same problem and found a solution?

I tried to assume the EKS Cluster role and use the role in the kubeconfig but no luck.
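(The usual recovery, sketched here assuming the credentials of the IAM identity that created the cluster are still available in a profile - that identity keeps access regardless of what is in aws-auth:)

aws eks update-kubeconfig --name my-cluster --profile cluster-creator
kubectl -n kube-system edit configmap/aws-auth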

Jean-Baptiste-Lasselle commented 4 years ago

Hi all, thank you so much for sharing all this info (of course I have the same issue here), and:

aws sts get-caller-identity | jq -r .Arn

Ok, now here's how you can make your situation clearer:

export RICKY_S_CREATED_EKS_CLUSTER_NAME=the-good-life-cluster
# AWS REGION where Ricky created his cluster
export AWS_REGION=eu-west-1
export BOBBY_S_BOURNE_ID=$(aws sts get-caller-identity | jq -r .Arn)
# So now, Bobby wants to kubectl into the cluster Ricky created
# So Bobby does this:
aws eks update-kubeconfig --name ${RICKY_S_CREATED_EKS_CLUSTER_NAME} --region ${AWS_REGION}
# and that does not fire up any error, so Bobby's happy and thinks he can
kubectl get all
# Ouch, Bobby's now in dismay: he gets a "error: You must be logged in to the server (Unauthorized)"!
# Okay, Bobby now runs this:
aws eks update-kubeconfig --name ${RICKY_S_CREATED_EKS_CLUSTER_NAME} --region ${AWS_REGION} --role-arn ${BOBBY_S_BOURNE_ID}

kubectl get all

# And there you go: now Bobby gets an error that is pretty explicit. He now knows how to test
# whether or not he can assume a role... and he smiles, because what he just did was try to assume his own ARN!
# Got it: Bobby should assume the role of Ricky, this way:
export RICKY_S_BOURNE_IDENTITY=...  # Ricky will give you that one (a role ARN Bobby is allowed to assume)

aws eks update-kubeconfig --name ${RICKY_S_CREATED_EKS_CLUSTER_NAME} --region ${AWS_REGION} --role-arn ${RICKY_S_BOURNE_IDENTITY}

I'll be glad to discuss this with anyone, and I'll give feedback when I have finished solving this issue.

Note :

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-XXXXXXXXXXX
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
# Don't EVER touch what is above: when you retrieve the [aws-auth] ConfigMap from your EKS cluster,
# this section will already be there, with values very specific to your cluster, and
# most importantly your cluster nodes' AWS IAM Role ARN.
# So below, we add mapped users.
# But what we want is to add a role, not a specific user (for easier user management), so
# let's do it like they did it at AWS for the cluster nodes' IAM Role, but with
# groups such as admin and ops-user below:
    - rolearn: WELL_YOU_KNOW_THE_ARN_OF_THE_ROLE_U_JUST_CREATED
      username: bobby
      groups:
        # bobby needs the system:masters group to hit the K8s API with kubectl, doesn't he? Sure he does.
        - system:masters
  mapUsers: |
    - userarn: arn:aws:iam::555555555555:user/admin
      username: admin
      groups:
        - system:masters
    - userarn: arn:aws:iam::111122223333:user/ops-user
      username: ops-user
      groups:
        - system:masters

A typical super-admin / many-devops setup, only here it is just two users; found at https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html

aws eks update-kubeconfig --name ${RICKY_S_CREATED_EKS_CLUSTER_NAME} --region ${AWS_REGION} --role-arn ${ARN_OF_THAT_NEW_ROLE_YOU_CREATED}

So: roles only, no specific users (except a few for senior devops, just in case).

More fine-grained permissions now.

Refs.:

(See my aws-auth ConfigMap below)

jbl@poste-devops-jbl-16gbram:~/gravitee-init-std-op$ kubectl get configmap/aws-auth --namespace kube-system
NAME       DATA   AGE
aws-auth   1      19d
jbl@poste-devops-jbl-16gbram:~/gravitee-init-std-op$ kubectl describe configmap/aws-auth --namespace kube-system
Name:         aws-auth
Namespace:    kube-system
Labels:       app.kubernetes.io/managed-by=pulumi
Annotations:  
Data
====
mapRoles:
----
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::XXXXXXXXXX:role/my-cluster-gateway-profile-role
  username: system:node:{{EC2PrivateDNSName}}
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::XXXXXXXXXX:role/my-cluster-front-profile-role
  username: system:node:{{EC2PrivateDNSName}}
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::XXXXXXXXXX:role/my-cluster-back-profile-role
  username: system:node:{{EC2PrivateDNSName}}
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::XXXXXXXXXX:role/my-cluster-system-profile-role
  username: system:node:{{EC2PrivateDNSName}}

Events:  <none>

(XXXXXXXXXX is me obfuscating a value that will be different for you anyway)

Final update : success

Alright, what you can read above has just been tested and works, so help yourselves (and can we close the issue? @EppO @mortent @Atoms @flaper87 @joonas)

Update: I beg your pardon - the solution I gave does not answer the original issue, which was the case of a user trying to kubectl against a cluster he created himself.

ProteanCode commented 4 years ago

Got this today, and the cause & solution were different.

If you created the cluster as some user, but not as a role (you can switch to roles in the AWS console from the IAM roles panel), and your kube config was created using the --role-arn parameter, i.e.

aws eks --region eu-central-1 update-kubeconfig --name my-cluster-name --role-arn arn:aws:iam::111222333444:role/my-eks-cluster-role

and you got this error message, then just remove the --role-arn parameter.

My understanding is that you can bind users to a role so they can perform operations on that specific cluster, but the user for some reason (maybe a missing entry in Trusted entities) was not bound to the role at the cluster creation phase. This is not an error, since I suppose I could add myself to my cluster role and this would work fine.

Adding users to roles is probably there: https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
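(A missing entry in "Trusted entities" would mean the role's trust policy doesn't let the user assume it; a minimal sketch of a trust policy allowing one specific user - the ARN is a placeholder:)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111222333444:user/my-user" },
      "Action": "sts:AssumeRole"
    }
  ]
}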

Aside from that - it's very misleading when you log in as a user with the AdministratorAccess policy (basically Allow *) and there is no assumption of the cluster role.

TL;DR: remove --role-arn

Jean-Baptiste-Lasselle commented 4 years ago

Hi @ProteanCode, actually:

# see https://aws.amazon.com/premiumsupport/knowledge-center/eks-iam-permissions-namespaces/
aws sts assume-role --role-arn arn:aws:iam::yourAccountID:role/yourIAMRoleName --role-session-name abcde

Jean-Baptiste-Lasselle commented 4 years ago

@ProteanCode nevertheless I will test that again, because the question is: why would any AWS IAM user see the cluster when they run aws eks list-clusters?

ProteanCode commented 4 years ago

@ProteanCode nevertheless I will test that again, because the question is: why would any AWS IAM user see the cluster when they run aws eks list-clusters?

  • Well, that's the case. If I create the IAM user, being authenticated to AWS IAM with my own personal user:

    • then I have to securely "send" my team members their AWS ~/.aws/credentials file.
    • Best way to secure that is to discuss: one thing we did with "Ricky" is that I GPG-encrypted the file, sent the file to him on Slack, and sent him the link to my GPG public key https://keybase.io/jblasselle : that way he knows it was actually me who sent it.
    • I am thinking of HashiCorp Vault to do a lot better: that's a secret-management case, and I think HashiCorp Vault definitely is the good way to go - what do you think?

I am a 90% dev, 10% ops, and also the only user in that (private) project so my way of thinking is not really team-oriented.

I followed the AWS guideline to create a separate Administrators IAM group & user for any non-root related operation (which is totally fine), but somewhere in their guides they wrote to create the kube config with --role-arn so I blindly followed it without understanding the consequences.

Since my account was never bound to the cluster role, kubectl told me that I am unauthorized even though I am the owner of the cluster. This is neither good nor bad, but it can certainly be misleading for a single developer.

I suppose most of the time people assign resources to roles, and then users to roles, for authorization simplicity. You are totally right in what you wrote, but since we can assign a user to a role there would be no need to share any aws credentials.

I will for sure reorganize the groups and roles in my project to increase security. Currently I am writing an API that will handle scaling the EKS nodes when an external customer makes a recurring purchase, so I am thinking about a separate account for the shopping backend to operate on.

asingh014 commented 4 years ago

Please try upgrading your kubectl version to > 1.18.1 and giving it a try (we are using AAD to manage access for users; when creating the AKS cluster, only our admin context was able to execute kubectl commands, but once we upgraded the kubectl our users with cluster-admin access were running, they could communicate with the cluster successfully).

Jean-Baptiste-Lasselle commented 4 years ago

I will for sure reorganize the groups and roles in my project to increase security. Currently I am writing an API that will handle scaling the EKS nodes when an external customer makes a recurring purchase, so I am thinking about a separate account for the shopping backend to operate on.

Hi @ProteanCode, interesting project. There are many autoscalers that do exactly what you describe; e.g. you could have a look at kubeone. I'd tell you that "someone (an IAM user)" will conduct those scaling operations on behalf of the human user. I'd call that someone a robot. Think of it all like this: you are alone as a human, but you have a whole team of robots, and you are their boss. You will not talk to all of them, so you will delegate the role to one robot, the boss of all robots. The approach I describe is a very basic one, and my best advice is to look at the OIDC / AWS IAM integration. With that team you will be able to track who did what, when, where, and why. Accountability.

Thank you for your answer, and good luck.

Jean-Baptiste-Lasselle commented 4 years ago

Please try upgrading your kubectl version to > 1.18.1 and giving it a try (we are using AAD to manage access for users; when creating the AKS cluster, only our admin context was able to execute kubectl commands, but once we upgraded the kubectl our users with cluster-admin access were running, they could communicate with the cluster successfully).

That's because your AAD does this for you, and you do not "see" it. Did you not give permissions to team members in AAD before upgrading? Yes, you did. @ProteanCode does not use any IAM (Identity and Access Management) solution to do all this; as he explained, he (thinks he) has got just one user (himself). Absolutely nothing to do with upgrading the Kubernetes version.

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 3 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 3 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/aws-iam-authenticator/issues/174#issuecomment-766151541):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.