aws-controllers-k8s / community

AWS Controllers for Kubernetes (ACK) is a project enabling you to manage AWS services from Kubernetes
https://aws-controllers-k8s.github.io/community/
Apache License 2.0

Unable to modify aws-auth configmap for cluster created using EKS controller #1828

Open tomitesh opened 1 year ago

tomitesh commented 1 year ago

Describe the bug
We have established a control cluster using Terraform, which includes an active EKS controller. Additionally, we have configured IRSA (IAM Roles for Service Accounts) and use a service account with the EKS controller.

When we create a new cluster (testcluster) by generating an eks controller yaml file within the control cluster, the process successfully creates a new cluster. It is worth mentioning that I have also created nodegroups, roles, add-ons, and other related components, but these details are not pertinent to the current issue focused solely on the cluster.

By default, eks provides cluster access to the creating identity, in this case, the IRSA service account.

Could you please provide guidance on how to modify the aws-auth file immediately after creating an eks cluster using the eks controller, to grant cluster access to another user or role (specifically, the devops user/role used for logging into the AWS console)?

Steps to reproduce

  1. Create a cluster (control) and install the EKS controller with IRSA (https://aws-controllers-k8s.github.io/community/docs/user-docs/irsa/).
  2. Create a test cluster (testcluster) using an EKS controller YAML (let me know if you need a sample YAML; I have not shared one as I feel it's not important here).
  3. Try to modify the aws-auth configmap using kubectl (not sure which profile to use for the kubeconfig).
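For reference, step 2 typically involves applying a `Cluster` manifest to the control cluster. The sketch below is a minimal, hypothetical example (the role ARN, subnet IDs, and version are placeholders, not values from this issue):

```yaml
# Hypothetical sketch of a testcluster manifest for the ACK EKS controller.
# All ARNs and subnet IDs below are placeholders.
apiVersion: eks.services.k8s.aws/v1alpha1
kind: Cluster
metadata:
  name: testcluster
spec:
  name: testcluster
  roleARN: arn:aws:iam::111122223333:role/testcluster-cluster-role
  version: "1.27"
  resourcesVPCConfig:
    subnetIDs:
      - subnet-0example1
      - subnet-0example2
```

Applying this with `kubectl apply -f cluster.yaml` in the control cluster causes the controller, via its IRSA role, to create the EKS cluster, which is why that role becomes the cluster-creating identity.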

Expected outcome
I want to know how we can grant additional access to the cluster immediately after creating an EKS cluster using the EKS controller. I can't use kubectl in this scenario to update aws-auth.

Environment dev

RedbackThomson commented 1 year ago

try to modify aws-auth configmap using kubectl (not sure which profile to use for kubeconfig).

EKS currently locks down a new cluster so that it is accessible only by the IAM user/role that created it. Whichever role you've configured the eks-controller to assume is the one you need to use: if you're using cross-account resource management, it's the role assumed in the target account; if you're using IRSA, it's the role attached to the service account; and if you're using hard-coded credentials, it's the role associated with those.
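Concretely, assuming the IRSA setup from this issue, one way to get working kubectl access is to build a kubeconfig entry that assumes the controller's role and then edit aws-auth with it. This is a sketch; the role ARN and region are placeholders, and your own credentials must be allowed to assume that role:

```shell
# Sketch: generate a kubeconfig entry for testcluster that assumes the same
# IAM role the ACK EKS controller used to create the cluster.
# The role ARN and region below are placeholders.
aws eks update-kubeconfig \
  --name testcluster \
  --region us-east-1 \
  --role-arn arn:aws:iam::111122223333:role/ack-eks-controller-role

# With that kubeconfig active, aws-auth can be edited as usual:
kubectl -n kube-system edit configmap aws-auth
```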

tomitesh commented 1 year ago

That's great. Thanks for your reply and time on this request.

Would it be a good idea to support an additional property, e.g. AdditionalRoles [], as part of the EKS Cluster definition?

gecube commented 1 year ago

Totally agree! We also need a convenient way to manage aws-auth. For now I was forced to pin the role the cluster was created with and allow myself to assume it; only then could I fix aws-auth, which is far from automated. My goal is to create everything in the AWS account with ACK controllers in a fully automatic way.

ack-bot commented 8 months ago

Issues go stale after 180d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 60d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle stale

mikestef9 commented 8 months ago

Cluster access management controls are now the recommended replacement for the aws-auth config map, and are fully supported by the EKS ACK controller:

https://aws.amazon.com/blogs/containers/a-deep-dive-into-simplified-amazon-eks-access-management-controls/
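With access entries, granting a devops role admin access could look roughly like the manifest below. This is a sketch based on my understanding of the controller's AccessEntry resource; the field names should be checked against the installed CRD, the ARNs are placeholders, and the cluster must be created with an authentication mode that enables the access-entry API (e.g. API_AND_CONFIG_MAP):

```yaml
# Hypothetical sketch: grant a devops IAM role cluster-admin access via an
# EKS access entry managed by ACK. ARNs below are placeholders.
apiVersion: eks.services.k8s.aws/v1alpha1
kind: AccessEntry
metadata:
  name: devops-admin
spec:
  clusterName: testcluster
  principalARN: arn:aws:iam::111122223333:role/devops
  accessPolicies:
    - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy
      accessScope:
        type: cluster
```

Because this is a regular ACK resource, it can be applied from the control cluster right after the Cluster resource, which addresses the original automation goal without touching aws-auth at all.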

gecube commented 8 months ago

@mikestef9 Hi! Thanks, that is very interesting information. I will try the new approach on new clusters.

ack-bot commented 1 month ago

Issues go stale after 180d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 60d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Provide feedback via https://github.com/aws-controllers-k8s/community. /lifecycle stale

gecube commented 1 month ago

/remove-lifecycle stale