tomitesh opened this issue 1 year ago
Trying to modify the aws-auth ConfigMap using kubectl, but not sure which profile to use for the kubeconfig.
EKS currently locks the cluster to be accessible only by the IAM user/role that created the cluster. Whichever role you've configured to be assumed by the eks-controller is the one you need to use. If you're using cross-account resource management, it's the role you're assuming in the target account. If you're using IRSA, it's the role attached to the service account. Or if you're using hard-coded credentials, it's the role associated with those.
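Once you've identified that role, a kubeconfig that assumes it can be generated with `aws eks update-kubeconfig` and its `--role-arn` flag. A minimal sketch — the cluster name and role ARN below are placeholders, not values from this thread, and the command is printed rather than executed since running it requires AWS credentials:

```shell
# Placeholders: substitute your cluster name and the role the
# eks-controller actually assumed when it created the cluster.
CLUSTER_NAME="testcluster"
CONTROLLER_ROLE_ARN="arn:aws:iam::111122223333:role/ack-eks-controller-role"

# Build the update-kubeconfig command that authenticates as the
# creating role; echo it here instead of running it.
CMD="aws eks update-kubeconfig --name ${CLUSTER_NAME} --role-arn ${CONTROLLER_ROLE_ARN}"
echo "${CMD}"
```

With that kubeconfig in place, kubectl requests are signed as the creating role, which is the only principal EKS grants access to by default.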
That's great. Thanks for your reply and time on this request.
Would it be a good idea to specify an additional property, e.g. AdditionalRoles [], as part of the EKS definition?
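As a sketch, such a property might look like the following on the ACK Cluster resource. To be clear, the additionalRoles field below is the hypothetical proposal from this comment, not a field that exists in the current CRD:

```yaml
apiVersion: eks.services.k8s.aws/v1alpha1
kind: Cluster
metadata:
  name: testcluster
spec:
  name: testcluster
  # Hypothetical field proposed in this issue; NOT part of the
  # current ACK Cluster CRD. The ARN is a placeholder.
  additionalRoles:
    - arn:aws:iam::111122223333:role/DevOpsRole
```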
Totally agree! We also need a convenient way to manage aws-auth. For now I was forced to find the role the cluster was created with and allow myself to assume it. Then I was able to fix aws-auth, but that is far from automation. My goal is to create everything in the AWS account with ACK controllers in a fully automatic way.
Issues go stale after 180d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 60d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/aws-controllers-k8s/community.
/lifecycle stale
Cluster access management controls are now the recommended replacement for the aws-auth ConfigMap, and are fully supported by the EKS ACK controller.
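With access entries, granting another principal admin access can be expressed declaratively instead of patching aws-auth. A minimal sketch, assuming the ACK eks-controller's AccessEntry resource — exact field names should be checked against the controller's CRD reference, and the ARNs are placeholders:

```yaml
apiVersion: eks.services.k8s.aws/v1alpha1
kind: AccessEntry
metadata:
  name: devops-access
spec:
  # Placeholder cluster name and principal ARN.
  clusterName: testcluster
  principalARN: arn:aws:iam::111122223333:role/DevOpsRole
  accessPolicies:
    # AmazonEKSClusterAdminPolicy grants cluster-wide admin,
    # roughly equivalent to system:masters via aws-auth.
    - policyARN: arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy
      accessScope:
        type: cluster
```

Note the cluster must be using an authentication mode that enables access entries (API or API_AND_CONFIG_MAP).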
@mikestef9 Hi! Thanks! That is very interesting information. I will try the new approach on new clusters.
/remove-lifecycle stale
Describe the bug
We have established a control cluster using Terraform, which includes an active eks-controller. Additionally, we have configured IRSA (IAM Roles for Service Accounts) and use a service account with the eks-controller.
When we create a new cluster (testcluster) by applying an eks-controller YAML manifest in the control cluster, the process successfully creates the new cluster. I have also created nodegroups, roles, add-ons, and other related components, but those details are not pertinent to the current issue, which is focused solely on the cluster.
By default, eks provides cluster access to the creating identity, in this case, the IRSA service account.
Could you please provide guidance on how to modify the aws-auth ConfigMap immediately after creating an EKS cluster using the eks-controller, to grant cluster access to another user or role (specifically, the devops user/role used for logging into the AWS console)?
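For reference, granting that devops role access via aws-auth amounts to adding a mapRoles entry to the ConfigMap in kube-system. A sketch, with a placeholder role ARN (system:masters grants full cluster admin):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Placeholder ARN: substitute the devops role to be granted access.
    - rolearn: arn:aws:iam::111122223333:role/DevOpsRole
      username: devops
      groups:
        - system:masters
```

The difficulty raised in this issue is that applying this entry still requires kubectl access as the creating identity in the first place.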
Steps to reproduce
Expected outcome
Want to know how we can grant additional access to the cluster immediately after creating an EKS cluster using the eks-controller. I can't use kubectl in this scenario to update aws-auth.
Environment
dev