faiq opened this issue 2 years ago
In looking at this we would need to modify this to use the new role as well https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/93897be636031fac812765d95a018fe61dbd689f/pkg/cloud/services/iamauth/reconcile.go#L51
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
From office hours 01/01/23: Makes sense to scope down permissions of non-EKS clusters.
/triage accepted
/priority important-longterm
/help
@dlipovetsky: This request has been marked as needing help from a contributor.
Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
The hard-coded instance profile in cluster-template-eks-machine-deployment-only.yaml
introduces several issues:
As with non-EKS clusters, the instance profile should not be hard-coded but should come from user input, so that the user stays in charge of which instance profiles are assigned to different clusters.
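To illustrate the user-input point, the template could take the profile from a clusterctl substitution variable with a default, rather than baking a fixed value into the YAML. A minimal sketch, assuming the CAPA v1beta2 API; the variable name AWS_NODE_IAM_INSTANCE_PROFILE is hypothetical:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSMachineTemplate
metadata:
  name: my-eks-workers  # placeholder name
spec:
  template:
    spec:
      instanceType: t3.medium
      # User-supplied (with a default) instead of hard-coded in the template;
      # the variable name is an assumption for illustration only:
      iamInstanceProfile: "${AWS_NODE_IAM_INSTANCE_PROFILE:=nodes.cluster-api-provider-aws.sigs.k8s.io}"
```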
In looking at this we would need to modify this to use the new role as well
I think there are 2 issues here: the prefix (arn:aws:iam) is hard-coded, but the prefix is not global, and does not work in some regions. (There was recent work to address this in https://github.com/kubernetes-sigs/cluster-api-provider-aws/pull/3926, but this work had to be reverted in https://github.com/kubernetes-sigs/cluster-api-provider-aws/pull/3982.)

Thanks @dlipovetsky, I was accidentally referencing a test. Thank you for clarifying this.
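On the partition point: in CloudFormation, the arn:aws: prefix can be avoided with the AWS::Partition pseudo parameter, which resolves to aws, aws-cn, or aws-us-gov depending on the region. A sketch of one possible approach (not what the reverted PRs did):

```yaml
# Partition-aware managed policy ARN, instead of a
# hard-coded "arn:aws:iam::aws:policy/..." string:
ManagedPolicyArns:
  - !Sub "arn:${AWS::Partition}:iam::aws:policy/AmazonEKSWorkerNodePolicy"
```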
This issue has not been updated in over 1 year, and should be re-triaged.

You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted
/kind feature
Describe the solution you'd like [A clear and concise description of what you want to happen.]
As an EKS cluster operator, I'd like to have a specific set of roles for my AWS worker nodes to use.
Currently the role AWSIAMRoleNodes is shared by workers in both regular clusters and EKS clusters.

My proposal is to create a new Instance Profile and Role specific to nodes that are part of an EKS cluster, and to remove the ManagedPolicyArns in the role pasted above. This would make AWSIAMRoleNodes the following:
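The role snippet from the original issue is not preserved in this copy; as a hedged sketch only, a trimmed AWSIAMRoleNodes in the clusterawsadm CloudFormation output might look roughly like this (the resource and policy names here are assumptions, not the actual template):

```yaml
AWSIAMRoleNodes:
  Type: AWS::IAM::Role
  Properties:
    RoleName: nodes.cluster-api-provider-aws.sigs.k8s.io
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: ec2.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      # EKS-specific managed policies removed; they would move to
      # the new eks-nodes role proposed below. The remaining
      # reference is a placeholder for the generic node policy.
      - !Ref AWSIAMManagedPolicyNodes  # hypothetical resource name
```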
and a new role called eks-nodes.cluster-api-provider-aws.sigs.k8s.io, which would look like this:

Along with a new InstanceProfile to use that Role, as follows:
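The original role and instance profile snippets are also not preserved here. As a sketch, the new resources could attach the standard EKS worker-node managed policies (AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy, AmazonEC2ContainerRegistryReadOnly); the CloudFormation resource names are assumptions:

```yaml
AWSIAMRoleEKSNodes:
  Type: AWS::IAM::Role
  Properties:
    RoleName: eks-nodes.cluster-api-provider-aws.sigs.k8s.io
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: ec2.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      # Standard EKS worker-node policies, partition-aware:
      - !Sub "arn:${AWS::Partition}:iam::aws:policy/AmazonEKSWorkerNodePolicy"
      - !Sub "arn:${AWS::Partition}:iam::aws:policy/AmazonEKS_CNI_Policy"
      - !Sub "arn:${AWS::Partition}:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"

AWSIAMInstanceProfileEKSNodes:
  Type: AWS::IAM::InstanceProfile
  Properties:
    InstanceProfileName: eks-nodes.cluster-api-provider-aws.sigs.k8s.io
    Roles:
      - !Ref AWSIAMRoleEKSNodes
```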
I think this will warrant code changes in clusterawsadm, as well as in the tests, to use this newly created role + instance profile: https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/eb5da5870f9147624430de1b67e55843991ed7d0/test/e2e/data/eks/cluster-template-eks-machine-deployment-only.yaml#L32

Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]
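In the referenced e2e template, the machine template would then point at the new profile rather than the current hard-coded one. A sketch with surrounding fields abbreviated:

```yaml
kind: AWSMachineTemplate
spec:
  template:
    spec:
      # Previously a hard-coded nodes profile; switched to the
      # proposed EKS-specific instance profile:
      iamInstanceProfile: eks-nodes.cluster-api-provider-aws.sigs.k8s.io
```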
Environment:
- Kubernetes version: (use kubectl version):
- OS (e.g. from /etc/os-release):