mikestef9 opened 2 years ago
Any other AWS services you've used that solve similar problems in an elegant way?
From the idea, it sounds similar to the AWS Firewall Manager.
proposes worker nodes in separate account from control plane, but this is not on our radar
Then it's a pretty much useless proposal. Thanks.
@pkit can you elaborate on use case/requirements for that feature? This issue is not meant to be any specific proposal, instead looking to gather feedback
@mikestef9 we would like to have the ability to run customer workloads in their own AWS accounts. There are two options for that:
@pkit out of curiosity, how would you coordinate cluster upgrades in scenario #2? As your users/teams (accounts) expand, I assume the complexity of coordinating a control plane upgrade across all teams using the same cluster may increase. For example, some teams may require a new CP version for a new feature while other teams may need to stay on an older version for compatibility issues with the apps they are running on their nodegroup(s). And yes, you would have the same potential complexity with highly consolidated multi-tenant clusters in the same account.
Just wondering how you (are planning to?) approach this in your org. Thanks!
@mreferre Not really, it's not an internal use case. And for external users all upgrades are centralized, i.e. users mainly run their applications but are not able to manage their own Kubernetes objects directly. Think of something akin to Knative, where the API is in our control.
Interesting. Thanks @pkit. Out of curiosity, what is the use case for distributing nodegroups across different accounts? The scenario you are describing seems on one hand very "abstracted" to the user (in a good way), but on the other hand the deployment requirement (centralized CP / multiple nodegroups in multiple accounts) seems not to be as "abstracted".
@mreferre Just a question of compute visibility and governance. Some organizations do not want to run their workloads on compute they don't own. It is also partly a question of security, as GPU workloads, for example, cannot be sandboxed tightly enough.
My goal is to minimize the number of clusters I need to manage (especially with respect to upgrades and add-ons) and minimize cost, while having many "workload accounts" as recommended by Best Practices for Organizational Units with AWS Organizations.
We would like the ability to vend Kubernetes clusters to internal customers and give each of those customers access to the Kubernetes APIs through platform tooling that a centralized platform team controls. Some of our teams would require access to the raw Kubernetes APIs, but most would not. We would also manage the upgrades and maintenance of those vended clusters from a central operations team. The workloads on the vended clusters would include common operational and security tooling as well as the customer workloads. The vended clusters would live in accounts segmented by workload (the product they are running) and environment (dev, stage, prod). We would also like to see a centralized solution, similar to CAPI, Crossplane, or ACK, that can manage the lifecycle of the clusters from some central "control plane" cluster managed by our operations and platform teams.
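A minimal sketch of the cross-account mechanics behind that kind of vending flow, assuming a central platform account that can assume an administrative role in each workload account (the role names, cluster role ARN, and subnet IDs below are placeholders, not anyone's real setup):

```python
# Hypothetical "cluster vending" step: a central platform account assumes an
# administrative role in a workload account and creates an EKS cluster there.
# PlatformClusterAdmin, eksClusterRole, and the subnet IDs are placeholders.
import boto3

def vend_cluster(workload_account_id: str, cluster_name: str, version: str = "1.27"):
    creds = boto3.client("sts").assume_role(
        RoleArn=f"arn:aws:iam::{workload_account_id}:role/PlatformClusterAdmin",
        RoleSessionName="cluster-vending",
    )["Credentials"]

    # EKS client scoped to the workload account via the assumed-role credentials
    eks = boto3.client(
        "eks",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

    return eks.create_cluster(
        name=cluster_name,
        version=version,
        roleArn=f"arn:aws:iam::{workload_account_id}:role/eksClusterRole",
        resourcesVpcConfig={"subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"]},
    )
```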
Tell us about your request
Looking to start a discussion and gather feedback to help guide our strategy for helping customers administer clusters that span multiple accounts.
Which service(s) is this request for? EKS
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard? AWS encourages users to use multiple AWS accounts as a way to reduce blast radius and help organizations manage permissions.
Kubernetes provides its own mechanism for isolation of compute into namespaces.
How can we make it easier for EKS users to manage clusters across AWS accounts?
Questions we have
Do you use AWS Organizations? If so, do you have a single top-level AWS organization or multiple?
What is your multi-account strategy with EKS today?
Do you have a tagging strategy for your EKS clusters?
Open ended: What would be your ideal multi-account setup, and what features could EKS build to help? #307 proposes running worker nodes in a separate account from the control plane, but this is not on our radar. We still believe the entire cluster should live in the same account, and we can provide tooling to help manage and view clusters across accounts.
Additional context
We see two distinct problem areas to solve:
Lifecycle management
This includes creating and managing, from a central account, EKS clusters that live in other accounts. Note that AWS Controllers for Kubernetes (ACK) has a feature that addresses this requirement today.
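As a rough illustration of the ACK route mentioned above, the sketch below applies an ACK Cluster custom resource to a central management cluster, which the controller then reconciles into an EKS cluster in another account (ACK's cross-account support maps namespaces to owning accounts). The group/version and spec field names follow my recollection of the ACK EKS controller and should be verified against the current CRDs; the namespace, ARNs, and subnet IDs are placeholders:

```python
# Hedged sketch: create an ACK "Cluster" custom resource on a central
# management cluster; the ACK EKS controller reconciles the actual cluster.
# Field names, namespace, ARNs, and subnet IDs are illustrative, not verified.
from kubernetes import client, config

cluster_manifest = {
    "apiVersion": "eks.services.k8s.aws/v1alpha1",
    "kind": "Cluster",
    "metadata": {"name": "team-a-dev", "namespace": "team-a"},
    "spec": {
        "name": "team-a-dev",
        "version": "1.27",
        "roleARN": "arn:aws:iam::111122223333:role/eksClusterRole",
        "resourcesVPCConfig": {"subnetIDs": ["subnet-aaaa1111", "subnet-bbbb2222"]},
    },
}

config.load_kube_config()  # kubeconfig pointed at the central management cluster
client.CustomObjectsApi().create_namespaced_custom_object(
    group="eks.services.k8s.aws",
    version="v1alpha1",
    namespace="team-a",
    plural="clusters",
    body=cluster_manifest,
)
```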
Visibility
List all clusters across accounts in the EKS console from a central account. An example question to answer here is "Which clusters that I'm responsible for are running supported versions of Kubernetes?"
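For the visibility side, a hedged sketch of answering that question from a central account today, assuming every member account exposes a read-only audit role (the role name is a placeholder):

```python
# Hedged sketch of the visibility question: from a central account, iterate the
# organization's member accounts, assume a read-only audit role in each (the
# role name is a placeholder), and report every EKS cluster's version.
import boto3

AUDIT_ROLE = "OrgEKSAudit"  # assumed to exist in every member account

def cluster_versions_by_account(region: str = "us-east-1") -> dict:
    org = boto3.client("organizations")
    sts = boto3.client("sts")
    report = {}

    for page in org.get_paginator("list_accounts").paginate():
        for account in page["Accounts"]:
            if account["Status"] != "ACTIVE":
                continue
            creds = sts.assume_role(
                RoleArn=f"arn:aws:iam::{account['Id']}:role/{AUDIT_ROLE}",
                RoleSessionName="eks-version-audit",
            )["Credentials"]
            eks = boto3.client(
                "eks",
                region_name=region,
                aws_access_key_id=creds["AccessKeyId"],
                aws_secret_access_key=creds["SecretAccessKey"],
                aws_session_token=creds["SessionToken"],
            )
            report[account["Id"]] = [
                (name, eks.describe_cluster(name=name)["cluster"]["version"])
                for name in eks.list_clusters()["clusters"]
            ]
    return report
```

A first-party cross-account view in the EKS console would replace this kind of per-account glue code.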
Any other AWS services you've used that solve similar problems in an elegant way?