Closed. tsndqst closed this issue 3 years ago.
I am in the same boat, thanks for submitting. I'm currently considering your workaround 2) (open access to entire subnet). Can you explain how your workaround 1) works? Is there an option to define a security group rule based on source EKS cluster?
- Open access to an entire EKS cluster. This results in one EKS cluster per app. Using this method would mean high cost and low pod density.
@ezra-freedman I may not have described that well. I meant that you configure all workers for that cluster to have the same security group when you create them. Doing it this way would avoid targeting specific workloads (pods) to specific workers or worker groups. But you would have to deploy specific workloads to specific clusters.
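The "same security group on every worker" workaround described above could be sketched with an eksctl config along these lines (a hypothetical sketch, not from the thread; the cluster name, region, instance type, and security group ID are all placeholders):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: app-a-cluster        # one cluster per app under this workaround
  region: us-east-1
nodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 2
    securityGroups:
      attachIDs:
        - sg-0123456789abcdef0   # shared SG that, e.g., an RDS instance allows ingress from
```

Because the security group is attached at the node group level, every pod scheduled onto these workers gets that network access, which is why workloads then have to be segregated by cluster rather than by pod.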
Can somebody from AWS guide me as to whether this is something I could submit a PR for (and if so, which repository), or whether this is managed by AWS-proprietary code?
Any chance this will land in Q1 or Q2 2021?
@bsmedberg-xometry Presumably this is Amazon proprietary code for Fargate. Given that it's already supported by both ECS Fargate and now EKS EC2 instances (since https://aws.amazon.com/blogs/containers/introducing-security-groups-for-pods/ ), it seems like most of the code to do it is already written.
@mikestef9 Any update on this?
Hey all,
You can now assign custom security groups to pods running on AWS Fargate. This is available on v1.18 and above clusters, and you need to be running the latest EKS platform version for the corresponding Kubernetes minor version.
One important note to keep in mind - Previously, every Fargate pod got assigned the EKS cluster security group, which ensured the Fargate pod could communicate with the Kubernetes control plane and join the cluster. With custom security groups, you are responsible for ensuring the correct security group rules are opened to enable this communication. The easiest way to accomplish this is to simply specify the cluster security group ID as one of the custom security groups to assign to Fargate pods.
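Putting that note into practice, a `SecurityGroupPolicy` for Fargate pods might look like the following (a sketch with placeholder names and security group IDs; the namespace is assumed to be one selected by a Fargate profile):

```yaml
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: my-app-sg-policy
  namespace: my-app
spec:
  podSelector:
    matchLabels:
      app: my-app
  securityGroups:
    groupIds:
      - sg-aaaaaaaa   # custom SG, e.g. one that an RDS security group allows ingress from
      - sg-bbbbbbbb   # the EKS cluster security group, so the pod can still reach the control plane
```

Listing the cluster security group alongside the custom ones is the "easiest way" mentioned above to keep control-plane communication working.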
I upgraded my cluster to v1.20, on the eks.2 platform version, and yet my Fargate pods have been stuck in a Pending state for 30 minutes.
I allowed the DNS ports, and added the cluster security group to the groupIds of the SecurityGroupPolicy that I'm attaching to the pod.
There are proper security group annotations on the pod too.
```
Name:                 xxxx
Namespace:            xxxxx
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 <none>
Labels:               app=xxxx
                      eks.amazonaws.com/fargate-profile=xxx
                      role=xxx
                      rollouts-pod-template-hash=85846d84b8
Annotations:          CapacityProvisioned: 0.25vCPU 0.5GB
                      Logging: LoggingDisabled: LOGGING_CONFIGMAP_NOT_FOUND
                      fargate.amazonaws.com/pod-sg: sg-xxxx,sg-xxxx,sg-xxxx
                      kubernetes.io/psp: eks.privileged
Status:               Pending
```
Something I may be missing?
It is very difficult to debug this issue; there are no errors anywhere.
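For a Pending Fargate pod like this, a few checks can narrow things down (a hypothetical troubleshooting sketch; the namespace, policy, pod, and cluster names are placeholders, not from the thread):

```sh
# 1) Confirm the policy actually selects the pod and lists the cluster security group:
kubectl -n my-app get securitygrouppolicy my-app-sg-policy -o yaml

# 2) Confirm the pod was assigned the expected groups via its annotation:
kubectl -n my-app get pod my-pod \
  -o jsonpath='{.metadata.annotations.fargate\.amazonaws\.com/pod-sg}'

# 3) Look up the cluster security group ID to compare against that annotation:
aws eks describe-cluster --name my-cluster \
  --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId' --output text

# 4) Inspect the pending pod's events for scheduling or ENI errors:
kubectl -n my-app describe pod my-pod
```

If the `fargate.amazonaws.com/pod-sg` annotation omits the cluster security group, the pod may be unable to reach the control plane, which can leave it stuck without an obvious error.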
Any solution? EKS version 1.19.
@Hunter-Thompson @quickbooks2018 You might be better off opening a support ticket in the AWS console. They will help you quickly.
@mikestef9, it looks like this ticket was closed prematurely, as the solution you offered doesn't actually solve the stated problem. At the moment, Fargate profiles are attached to the security group created during cluster creation, which has open security rules both inbound and outbound.
What they and I are after is the ability to specify which security groups the Fargate profiles are attached to when the profile is created, not on the individual pods.
Tell us about your request Add option to specify custom security groups for Fargate Profiles in EKS.
Which service(s) is this request for? EKS
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard? We control application access to RDS via VPC security groups. Without the option of specifying security groups in Fargate Profiles, we probably could not restrict RDS access down to only those things that need it. For example, there currently does not appear to be a way to utilize option 3 from https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html#Overview.RDSSecurityGroups.Scenarios.
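The "option 3" pattern referenced there boils down to letting the RDS security group accept traffic only from an application security group. A sketch with placeholder group IDs (assuming a PostgreSQL instance on port 5432):

```sh
# Allow database traffic to the RDS security group only from the app's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-rds111111 \
  --protocol tcp --port 5432 \
  --source-group sg-app222222
```

The difficulty described in this issue is that there is no way to put a Fargate profile's pods behind `sg-app222222` in the first place.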
Are you currently working around this issue? Currently we are creating our own EC2 worker node groups in ASGs with specific security groups. This gives us the access control we need but means the pod density is very low on these hosts (the only apps that run on these nodes are those that are allowed to access a specific RDS instance).
There are other workarounds, such as opening access to an entire EKS cluster (one cluster per app, as quoted above), but they are all less than ideal.
Additional context In addition to RDS there are other components that utilize security groups for access control such as ElastiCache and Elasticsearch Service.
Attachments If you think you might have additional information that you'd like to include via an attachment, please do - we'll take a look. (Remember to remove any personally-identifiable information.)