Can the agent run in a pod? It'd be nice to enable it on demand rather than have it permanently enabled.

I will need to read up on this, as it's rather new to me. With our current model, we avoid installing software during node bootstrap at all costs, and nodes are effectively treated as immutable. However, it is important to provide an access facility that works in all cases, and SSH currently serves this purpose (it needs to be explicitly enabled during cluster creation, but we are likely to add per-nodegroup SSH in the future). Perhaps the agent could be added to the official AMI (or maybe it's there already?).
We've been installing ssm-agent via user-data in kube-aws and, from my experience, I'd say it isn't an optimal solution.
The first reason that comes to mind is that you'll need more and more binaries to be present on nodes. Enhancing eksctl every time such a request arises won't work long-term for the project maintainers.
Plus, it does slow down node startup.
I would instead suggest documenting the process of (1) adding arbitrary tools to the official AMI using Packer, and (2) specifying the resulting AMI ID via `eksctl create cluster --node-ami <AMI-ID>`.
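A rough sketch of that workflow (the Packer template and AMI ID here are illustrative placeholders, not from this thread):

```console
# Bake extra tools (e.g. ssm-agent) into an image based on the official EKS AMI;
# eks-node.json is a hypothetical Packer template you would write yourself.
$ packer build eks-node.json

# Then point eksctl at the resulting AMI.
$ eksctl create cluster --node-ami ami-0123456789abcdef0
```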
You do need to add an IAM policy for ssm-agent to work. For that, enabling `eksctl create cluster` to accept an IAM role created by the user would work. `eksctl` should not create the IAM role in that case. Instead, it should just create IAM policies and attach them to the precreated role. The attachment can be done entirely in CFN templates.
WDYT?
@mumoshu I very much share the same view. From my perspective, I would rather avoid installing any software at node startup: first of all, it's always subject to download and other I/O errors; secondly, it's not always possible to control the version you get. Finally, it's more code to add to the bootstrap script, and more code to maintain and debug. Additionally, it will be hard for us to support this on multiple different Linux distros (we have AL2 and Ubuntu now, but will probably have more as demand grows, and our bootstrap scripts must be minimal to make it easy to support each of the distros).
> You do need to add an IAM policy for ssm-agent to work.
I would be happy to add `--allow-ssm-agent`, similar to how we have other IAM add-ons at the moment. This would let users experiment more, and explore the option of running the SSM agent as a pod (via a daemonset or otherwise).
The AWS documentation says their AMIs have the agent installed by default. It might be there and just need to be (optionally) started. Have you checked, @mseiwald?
@errordeveloper Completely understood! Let's add `--allow-ssm-agent` then.
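If added, usage would presumably be as simple as the following (hypothetical; as noted further down, the flag that actually landed was named `--temp-node-role-policies`):

```console
# Flag proposed in this thread; it did not ship under this name.
$ eksctl create cluster --allow-ssm-agent
```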
> explore the option of running the SSM agent as a pod (via a daemonset or otherwise).
I've just published my working example of ssm-agent-as-daemonset at https://github.com/mumoshu/kube-ssm-agent
Other than creating a daemonset, it was just a matter of running the command below to grant the IAM permissions required for ssm-agent to work:

```console
$ aws iam attach-role-policy --role-name eksctl-amazing-creature-154580785-NodeInstanceRole-RXNVQC8YTLP7 --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM
```
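As an aside (not from the thread): the node instance role name above is specific to that cluster; one way to look up yours is:

```console
$ aws iam list-roles \
    --query "Roles[?contains(RoleName, 'NodeInstanceRole')].RoleName" \
    --output text
```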
So, I would just add `arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM` to `PolicyARNs` when `--allow-ssm-agent` is provided. Does that sound good to you?
With #411, you should be able to run `eksctl create cluster --node-role-policies arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM`.
@whereisaaron The EKS AMI does not include the SSM agent. From what I see there are three possibilities
`--node-role-policies` helps with the setup 👍 From my PoV, this can be closed.
Please keep in mind that the flag is `--temp-node-role-policies`, as it is a temporary flag that will be moved to the config file in a few releases.
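With the flag name as it actually shipped, the earlier invocation becomes:

```console
$ eksctl create cluster \
    --temp-node-role-policies arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM
```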
@errordeveloper a config file is good and I know @mumoshu prefers that, but IMHO there is also not much wrong with being able to set everything from a million options on the command line too. gcloud tends to do this, and it is actually quite pleasant and fast, and easy to work with, automate, and version.
I'd actually prefer some sort of universal mapping between config and command-line options, either from one of those uber CLI option-mapping libraries or a `--set x.y.z=foo` approach like you get with helm, rather than taking away the command-line option just because there is an equivalent config-file option. It's not critical, but that's my personal preference.
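For reference, the helm pattern being alluded to overrides arbitrary nested config keys from the command line (the chart and key names below are illustrative placeholders):

```console
# Any value in the chart's config tree can be set with --set key.path=value.
$ helm install stable/nginx-ingress --set controller.replicaCount=2
```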
The key point is to add the IAM policy `AmazonSSMManagedInstanceCore` to the node instance profile; AWS will then automatically install the SSM agent on those nodes if they don't have it already. No user-data required.
It has been added with the latest eksctl version, and I can connect to these worker nodes directly via the SSM agent.
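With that policy attached, an SSH-like session can be opened without keys or open ports, along these lines (the instance ID is a placeholder; this also requires the Session Manager plugin for the AWS CLI):

```console
$ aws ssm start-session --target i-0123456789abcdef0
```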
> The key point is to add the IAM policy `AmazonSSMManagedInstanceCore` to the node instance profile; AWS will then automatically install the SSM agent on those nodes if they don't have it already. No user-data required. It has been automatically added with the latest eksctl version, and I can connect to these worker nodes directly via the SSM agent.
Correct. The SSM agent is now baked into the EKS AMIs, so eksctl now adds the `AmazonSSMManagedInstanceCore` policy by default.
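To confirm this on a running cluster, you could list the policies attached to the node role (the role name below is a placeholder; an aside, not from the thread):

```console
$ aws iam list-attached-role-policies \
    --role-name eksctl-my-cluster-nodegroup-ng-1-NodeInstanceRole-XXXXXXXX
```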
Why do you want this feature?
Having the SSM agent installed enables running commands and SSH-like sessions using the new SSM Session Manager. This would remove the need to add SSH public keys to the instance and to open ports.

What feature/behavior/change do you want?
The following changes would have to be made:
- attach the `AmazonEC2RoleforSSM` managed policy to the node instance role (docs)
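To illustrate what this enables once the policy is attached (instance ID and command below are placeholders, not from the issue):

```console
# Run an ad-hoc command on a node via SSM Run Command instead of SSH.
$ aws ssm send-command \
    --instance-ids i-0123456789abcdef0 \
    --document-name "AWS-RunShellScript" \
    --parameters '{"commands":["uptime"]}'
```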