atlassian / escalator

Escalator is a batch or job optimized horizontal autoscaler for Kubernetes
Apache License 2.0
646 stars 58 forks

[Question] How to assume IAM role inside the escalator pod? Getting 403 despite instructions #231

Open FilipSwiatczak opened 11 months ago

FilipSwiatczak commented 11 months ago

Hello, it's a wonderful project and I've almost got it working. Having followed the README instructions in https://github.com/atlassian/escalator/blob/master/docs/deployment/aws/README.md, I have the steps there ticked off.

Given all that, I'm still getting a 403 on the attempt to assume the role:
AccessDenied: User: arn:aws:sts::XXX:assumed-role/eksctl-bitbucketpipelines-nodegro-NodeInstanceRole-XXX is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::XXX:role/bitbucket-pipelines-escalator-role\n\tstatus code: 403

1) Am I missing something? Is the documentation complete?
2) Other sources suggest creating an OIDC provider for the cluster (https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html). I've done that with eksctl and it has no impact on its own.
3) Is there a specific trust relationship required on the IAM role before the escalator pod can assume it, please?

Any pointers would be much appreciated. Thank you!

awprice commented 11 months ago

Thanks for giving Escalator a go @FilipSwiatczak!

Based on the following error:

AccessDenied: User: arn:aws:sts::XXX:assumed-role/eksctl-bitbucketpipelines-nodegro-NodeInstanceRole-XXX is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::XXX:role/bitbucket-pipelines-escalator-role\n\tstatus code: 403

I'd say the trust relationship isn't set up correctly between the two roles to allow eksctl-bitbucketpipelines-nodegro-NodeInstanceRole-XXX to assume bitbucket-pipelines-escalator-role.

Have a look at this page on how to allow a role to assume another role: https://nelson.cloud/aws-iam-allowing-a-role-to-assume-another-role/. It has instructions for allowing a role to be assumed either from the same account or from a different account.

awprice commented 11 months ago

I'd also like to mention that instructions on configuring one role to assume another are going to stay missing from our documentation, as they depend on the configuration of the end user's cluster and AWS accounts and we can't cater for all scenarios.

FilipSwiatczak commented 11 months ago

thanks @awprice, it worked with the following changes:

  1. Run eksctl to create the OIDC provider for the cluster, like: eksctl utils associate-iam-oidc-provider --cluster <cluster-name> --approve --region <your-region>

  2. and then modify the trust relationship on your AWS role by adding the statement below. The "AWS" principal is the exact STS role the pod starts under; right now it can be gleaned from the initial error log on the pod:

        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:sts::ACCOUNT:assumed-role/eksctl-CLUSTER_NAME-nodegro-NodeInstanceRole-RANDOM_VALUE_PER_CLUSTER"
            },
            "Action": "sts:AssumeRole"
        }
  3. modify the policy that the role references with:

        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::ACCOUNT:role/eksctl-CLUSTER_NAME-nodegro-NodeInstanceRole-*"
        }

So while this works, it's not fully automated, as I can't find a way to fetch the STS role the pod starts under from the cluster. If you know how to do that, or how to structure this better, please share :)
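
In the meantime, a rough manual check is to ask STS from a throwaway pod in the same node group (a sketch only; the amazon/aws-cli image and its defaults are assumptions on my part):

    # With no IRSA service account attached, the pod falls back to the node instance
    # role via the instance metadata service, so STS reports the assumed-role ARN
    # that pods on this node group start under.
    kubectl run aws-cli-check --rm -it --restart=Never \
      --image=amazon/aws-cli -- sts get-caller-identity
    # The "Arn" field in the output is the value to reference in the trust policy.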

I've mostly raised this question to save other people time, so there's a copy-paste solution that's as easy as the rest of the instructions in the project README!

FilipSwiatczak commented 11 months ago

Also @awprice, if escalator runs in the same node group that it controls, how can it avoid tainting its own node and forcing an escalator re-deployment? I really can't find an answer in the docs! On scale-down, using the oldest-first approach, my setup first taints the node on which the escalator pod itself runs:

time="2023-10-20T16:10:42Z" level=info msg="Sent delete request to 1 nodes" nodegroup=bitbucketpipelines-on-demand-escalator
time="2023-10-20T16:10:42Z" level=info msg="Reaper: There were -1 empty nodes deleted this round"
time="2023-10-20T16:10:42Z" level=info msg="untainted nodes close to minimum (1). Adjusting taint amount to (0)"
time="2023-10-20T16:10:42Z" level=info msg="Scaling Down: tainting 0 nodes" nodegroup=bitbucketpipelines-on-demand-escalator
time="2023-10-20T16:10:42Z" level=info msg="Tainted a total of 0 nodes" nodegroup=bitbucketpipelines-on-demand-escalator
time="2023-10-20T16:11:08Z" level=info msg="Signal received: terminated"
time="2023-10-20T16:11:08Z" level=info msg="Stopping autoscaler gracefully"
time="2023-10-20T16:11:08Z" level=info msg="Stop signal received. Stopping cache watchers"
time="2023-10-20T16:11:08Z" level=fatal msg="main loop stopped"
rpc error: code = NotFound desc = an error occurred when try to find container "50d71de1cd6378c134bcc3870d3c378860855a379a40d3a7163cf4a913733a6a": not found

I apologise if those are noobish questions, I'm not a kubernetes expert! (yet!)

Using instance protection like:

# protect instance on which escalator is running from termination
aws autoscaling set-instance-protection --instance-ids XXX --auto-scaling-group-name eks-bitbucketpipelines-ng-on-demand-XXX --protected-from-scale-in --region eu-west-1

also does not work and the Node is terminated after being tainted. Though if it did work it would probably leave escalator stuck trying to remove the node over and over.

awprice commented 11 months ago

@FilipSwiatczak No problem!

So while this works, it's not fully automated as I can't find a way to fetch the sts role the pod starts under from the cluster. If you know that or how to structure that better, please share :)

We tend to use IAM roles for service accounts (IRSA) on EKS, as this avoids the need to deal with node instance roles altogether. This documentation from AWS gives a good introduction and the steps to use them: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
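
The end state looks roughly like the following (a sketch only; the service account name and role ARN are placeholders, and the Escalator deployment needs to reference this service account):

    # With IRSA, the service account is annotated with the role to assume; the EKS
    # pod identity webhook then injects AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE
    # into pods that use it.
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: escalator            # placeholder - must match serviceAccountName in the deployment
      namespace: kube-system
      annotations:
        eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT:role/bitbucket-pipelines-escalator-role

The role's trust policy then trusts the cluster's OIDC provider instead of the node instance role, as the linked docs describe.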

if escalator runs in the same node group that it controls, how can it prevent tainting it's own node and forcing escalator re-deployment? Really can't find an answer in the docs! On scale down, using the Oldest-first approach, my setup taints the original node on which escalator pod runs first:

We avoid this by running multiple node groups in our clusters and placing Escalator on a node group that it doesn't scale up or down, so Escalator can never terminate the node it is itself running on.

Escalator is primarily designed for scaling node groups that are running job-based workloads - so ones that will end. Escalator itself could be considered a service based workload - meaning that it will run forever. So it isn't really the sort of thing that should be run on the node groups that Escalator is scaling.
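
In practice that split can be as simple as a nodeSelector on the Escalator deployment (a sketch; the group: control label and the image reference are assumptions about how a non-scaled node group might be labelled):

    # Pin Escalator to a node group it does not manage, so a scale-down can never
    # taint or terminate the node Escalator itself is running on.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: escalator
      namespace: kube-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: escalator
      template:
        metadata:
          labels:
            app: escalator
        spec:
          nodeSelector:
            group: control               # assumed label on the node group Escalator does NOT scale
          containers:
          - name: escalator
            image: atlassian/escalator   # illustrative image reference; config and volumes omitted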

FilipSwiatczak commented 11 months ago

We tend to use IAM roles for service accounts on EKS, as this will prevent the need to deal with node instance roles. This documentation from AWS gives a good introduction and steps to use them: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html

thank you @awprice! I've followed the above link and, at the very end of the pod checks, realised the Escalator pod does not have AWS_WEB_IDENTITY_TOKEN_FILE set. Those docs suggest the amazon-eks-pod-identity-webhook needs to be running to inject the token, but I suspect you are using kube2iam instead, right? Thanks again for your patience
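
(For reference, this is how I checked without exec-ing into the container; the label selector and namespace are guesses at how the deployment is labelled:)

    # The IRSA mutating webhook adds the env vars and the token volume to the pod
    # spec itself, so they are visible from the API even with a minimal container image.
    kubectl -n kube-system get pod -l app=escalator -o yaml | grep -E "AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE"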

FilipSwiatczak commented 11 months ago

It appears that when escalator is deployed in a separate node group, with the custom label escalator: worker at both the node and the pod level, escalator doesn't see any CPU or memory utilisation (it reports 0). It only works for me when it's in the same node group.

apiVersion: v1
kind: ConfigMap
metadata:
  name: escalator-config
  namespace: kube-system
data:
  nodegroups_config.yaml: |
    node_groups:
      - name: "bitbucketpipelines-ng-spot"
        label_key: "escalator"
        label_value: "worker"

With this and the IAM injection issue I'm a bit stuck. Are there any more complete deployment examples in existence please?

FilipSwiatczak commented 11 months ago

When escalator is attempting to scale a node group different from the one it's deployed in, it throws:

time="2023-10-24T10:46:18Z" level=info msg="Node IP.eu-west-1.compute.internal, aws:///eu-west-1c/ID ready to be deleted" drymode=false nodegroup=bitbucketpipelines-ng-spot
time="2023-10-24T10:46:18Z" level=error msg="failed to terminate node in cloud provider IP.eu-west-1.compute.internal, aws:///eu-west-1c/ID" error="node ip.eu-west-1.compute.internal, aws:///eu-west-1c/id belongs in a different node group than eks-bitbucketpipelines-ng-spot-id"
time="2023-10-24T10:46:18Z" level=fatal msg="node ip.eu-west-1.compute.internal, aws:///eu-west-1c/id belongs in a different node group than eks-bitbucketpipelines-ng-spot-id"

awprice commented 11 months ago

@FilipSwiatczak Some answers to your questions: