My colleague at work has provided me with guidance on the solution to this problem. This solution is specific to GitLab Runner in Kubernetes. Hopefully it will help someone who is also stuck in the same situation.
When deploying the runner using Helm, you need to add a `podAnnotations` property containing `iam.amazonaws.com/role` as a sub-property under `runners` in the values.yml file. It should look something like this:
```yaml
runners:
  podAnnotations:
    iam.amazonaws.com/role: my-iam-role
```
Setting `iam.amazonaws.com/role` directly under the top-level `podAnnotations` key in values.yml is incorrect, because the GitLab Runner pod itself only checks in with the GitLab server for new jobs to execute; it is not the pod that actually runs the CI/CD pipeline. That is done by the executor. By adding `podAnnotations` under `runners` as shown above, the executor pods will carry the annotation needed to obtain the required IAM role.
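For contrast, here is a rough sketch of the two placements side by side (illustrative only; exact keys can vary between chart versions):

```yaml
# Incorrect (illustrative): a top-level podAnnotations key only annotates the
# runner manager pod, which polls GitLab for jobs but does not run them.
podAnnotations:
  iam.amazonaws.com/role: my-iam-role

# Correct: nesting under runners annotates the executor pods that run the jobs.
runners:
  podAnnotations:
    iam.amazonaws.com/role: my-iam-role
```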
For a new installation, the steps will be as follows:

1. Update values.yml to include the `podAnnotations` section as specified above.
2. `helm repo add gitlab https://charts.gitlab.io`
3. `helm install --namespace <NAMESPACE> gitlab-runner -f <CONFIG_VALUES_FILE> gitlab/gitlab-runner`
If you are updating an existing installation:

1. Update values.yml to include the `podAnnotations` section as specified above.
2. `helm repo update`
3. `helm upgrade --namespace <NAMESPACE> -f <CONFIG_VALUES_FILE> <RELEASE-NAME> gitlab/gitlab-runner`
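After installing or upgrading, it may be worth checking that the annotation actually ends up on the job pods rather than only on the runner manager pod. Something along these lines should show it; the label selector and job-pod name below are assumptions based on the release name used above, not exact values:

```sh
# Annotations on the manager pod (the pod that polls GitLab for jobs):
kubectl -n <NAMESPACE> get pods -l app=gitlab-runner \
  -o jsonpath='{.items[*].metadata.annotations}'

# Annotations on a job pod spawned by the Kubernetes executor while a
# pipeline is running (pod name is a placeholder); the
# iam.amazonaws.com/role annotation should appear here:
kubectl -n <NAMESPACE> get pod runner-xxxx-project-xxxx-concurrent-0xxxx \
  -o jsonpath='{.metadata.annotations}'
```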
The Problem
I'm trying to assign a role to a GitLab Runner deployment in EKS. When I look at `spec.template.metadata.annotations`, I can clearly see the IAM role written there.
However, when I execute my CI/CD pipeline using the runner, which runs `aws sts get-caller-identity`, I get the following error message:

```
Unable to locate credentials. You can configure credentials by running "aws configure".
```
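For context, the job in question is essentially just a stage that calls the AWS CLI. A minimal sketch of such a job is below; the image name and entrypoint override are illustrative, not my exact pipeline:

```yaml
check-aws-identity:
  image:
    name: amazon/aws-cli:latest   # placeholder image
    entrypoint: [""]              # override the aws entrypoint so the runner can start a shell
  script:
    - aws sts get-caller-identity
```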
Looking at Kube2IAM logs, I see the following warning and error messages:
time="2022-04-03T17:24:31Z" level=warning msg="Using fallback role for IP 172.x.x.x" time="2022-04-03T17:24:31Z" level=info msg="GET /latest/meta-data/iam/security-credentials/ (200) took 0.026894 ms" req.method=GET req.path=/latest/meta-data/iam/security-credentials/ req.remote=172.x.x.x res.duration=0.026894 res.status=200 time="2022-04-03T17:24:31Z" level=warning msg="Using fallback role for IP 172.x.x.x" time="2022-04-03T17:24:31Z" level=error msg="Error assuming role AccessDenied: User: arn:aws:sts::xxxxxxxxxxxx:assumed-role/ireland-eks-nprxxxxxxxxx/i-xxxxxxxxxx is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::xxxxxxxxxxxx:role/fallback-role\n\tstatus code: 403, request id: c3d3b597-b00f-xxxx-xxxx-xxxxxxxxxx" ns.name=gitlab-runner pod.iam.role="arn:aws:iam::xxxxxxxxxxxx:role/fallback-role" req.method=GET req.path=/latest/meta-data/iam/security-credentials/fallback-role req.remote=172.x.x.x time="2022-04-03T17:24:31Z" level=info msg="GET /latest/meta-data/iam/security-credentials/fallback-role (500) took 81.617168 ms" req.method=GET req.path=/latest/meta-data/iam/security-credentials/fallback-role req.remote=172.x.x.x res.duration=81.617168 res.status=500
It looks like Kube2IAM was not able to detect the IAM role specified in the Pod annotations.
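The "Using fallback role" warnings suggest kube2iam did not find an `iam.amazonaws.com/role` annotation on the pod making the metadata call and fell back to its configured default. That default usually comes from the `--default-role` flag on the kube2iam DaemonSet; a rough sketch of where that lives (values here are illustrative, not my actual manifest):

```yaml
# Excerpt from a kube2iam DaemonSet spec (illustrative values)
containers:
  - name: kube2iam
    image: jtblin/kube2iam:0.10.9
    args:
      - --default-role=fallback-role   # used when a pod has no role annotation
      - --iptables=true
      - --host-interface=eni+          # interface prefix for the AWS VPC CNI on EKS
      - --host-ip=$(HOST_IP)
```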
Here's what the GitLab Runner's Deployment looks like:
What I've tried
- Upgrading to kube2iam-2.6.0 to ensure the latest release is being utilized.
- Verifying the IAM role annotation under `spec.metadata.annotations`.

I'm out of ideas. I would appreciate any suggestions to resolve this issue.