Shaiou opened this issue 4 years ago
Are you running this on EKS Fargate?
Apologies for the delay. I'm not running EKS Fargate; only the cluster control plane is managed. I'm running custom nodes/ASGs with the standard node AMI from AWS.
I hit this issue as well, on EKS Fargate. You could try KIAM as a workaround.
I also found out that Consul makes a call to the instance metadata service, which is why it was failing on Fargate: https://www.hashicorp.com/blog/consul-auto-join-with-cloud-metadata
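For context, this is what AWS cloud auto-join looks like on the agent; a minimal sketch, where the tag key and value are hypothetical and would need to match your EC2 instance tags:

```sh
# Sketch: join via AWS cloud auto-join (go-discover). The discovery call
# needs AWS credentials, which on Fargate cannot come from instance metadata.
consul agent -retry-join "provider=aws tag_key=consul tag_value=server" \
  -data-dir /opt/consul
```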
Thanks, I figured out how to work around it. I just wanted to flag the issue so the maintainers can make sure they're up to date with all the latest AWS authentication methods, in case they're not simply using the AWS SDK.
Great that you fixed it! Can you share your solution?
My bad for the confusion. I did not fix it; I used a dirty workaround with a DNS entry and an ALB to avoid the AWS native discovery, at least until it's fixed in Consul itself.
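In case it helps anyone, the workaround looks roughly like this, with a hypothetical DNS name that resolves to the load balancer in front of the servers:

```sh
# Sketch of the workaround: retry-join against a plain DNS name instead of
# AWS auto-discovery, so no AWS credentials are needed for discovery.
consul agent -retry-join "consul.example.internal" -data-dir /opt/consul
```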
Hello, commenting to bump this issue for attention.
Overview of the Issue
I'm currently running EKS and using the official Helm chart to deploy a Consul cluster that should join an external cluster via cloud auto-join. I followed the AWS documentation to annotate the service account so that it maps to an IAM role (IRSA). However, Consul (version 1.8.0) ignores these credentials and tries to access the node metadata instead, which fails because I followed the recommendation to block pod access to the node instance profile. Can someone help me with this, please?
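For reference, the service account mapping referred to above is the standard IRSA annotation; a sketch with hypothetical names and a hypothetical role ARN:

```sh
# Sketch: map the Consul server service account to an IAM role (IRSA).
# EKS then injects AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE into the pods.
kubectl annotate serviceaccount consul-server --namespace consul \
  eks.amazonaws.com/role-arn=arn:aws:iam::123456789012:role/consul-auto-join
```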
The extra config from the pod:
The env vars seem OK from the pod:
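For anyone reproducing this, the IRSA-injected variables can be checked roughly as follows; pod and namespace names are hypothetical, and the values below only show the expected shape:

```sh
# Sketch: confirm the IRSA-injected variables are present on the Consul pod.
kubectl exec --namespace consul consul-server-0 -- env | grep '^AWS_'
# Expected shape (actual values will differ):
# AWS_ROLE_ARN=arn:aws:iam::123456789012:role/consul-auto-join
# AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
```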
The error logs from the pod:
After installing the AWS CLI on the pod, the credentials seem to work (the assumed role matched AWS_ROLE_ARN).
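That verification was presumably something along these lines; the pod name is hypothetical:

```sh
# Sketch: verify the web identity credentials resolve to the expected role.
kubectl exec --namespace consul consul-server-0 -- aws sts get-caller-identity
# The "Arn" field should show an assumed-role session for the role in AWS_ROLE_ARN.
```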
Operating system and Environment details