gwilym opened 6 years ago
I haven't had time to learn the Vault codebase yet, but I was able to whip up a workaround for this that others might find usable. It's based on the signing code present in the Vault CLI.
https://gist.github.com/gwilym/1db446f67a4d62db50d1139082e5b719
The output of this app should be usable as part of a vault write, like below (assuming you build it as vault-aws-login):
$ AWS_SDK_LOAD_CONFIG=1 AWS_PROFILE=aws-profile-name vault-aws-login -server vault.example.com
# capture above using your preferred method, remembering that it may prompt / read for an MFA on stdin, and use it below, like ...
$ vault write auth/aws/login role=vault-profile-name $output
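For completeness, a minimal sketch of the capture step (the variable name is arbitrary, this assumes the flags shown above, and the MFA-prompt caveat mentioned above still applies):
# Run the helper, capture its output, then pass it to vault write
output=$(AWS_SDK_LOAD_CONFIG=1 AWS_PROFILE=aws-profile-name vault-aws-login -server vault.example.com)
vault write auth/aws/login role=vault-profile-name $output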
Do you have any update? I still have the exact same problem.
I've been reading this post a few times whilst trying to solve a different issue, but then I read something that finally clicked: you have used source_profile in ~/.aws/credentials, but according to the docs it can only be used in the CLI config file, ~/.aws/config:
"Note that configuration variables for using IAM roles can only be in the AWS CLI config file."
Example configuration using source_profile:
# In ~/.aws/credentials:
[development]
aws_access_key_id=foo
aws_secret_access_key=bar
# In ~/.aws/config
[profile crossaccount]
role_arn=arn:aws:iam:...
source_profile=development
see https://docs.aws.amazon.com/cli/latest/topic/config-vars.html
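As a quick sanity check of the example above (independent of Vault), you can confirm the crossaccount profile resolves correctly with the AWS CLI:
# Should report the identity of the role from role_arn rather than the development user
AWS_PROFILE=crossaccount aws sts get-caller-identity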
As @kevinpgrant correctly pointed out, ~/.aws/credentials should not contain source_profile and/or role_arn; those belong in ~/.aws/config.
Nevertheless, I verified that the bug still exists and needs to be addressed.
@gwilym thanks a lot for the effort you already put in to investigate and come up with a workaround.
We haven't heard back regarding this issue in over 29 days. To try and keep our GitHub issues current, we'll be closing this issue in approximately seven days if we do not hear back regarding this issue. Please let us know if you can still reproduce this issue, and if there is any more information you could share, otherwise we'll be closing this issue.
Yes, this is still an issue; the last comment was from you confirming that it is still an issue?
Sorry, didn't mean to put the comment there. Too many open tabs, please ignore it.
I know this ticket is about MFA, but am I correct in thinking that it also currently isn't possible to use AWS CLI profiles which assume roles? Currently I'm working around this with the aws sts assume-role command and exporting various environment variables from its output.
Currently doing this:
AWS_ROLE=<role-arn-here>
CREDENTIALS=`aws sts assume-role --role-arn "$AWS_ROLE" --role-session-name vaultSession --duration-seconds 3600 --output=json`
export AWS_ACCESS_KEY_ID=`echo ${CREDENTIALS} | jq -r '.Credentials.AccessKeyId'`
export AWS_SECRET_ACCESS_KEY=`echo ${CREDENTIALS} | jq -r '.Credentials.SecretAccessKey'`
export AWS_SESSION_TOKEN=`echo ${CREDENTIALS} | jq -r '.Credentials.SessionToken'`
export AWS_EXPIRATION=`echo ${CREDENTIALS} | jq -r '.Credentials.Expiration'`
vault login -method=aws
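A variant of the same workaround, if your AWS CLI is v2 and recent enough to have the configure export-credentials subcommand (the profile name below is a placeholder):
# Resolve the profile (including any assume-role step) and export the resulting env vars in one go
eval "$(aws configure export-credentials --profile my-role-profile --format env)"
vault login -method=aws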
@cablespaghetti I came to exactly the same conclusion and the same workaround.
@gw0 I actually stopped doing this as it was a horrible user experience. I now have a dockerised bash script running as a Kubernetes cron job which syncs up the members of an IAM Group with Vault, so they can log in as their normal user.
Still an issue for me. Would be nice to have this fixed.
I too am hitting this when logging into a central AWS account role that is used to assume roles into other AWS accounts.
I actually managed to make it (somewhat) work in docker-compose, but I have a particular problem which drives me crazy.
First of all, I need to mention that I use this approach to simulate ECS agent behavior for the containers I have in docker-compose: https://aws.amazon.com/blogs/compute/a-guide-to-locally-testing-containers-with-amazon-ecs-local-endpoints-and-docker-compose/
This essentially creates a credentials provider server at 169.254.170.2, which the AWS SDK, the AWS CLI, and supposedly vault should fall back to for credentials if none are found in environment variables or config files.
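For reference, the SDKs find that endpoint via the container-credentials environment variable; a quick sanity check from inside a container (assuming the local endpoints container serves credentials at /creds, as in the linked guide) looks like this:
# The default credential chain looks for AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
# and resolves it against http://169.254.170.2
export AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/creds
curl -s http://169.254.170.2/creds   # should print temporary credentials as JSON
aws sts get-caller-identity          # the CLI should then resolve credentials with no other config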
So inside the container I run this script:
if [[ ${IS_LOCAL_ENV} =~ (true) ]]; then
  export VAULT_ROLE="<my role ARN>";
  rm -rf ~/.aws
  mkdir ~/.aws
  echo "Running in local env. Assuming IAM Role ${VAULT_ROLE}";
  aws sts assume-role \
    --role-arn ${VAULT_ROLE} \
    --role-session-name docker-compose-local > ~/creds.json;
  echo "Setting up AWS credentials..."
  echo "[default]" > ~/.aws/credentials
  echo "aws_access_key_id = $(cat ~/creds.json | jq -r '.Credentials.AccessKeyId')" >> ~/.aws/credentials
  echo "aws_secret_access_key = $(cat ~/creds.json | jq -r '.Credentials.SecretAccessKey')" >> ~/.aws/credentials
  echo "aws_session_token = $(cat ~/creds.json | jq -r '.Credentials.SessionToken')" >> ~/.aws/credentials
  echo "[profile assumed]" > ~/.aws/config
  echo "role_arn = $(cat ~/creds.json | jq -r '.AssumedRoleUser.Arn')" >> ~/.aws/config
  echo "source_profile = default" >> ~/.aws/config
  export AWS_PROFILE=assumed
else
  echo "Running in AWS. Falling back to the associated IAM role.";
fi
echo "Using AWS Profile: ${AWS_PROFILE}"
echo "Running Vault login..."
vault login -method=aws -path=somepath -namespace=somens header_value=someaddress role=read-only
Now the problem: for some reason, this last vault login statement fails, telling me my IAM user is not authorized, while it should be using the assumed role ARN; it completely ignores what I've put in AWS_PROFILE.
HOWEVER, if afterwards in the same container I run exactly the same vault login command outside of the script, or just put it in another script and execute that, it correctly picks up the AWS_PROFILE value, resolves to the assumed role and finally issues a login token.
I just cannot understand what causes vault login to ignore the AWS setup in the script above, yet makes it work in the mentioned cases.
UPDATE: found the root of the issue - AWS_PROFILE should reference a profile from ~/.aws/credentials, NOT from ~/.aws/config. It's a super confusing design, considering that ~/.aws/config literally contains the word profile and a reference to the source of credentials, but it is what it is. Looking at the original question, this could be exactly the same problem. To be fair, this is a problem of the AWS SDK / CLI's confusing design, not Vault's.
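In other words, a minimal sketch of how the script above can be adjusted so AWS_PROFILE points at a profile defined in ~/.aws/credentials (reusing the names from the script):
# Put the assumed-role session credentials under a named profile in ~/.aws/credentials,
# instead of defining [profile assumed] in ~/.aws/config:
{
  echo "[assumed]"
  echo "aws_access_key_id = $(jq -r '.Credentials.AccessKeyId' ~/creds.json)"
  echo "aws_secret_access_key = $(jq -r '.Credentials.SecretAccessKey' ~/creds.json)"
  echo "aws_session_token = $(jq -r '.Credentials.SessionToken' ~/creds.json)"
} >> ~/.aws/credentials
export AWS_PROFILE=assumed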
The relevant issue on the aws-sdk-go side: https://github.com/aws/aws-sdk-go/issues/3660
This is still an issue when attempting to use named AWS profiles with the Vault CLI (v1.9.3). I am not entirely certain where the issue resides, whether it be AWS or Vault. It fails regardless of MFA configuration. As suggested above, setting the AWS environment variables is the workaround. The following script can be leveraged as inspiration: it grabs the credentials from the assume-role call, sets the appropriate environment variables, logs into Vault, reads database credentials, unsets the AWS environment variables, and lastly logs into a psql database.
#!/bin/bash
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_EXPIRATION PGHOST PGDATABASE PGPASSWORD PGUSER
AWS_ROLE=$1
VAULT_ROLE=$2
DB_MOUNT_ROLE_PATH=$3
export PGHOST=$4
export PGDATABASE=$5
echo "Attempting AWS Assume Role for $AWS_ROLE"
CREDENTIALS=`aws sts assume-role --role-arn "$AWS_ROLE" --role-session-name vaultSession --duration-seconds 3600 --output=json`
if [ ! -z "$CREDENTIALS" ]
then
  export AWS_ACCESS_KEY_ID=`echo ${CREDENTIALS} | jq -r '.Credentials.AccessKeyId'`
  export AWS_SECRET_ACCESS_KEY=`echo ${CREDENTIALS} | jq -r '.Credentials.SecretAccessKey'`
  export AWS_SESSION_TOKEN=`echo ${CREDENTIALS} | jq -r '.Credentials.SessionToken'`
  export AWS_EXPIRATION=`echo ${CREDENTIALS} | jq -r '.Credentials.Expiration'`
  echo "AWS Assume Role Credentials Successful"
else
  echo "AWS Assume Role Credentials Failed"
  exit 1
fi
if vault login -no-print -method=aws role=$VAULT_ROLE
then
  echo "Vault Login Successful"
  DB_CREDENTIALS=`vault read -format=json $DB_MOUNT_ROLE_PATH`
  export PGPASSWORD=`echo ${DB_CREDENTIALS} | jq -r '.data.password'`
  export PGUSER=`echo ${DB_CREDENTIALS} | jq -r '.data.username'`
  unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_EXPIRATION
else
  echo "Vault Login Failed"
  unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_EXPIRATION
  exit 1
fi
psql
Execution example
./db_vault_access.sh <assume_role_arn> <vault_binded_role> <mount_path_to_database_role_creds> <db_endpoint> <db_name>
Hope this helps!
This is still an issue. I would like to use my profiles in the AWS config file, and they use source_profile with credential_process.
It would be especially great if Vault Agent could work with this... but I would probably file a new issue for that if it were fixed in the CLI.
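For context, a setup of the kind I mean looks roughly like this (profile names and paths are placeholders, not my real config):
# In ~/.aws/config:
[profile base]
credential_process = /usr/local/bin/my-credential-helper
[profile app]
role_arn = arn:aws:iam::123456789012:role/AppRole
source_profile = base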
Any news on this issue? I have the same problem, but in my case it is for Terraform.
NB: I was able to implement a workaround (inspired by previous examples) but it looks pretty bad IMHO: https://gist.github.com/Westixy/bc70ee782fe759094bf5c1c65c248f6c
This is affecting me as well. I'm surprised to see this issue is 4 years old and we still can't set a source_profile or role_arn in the provider block. Thanks to @Westixy for the workaround as it unblocked me.
same issue here
Here's my version of @Westixy's script. It doesn't write credentials onto the filesystem.
It also assumes that the AWS auth backend is configured to require the auth header set to the URL of Vault; this is the 3rd parameter, which is also the URL. You'll want to remove lines #12 and #29 if this is not applicable.
I use this workaround to enable the Vault Terraform provider to have a consistent config in environments where an EC2 instance or IRSA role can be used. This method assumes the selected role and stores the AWS credentials in environment variables. To use it, add the following function to your ~/.bashrc or ~/.zshrc:
# Usage: vault-aws-auth arn:aws:iam::123456789012:role/MyRole
vault-aws-auth() {
AWS_ROLE_ARN="$1"
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN
export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s" \
$(aws sts assume-role \
--role-arn $AWS_ROLE_ARN \
--role-session-name vault \
--query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" \
--output text))
}
After running vault-aws-auth, you can authenticate to Vault using the aws login method:
vault login -method=aws header_value=${VAULT_ADDR}
For the Vault Terraform provider, auth_login_aws does not work due to https://github.com/hashicorp/terraform-provider-vault/issues/1754. Instead, use the auth_login config as follows:
provider "vault" {
address = var.vault_addr
auth_login {
path = "auth/aws/login"
method = "aws"
parameters = {
role = var.vault_role
header_value = var.vault_addr
}
}
}
I can confirm that the local AWS profile is not being used at all.
I'm doing an AWS IAM Roles Anywhere setup which relies on credential_process and aws_signing_helper, and I can confirm that the profile itself works, because I've used the following code to test the default profile on the Vault pod:
package main

import (
    "fmt"
    "os"

    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    if len(os.Args) < 2 {
        fmt.Println("you must specify a bucket")
        return
    }

    // Build a session using the SDK's default credential provider chain
    // (env vars, shared credentials/config files, container or instance metadata).
    sess := session.Must(session.NewSession())
    svc := s3.New(sess)

    i := 0
    err := svc.ListObjectsPages(&s3.ListObjectsInput{
        Bucket: &os.Args[1],
    }, func(p *s3.ListObjectsOutput, last bool) (shouldContinue bool) {
        fmt.Println("Page,", i)
        i++
        for _, obj := range p.Contents {
            fmt.Println("Object:", *obj.Key)
        }
        return true
    })
    if err != nil {
        fmt.Println("failed to list objects", err)
        return
    }
}
I've tested with Python too, and in both cases, using the default provider chain, it worked.
Describe the bug
Attempting to vault login using this particular IAM profile setup in ~/.aws/credentials fails with the following error:

The same setup works OK with the official aws CLI (AWS_PROFILE=admin aws sts get-caller-identity works), as well as with basic usage of the Go SDK.

Example of ~/.aws/credentials:

Note: account IDs above may be the same account, though for this case it likely doesn't matter because Vault fails during the credential-load stage. There are likely two potential issues here, one with credential loading and one with enabling an MFA token provider for the AWS SDK.
To Reproduce
Steps to reproduce the behavior:
1. vault server -dev with credentials to utilise aws auth
2. vault auth enable aws
3. vault write auth/aws/config/client iam_server_id_header_value=vault.example.com
4. vault write auth/aws/role/admin auth_type=iam 'bound_iam_principal_arn=arn:aws:sts::SUBACCOUNTID:assumed-role/admin/*' max_ttl=8h
5. AWS_SDK_LOAD_CONFIG=1 AWS_PROFILE=admin vault login -method=aws header_value=vault.example.com role=admin
Expected behavior
Environment:
- Vault server version (vault status): 1.0.0-beta2
- Vault CLI version (vault version): Vault v1.0.0-beta2 ('8f61c4953620801477ad40f9d75063659acb5d84')

Vault server configuration file(s): None, I've been using -dev.

Additional context
Apologies up front if I'm missing anything fundamental: I am brand new to Vault. If anything looks off here let me know and I will try to clarify.
When I modify Vault's awsutil package to enable verbose errors like so ...

... I get the following extra info:

The EC2 errors are expected since I'm running this locally; however, the admin profile shouldn't need access keys within it due to source_profile. When I take the access keys and put them in the profile, that prevents the role-switch from happening and it attempts to log in using the original credentials instead (which is not expected).

Not sure if this helps, but here's an example of a simple, working-as-expected Go AWS SDK usage: