Open · hnsvill opened this issue 3 years ago
While reading my question again, I can't help but think, oh dear, what have I done?
I've been able to run the container locally, and it now works with the environment variables passed in. I just needed to reorder the arguments:
docker run --env AWS_ACCESS_KEY_ID --env AWS_REGION --env AWS_SECRET_ACCESS_KEY -p 9000:8080 hello-lambda
The other missing piece was that the function worked when run locally, but not when actually deployed as a Lambda.
What fixed it was changing how I constructed the SNS client. The function ran both directly and in a locally run container while I was pulling in the credentials as environment variables like this:
import os

import boto3
from botocore.config import Config

my_config = Config(
    region_name=os.environ['AWS_REGION']
)

botoSNSClient = boto3.client(
    'sns',
    aws_access_key_id=os.environ['AWS_ACCESS_KEY_ID'],
    aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'],
    config=my_config
)
I had gotten here by looking at the boto3 docs and the AWS Lambda Environment Variable Docs.
Because inquiring minds want to know!
While debugging this, I even added admin privileges to the execution role and changed the lambda function to print the assumed AWS_ACCESS_KEY_ID to the CloudWatch logs. The access key ID listed in the policy document (visible in the Resource Summary section) does not match what's printed in the logs. Why are the keys different?
When the lambda function is invoked, is there a docker run happening in the background each time?
Thank you -
Hannah
So do we have to use exports? lambci/docker-lambda could handle a user mounting ~/.aws:/home/sbx_user1051/.aws, and that worked wonderfully.
Hello -
Could some documentation be added on how to give the image access to environment variables?
This works locally
Testing locally, running the file that contains my function works as expected, and it picks up AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, and the other environment variables just fine. When I hardcode the variables in the function, build an image using the base image from this repo, run the image locally, and invoke the function with the command in the docs for testing images locally, the function works exactly as expected.
The real issue
As soon as I locally try
docker run -p 9000:8080 hello-lambda --env AWS_ACCESS_KEY_ID --env AWS_REGION --env AWS_SECRET_ACCESS_KEY
I get the following message: "entrypoint requires the handler name to be the first argument." Trying different formats of the above results in the same message.
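As a sketch of why the flags end up in the entrypoint (the working ordering matches the update at the top of the thread; the image name hello-lambda is from the commands above):

```shell
# docker parses its own options only up to the image name; everything after
# the image name is handed to the container's ENTRYPOINT as arguments. So in
# the failing form the --env flags reach /lambda-entrypoint.sh, which expects
# the handler name as its first argument.
bad='docker run -p 9000:8080 hello-lambda --env AWS_ACCESS_KEY_ID --env AWS_REGION --env AWS_SECRET_ACCESS_KEY'

# Moving the flags before the image name lets docker itself consume them:
good='docker run --env AWS_ACCESS_KEY_ID --env AWS_REGION --env AWS_SECRET_ACCESS_KEY -p 9000:8080 hello-lambda'

echo "$good"
```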
Things I've tried
I've also tried pushing the image and building the lambda in the console from the image, thinking that maybe, once pushed, at least AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION might somehow be added to the container when AWS runs the image. But when I test the lambda in my account, it's clear from the logs that they weren't added, at least not as environment variables. I used environment variables as specified in these docs. Even if there were some magic happening when AWS runs the image that gets those variables in there, I'd still need a way to put other environment variables into the container. I'll note that I also tried adding the environment variables through the console, but apparently the AWS keys I'm trying to get into the container are already supplied by the execution role, and I tried hitting the "Test" button from an Admin role.
I've also tried a bunch of other things inspired by the Docker docs that I don't think would be so helpful here. ¯\_(ツ)_/¯
Where I'm at in this journey
What I think may be happening is that the last line of this base image,
ENTRYPOINT ["/lambda-entrypoint.sh"]
changes how the image is actually run and requires treating it differently. I'd really like to just have my variables set up in a separate file, so that when I do a docker run <...> the container can access them, and so that I can override them during the docker run with the same kind of command I shared above:
docker run -p 9000:8080 hello-lambda --env AWS_ACCESS_KEY_ID --env AWS_REGION --env AWS_SECRET_ACCESS_KEY
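The separate-file setup described here exists in Docker as the --env-file flag. A minimal sketch, assuming the image is named hello-lambda and using placeholder values only:

```shell
# Hypothetical env.list holding the variables (placeholder values only):
cat > env.list <<'EOF'
AWS_ACCESS_KEY_ID=example-access-key-id
AWS_SECRET_ACCESS_KEY=example-secret-key
AWS_REGION=us-east-1
EOF

# The whole file is applied at run time; note the flag goes before the image
# name. Individual --env/-e flags can still be passed alongside it to
# override single entries:
#   docker run --env-file env.list -p 9000:8080 hello-lambda
```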
Thank you for your help -
Hannah (She/Her)