`docker run` allows you to pass in environment variables using `-e`: https://docs.docker.com/engine/reference/run/#env-environment-variables. So you can pass in `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` when you run the container.
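For example, something like this (a minimal sketch; the key values and region are placeholders for your own credentials):

```sh
# Pass (placeholder) credentials directly as environment variables when
# starting the container. Add AWS_SESSION_TOKEN the same way if your
# credentials are temporary.
docker run --rm \
  -e AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEY \
  -e AWS_SECRET_ACCESS_KEY=exampleSecretKey \
  -e AWS_REGION=us-east-1 \
  lambci/lambda:nodejs12.x
```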
You can also share your local AWS config with the container, though this has lower fidelity with running a live Lambda (which uses env vars for credentials). For example:
```sh
docker run --rm \
  -v "$HOME"/.aws:/home/sbx_user1051/.aws:delegated,ro \
  -e AWS_PROFILE=myprofile \
  -e AWS_SDK_LOAD_CONFIG=1 \
  lambci/lambda:nodejs12.x
```
You can test whether this loads your credentials without invoking your Lambda:
```sh
docker run --rm \
  -v "$HOME"/.aws:/home/sbx_user1051/.aws:delegated,ro \
  -e AWS_PROFILE=myprofile \
  -e AWS_SDK_LOAD_CONFIG=1 \
  -w /var/runtime \
  --entrypoint node \
  lambci/lambda:nodejs12.x \
  -e 'new (require("aws-sdk").S3)().listBuckets().promise().then(console.log)'
```
> When you input your AWS credentials, is the actual (deployed) Lambda function being run, or is it still the local version?
Still local. It's just allowing the Lambda function that's running locally to use those credentials.
> One of my Lambda functions makes calls to and from an S3 bucket. How would I go about testing this locally on my machine?
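Combining the approaches above: mount your function code as the task directory, pass in credentials, and invoke the handler with a test event. A minimal sketch, assuming your code lives in the current directory with an exported `handler` in `index.js` (the profile name, bucket, handler path, and event payload are placeholders):

```sh
# Mount the function code at /var/task and the shared AWS config at the
# container user's home dir, then invoke index.handler with a
# hypothetical test event naming the bucket.
docker run --rm \
  -v "$PWD":/var/task:ro \
  -v "$HOME"/.aws:/home/sbx_user1051/.aws:ro \
  -e AWS_PROFILE=myprofile \
  -e AWS_SDK_LOAD_CONFIG=1 \
  lambci/lambda:nodejs12.x \
  index.handler '{"bucket": "my-test-bucket"}'
```

Note that this exercises the real S3 API with your credentials, so point it at a test bucket.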