milliHQ / terraform-aws-next-js

Terraform module for building and deploying Next.js apps to AWS. Supports SSR (Lambda), Static (S3) and API (Lambda) pages.
https://registry.terraform.io/modules/milliHQ/next-js/aws
Apache License 2.0

[0.8.1] Error: AWS Secret Access Key not specified #121


liamross commented 3 years ago

I'm getting this error every time I run the deployment, even though I've run export AWS_ACCESS_KEY_ID=xxxxxx. I'm not sure why, but even running the script manually I get the same error.

Error: Error running command './s3-put -r us-west-2 -T /some/path/to/project/.next-tf/static-website-files.zip /next-tf-deploy-source20210507060606072700000009/static-website-files.zip': exit status 1. Output: [2021-05-06T23:27:14-0700] Error: AWS Access Key ID not specified
Usage:
  s3-put [--debug] [-vip] [-k key] [-r region] [-s file] [-c content_type] -T file_to_upload resource_path
  s3-put -h
Example:
  s3-put -k key -s secret -r eu-central-1 -T file.ext -c text/plain /bucket/file.ext
Options:
  -c,--content-type MIME content type
     --debug    Enable debugging mode
  -h,--help Print this help
  -i,--insecure Use http instead of https
  -k,--key  AWS Access Key ID. Default to environment variable AWS_ACCESS_KEY_ID
  -p,--public   Grant public read on uploaded file
  -r,--region   AWS S3 Region. Default to environment variable AWS_DEFAULT_REGION
  -s,--secret   File containing AWS Secret Access Key. If not set, secret will be environment variable AWS_SECRET_ACCESS_KEY
  -t,--token    Security token for temporary credentials. If not set, token will be environment variable AWS_SECURITY_TOKEN
  -T,--upload-file  Path to file to upload
  -v,--verbose  Verbose output
     --version  Show version

When I run it manually:

./.terraform/modules/nextjs/modules/statics-deploy/s3-bash4/bin/s3-put -r us-west-2 -T /some/path/to/project/.next-tf/static-website-files.zip /next-tf-deploy-source20210507060516578200000002/static-website-files.zip
[2021-05-06T23:45:07-0700] Error: AWS Secret Access Key not specified
ofhouse commented 3 years ago

Did you also export the AWS_SECRET_ACCESS_KEY variable in the terminal session? You need to expose both the AWS_ACCESS_KEY_ID and the AWS_SECRET_ACCESS_KEY variables:

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

You can check whether they are available by printing them with the echo command:

echo $AWS_ACCESS_KEY_ID
# => AKIAIOSFODNN7EXAMPLE
echo $AWS_SECRET_ACCESS_KEY
# => wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
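The export matters because Terraform's local-exec provisioner starts s3-put as a child process, and child processes only inherit variables that were exported. A minimal sketch (the key value is a placeholder):

```shell
#!/bin/sh
# A variable assigned without `export` is not visible to child processes
# (such as the s3-put script spawned by local-exec).
unset AWS_ACCESS_KEY_ID
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
sh -c 'echo "unexported: [$AWS_ACCESS_KEY_ID]"'   # => unexported: []

# After `export`, the child process can see the value.
export AWS_ACCESS_KEY_ID
sh -c 'echo "exported: [$AWS_ACCESS_KEY_ID]"'     # => exported: [AKIAIOSFODNN7EXAMPLE]
```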
liamross commented 3 years ago

They are both exposed, yes; that's why it's a bug for me. I tried exporting them directly as well as putting them in a .env file and sourcing it before running. In both cases echo ... returns the correct values, but the script seems to miss them. I fixed it by manually adding a secret.txt to the module root and manually adding:

 -k AKIAxxxxxxxxxxxxx -s secret.txt

to the script located in modules > statics-deploy > main.tf > null_resource.static_s3_upload > local-exec > command
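For anyone applying the same workaround: the change amounts to hard-coding the credentials into the provisioner command, roughly like this (a sketch, not the module's actual code; the resource layout and all argument values are placeholders):

```hcl
resource "null_resource" "static_s3_upload" {
  provisioner "local-exec" {
    # Placeholder access key; secret.txt holds the AWS Secret Access Key,
    # since s3-put's -s flag expects a file containing the secret.
    command = "./s3-bash4/bin/s3-put -k AKIAxxxxxxxxxxxxx -s secret.txt -r ${var.deploy_region} -T ${local.zip_path} ${local.target_path}"
  }
}
```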

liamross commented 3 years ago

If this is just on my end I can try uninstalling and reinstalling things, but I don't think there is anything unusual in my config other than adding a profile to the global_region provider:

# Provider used for creating the Lambda@Edge function which must be deployed
# to us-east-1 region (Should not be changed)
# https://github.com/dealmore/terraform-aws-next-js
provider "aws" {
  region  = "us-east-1"
  alias   = "global_region"
  profile = local.profile      // <-- Maybe this is the issue?
}
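This could plausibly be the issue: the AWS provider reads credentials from the named profile, but s3-put only looks at the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables (see its usage output above), so profile-only credentials never reach it. A sketch of one workaround, assuming a standard ini-style ~/.aws/credentials file (the profile name and file contents are placeholders):

```shell
#!/bin/sh
# Export a named profile's keys into the environment before running
# `terraform apply`, so scripts that only read env vars can see them.
PROFILE="my-profile"
CREDS_FILE="${AWS_SHARED_CREDENTIALS_FILE:-$HOME/.aws/credentials}"

# Extract "key = value" from the [PROFILE] section of an ini-style file.
get_cred() {
  sed -n "/^\[$PROFILE\]/,/^\[/p" "$CREDS_FILE" \
    | sed -n "s/^$1[[:space:]]*=[[:space:]]*//p" \
    | head -n 1
}

if [ -f "$CREDS_FILE" ]; then
  export AWS_ACCESS_KEY_ID="$(get_cred aws_access_key_id)"
  export AWS_SECRET_ACCESS_KEY="$(get_cred aws_secret_access_key)"
fi
```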

I know profile handling has caused problems with this package in the past; I had to maintain a fork that accepted different profile configurations. The recent changes seem to have fixed that, but perhaps there are still some remaining issues?

JT-Bruch commented 3 years ago

This occurs in the latest as well when running on Terraform Cloud.

What happens is that the environment variables present in the GH Actions run are not being set as env variables on the Terraform Cloud machine, even if you use *.auto.tfvars in the directory and provide the values.

The way I am attempting to get around it is to make an API call between terraform plan and terraform apply that updates the variables in the Terraform Cloud workspace, so that they are set for the next run.
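The call in question is Terraform Cloud's workspace variables API. A rough sketch of creating one env-category variable (the token, workspace ID, and key value are placeholders; if the variable already exists you would PATCH it instead):

```shell
#!/bin/sh
# Create an environment variable on a Terraform Cloud workspace via the
# Variables API. Placeholder credentials below -- the request is only
# sent when a real token is supplied.
TFC_TOKEN="xxxxxx"                  # placeholder API token
WORKSPACE_ID="ws-XXXXXXXXXXXXXXXX"  # placeholder workspace ID

payload=$(cat <<EOF
{
  "data": {
    "type": "vars",
    "attributes": {
      "key": "AWS_ACCESS_KEY_ID",
      "value": "AKIAIOSFODNN7EXAMPLE",
      "category": "env",
      "sensitive": true
    }
  }
}
EOF
)

if [ "$TFC_TOKEN" != "xxxxxx" ]; then
  printf '%s' "$payload" | curl --silent \
    --header "Authorization: Bearer $TFC_TOKEN" \
    --header "Content-Type: application/vnd.api+json" \
    --request POST \
    --data @- \
    "https://app.terraform.io/api/v2/workspaces/$WORKSPACE_ID/vars"
fi
```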

ofhouse commented 3 years ago

> What happens is that the environment variables that are present in the GH Actions are not being set as env variables in the terraform cloud machine, even if you use the *.auto.tfvars in the directory and provide the values.

I think when running in Terraform Cloud, local environment variables are not copied over to the remote machine during terraform plan / apply. TF Cloud has its own mechanism for setting environment variables through the UI: https://www.terraform.io/docs/cloud/workspaces/variables.html

We currently run our website with a GitHub Actions / TF Cloud workflow, and we only set the AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY in TF Cloud.