Closed: lpossamai closed this issue 1 year ago.
This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove stale label or comment or this issue will be closed in 10 days
I have the same issue with my GitLab project containing a lambda + 3 layers. I'm using the standard template: https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Terraform.gitlab-ci.yml
My workaround: use the hashicorp/archive provider and declare the packages directory where the archives are stored as a cache in my .gitlab-ci.yml:
```yaml
cache:
  key: "${TF_ROOT}"
  paths:
    - "${TF_ROOT}/.terraform/"
    - "${TF_ROOT}/packages/"
```
```hcl
locals {
  package_dir = "${path.cwd}/packages"
}

data "archive_file" "main" {
  type             = "zip"
  source_file      = "${path.cwd}/src/index.py"
  output_file_mode = "0666"
  output_path      = "${local.package_dir}/main.zip"
}

module "lambda_function" {
  source  = "terraform-aws-modules/lambda/aws"
  version = "~> v4.16.0"

  create_package          = false
  ignore_source_code_hash = false

  handler = "index.lambda_handler"
  runtime = "python3.7"

  local_existing_package = "${local.package_dir}/main.zip"
  ...
}
```
@lpossamai I had the same issue; I fixed it by storing the builds folder in the plan step and restoring it in the apply step.
Hi @MichielBijland, are you able to share some of the solution here, please? The build file the lambda creates is a .zip file, but the name is random, so it's hard to specify it using the actions/upload-artifact@v3 action.
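(One way to sidestep the random file name, a hedged sketch not taken from this thread: assuming the zips land under a `builds/` directory, a glob path lets the action pick them up without knowing the hash-based names.)

```yaml
# Hypothetical plan-job step: upload every build zip regardless of its
# hash-based file name, using a glob path.
- name: Upload lambda builds
  uses: actions/upload-artifact@v3
  with:
    name: lambda-builds
    path: builds/**/*.zip
```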
@lpossamai Sure, we use an S3 bucket to store plans and build artefacts in our merge request flows.
plan job:

```yaml
- name: Store terraform plan
  run: aws s3 cp --sse AES256 --recursive --exclude '*' --include "deployments/*/terraform.plan" --include "deployments/*/builds/*.plan.json" . s3://${{ env.AWS_PLAN_BUCKET }}/plans/${{ github.event.pull_request.number }}/
```

apply job:

```yaml
- name: Retrieve terraform plan
  run: aws s3 cp --sse AES256 --recursive s3://${{ env.AWS_PLAN_BUCKET }}/plans/${{ github.event.pull_request.number }}/ .

# ... apply terraform here ...

- name: Cleanup terraform plan
  run: aws s3 rm --recursive s3://${{ env.AWS_PLAN_BUCKET }}/plans/${{ github.event.pull_request.number }}/
```
Paths might be different in your own workflow.
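(The store/restore pattern described above can be sketched generically; the paths and file names below are illustrative, not the exact ones from this thread. The `touch` lines stand in for files that `terraform plan` and the lambda module's build step would normally create.)

```shell
# Plan job: after `terraform plan -out=terraform.plan`, bundle the plan file
# together with the lambda module's build metadata so the apply job sees
# the exact same files. Placeholder files simulate terraform's output here.
mkdir -p .terraform/lambda-builds builds
touch terraform.plan .terraform/lambda-builds/pkg.plan.json builds/pkg.zip
tar -czf plan-bundle.tar.gz terraform.plan .terraform/lambda-builds builds
# ... upload plan-bundle.tar.gz to S3 or as a pipeline artifact ...

# Apply job: restore everything before running `terraform apply terraform.plan`.
rm -rf terraform.plan .terraform builds   # simulate the fresh apply runner
tar -xzf plan-bundle.tar.gz
ls .terraform/lambda-builds/pkg.plan.json
```

The point is that the plan file alone is not enough: the module records per-package metadata next to it, and both must travel to the apply job together.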
This issue has been automatically marked as stale because it has been open 30 days with no activity. Remove stale label or comment or this issue will be closed in 10 days
This issue was automatically closed because it remained stale for 10 days.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Description
I have the same code deployed to different accounts using GitHub Actions. I've implemented the solution described here, but I'm getting the following error, as GitHub cannot find the package.
Versions

- Module version [Required]: v4.10.1
- Terraform version: 1.3.9
- Provider version(s):
Reproduction Code [Required]
My pipeline runs `terraform plan` and uploads the plan file to GitHub; later, when the PR is merged, GitHub downloads that plan file and runs `terraform apply`. I don't know if that process is the reason I see this error:

```
##[debug]module.lambda_function.null_resource.archive[0] (local-exec): FileNotFoundError: [Errno 2] No such file or directory: './.terraform/lambda-builds/package_dir/NotificationHandler-dev/dev/aeb134742ebf21621c1345f9560df33ade546d008e322876157bea4e1ebdf5f7.plan.json'
```
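(Editorial note, hedged: the error path suggests the apply job is missing the `.terraform/lambda-builds/` metadata the plan job produced, which matches the store/restore workaround above. A sketch of persisting it with GitHub's artifact actions; step names are hypothetical and the directory names are inferred from the error message.)

```yaml
# In the plan job, after `terraform plan`:
- name: Save lambda build metadata
  uses: actions/upload-artifact@v3
  with:
    name: lambda-builds
    path: |
      .terraform/lambda-builds/
      builds/

# In the apply job, before `terraform apply`:
- name: Restore lambda build metadata
  uses: actions/download-artifact@v3
  with:
    name: lambda-builds
```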