cattle-ops / terraform-aws-gitlab-runner

Terraform module for AWS GitLab runners on ec2 (spot) instances
https://registry.terraform.io/modules/cattle-ops/gitlab-runner/aws
MIT License

terraform apply in gitlab CI/CD is different from local: No such file or directory #1133

Closed: mxxnseat closed this issue 3 months ago

mxxnseat commented 5 months ago

Describe the bug

I use version 7.6.1 of cattle-ops/gitlab-runner/aws. I roll out the GitLab runner from my laptop with no issues; to do so I run:

terraform init \
    -backend-config="address=https://gitlab.com/api/v4/projects/project_id/terraform/state/$TF_STATE_NAME" \
    -backend-config="lock_address=https://gitlab.com/api/v4/projects/project_id/terraform/state/$TF_STATE_NAME/lock" \
    -backend-config="unlock_address=https://gitlab.com/api/v4/projects/project_id/terraform/state/$TF_STATE_NAME/lock" \
    -backend-config="username=username" \
    -backend-config="password=$GITLAB_ACCESS_TOKEN" \
    -backend-config="lock_method=POST" \
    -backend-config="unlock_method=DELETE" \
    -backend-config="retry_wait_min=5"

terraform apply -target "module.gitlab_runner"

OK...

The reason for the -target flag is that my root configuration also contains a Kafka module, and I only want to apply the gitlab_runner module:

module "gitlab_runner" {
  source = "../../modules/gitlab"

  name        = "name"
  region      = var.region
  environment = var.environment
  vpc_id      = var.vpc_id
  subnets     = var.gitlab_subnets
}

module "kafka" {
  # configuration
}

After that I push my commit to the repository, run a tagged job on my runner, and get:

╷
│ Error: reading ZIP file (builds/lambda_function_9de860b79aae19cab2bd00759173d6ad23a6f563194f6e9b2acef79608a49066.zip): open builds/lambda_function_9de860b79aae19cab2bd00759173d6ad23a6f563194f6e9b2acef79608a49066.zip: no such file or directory
│ 
│   with module.gitlab_runner.module.runner.module.terminate_agent_hook.aws_lambda_function.terminate_runner_instances,
│   on .terraform/modules/gitlab_runner.runner/modules/terminate-agent-hook/main.tf line 20, in resource "aws_lambda_function" "terminate_runner_instances":
│   20: resource "aws_lambda_function" "terminate_runner_instances" {
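
For context on that path: a zip named builds/lambda_function_<sha256>.zip is the typical output of Terraform's archive_file data source, which renders the package locally at plan time, so the file only exists on the machine that ran terraform plan. A rough sketch of the pattern (the source path is an assumption, not the module's exact code):

# Sketch of the archive_file pattern that produces builds/lambda_function_<hash>.zip.
# Because the zip is rendered locally during plan, a separate apply job must
# receive it as an artifact or the apply will fail exactly as above.
data "archive_file" "terminate_runner_instances_lambda" {
  type        = "zip"
  source_file = "${path.module}/lambda/lambda_function.py" # assumed path
  output_path = "builds/lambda_function_${filesha256("${path.module}/lambda/lambda_function.py")}.zip"
}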

My Terraform configuration for the gitlab-runner module:

data "aws_security_group" "default" {
  name   = "default"
  vpc_id = var.vpc_id
}

data "aws_region" "current" {
  name = var.region
}

module "runner" {
  // https://registry.terraform.io/modules/cattle-ops/gitlab-runner/aws/latest
  source  = "cattle-ops/gitlab-runner/aws"
  version = "7.6.1"

  environment = "gitlab-${var.environment}"

  vpc_id    = var.vpc_id
  subnet_id = element(var.subnets, 0)

  runner_gitlab = {
    url                                           = "https://gitlab.com"
    preregistered_runner_token_ssm_parameter_name = "token"
  }

  runner_instance = {
    name                        = "${var.name}-gitlab-docker-default"
    spot_price                  = "on-demand-price"
    collect_autoscaling_metrics = ["GroupDesiredCapacity", "GroupInServiceCapacity"]
    ssm_access                  = true
  }

  runner_worker_docker_services_volumes_tmpfs = [{
    volume  = "/var/lib/mysql",
    options = "rw,noexec"
  }]

  runner_worker_docker_volumes_tmpfs = [
    {
      volume  = "/var/opt/cache",
      options = "rw,noexec"
    }
  ]

  runner_networking = {
    security_group_ids = [data.aws_security_group.default.id]
  }

  runner_worker_docker_options = {
    privileged = true
    volumes    = ["/certs/client"]
  }

  runner_worker_docker_machine_autoscaling_options = [
    {
      periods    = ["* * 0-9,17-23 * * mon-fri *", "* * * * * sat,sun *"]
      idle_count = 0
      idle_time  = 60
      timezone   = "America/New_York"
    }
  ]

  tags = merge(local.tags, {
    "tf-aws-gitlab-runner:example"           = "runner-default"
    "tf-aws-gitlab-runner:instancelifecycle" = "spot:yes"
  })
}

To Reproduce

Steps to reproduce the behavior:

  1. Deploy the GitLab runner from a local machine
  2. Run the plan and apply commands in GitLab CI/CD

Am I doing something wrong?

UPDATE: My colleague ran the same command on their own computer and the issue disappeared. The question is: why?

tmeijn commented 5 months ago

Question: are you running the plan and apply in different jobs? terraform plan generates the lambda zip, which terraform apply needs, so if you do not pass the artifact down to the apply job, terraform apply will not find it and will error out.
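
A minimal sketch of what that could look like (the stage and job names, the tfplan file name, and the artifact layout are assumptions, not taken from this issue):

# .gitlab-ci.yml sketch: pass the plan file AND the builds/ directory
# (where the lambda zip is generated during plan) to the apply job.
stages:
  - plan
  - apply

plan:
  stage: plan
  script:
    - terraform init # plus the -backend-config flags shown above
    - terraform plan -target "module.gitlab_runner" -out tfplan
  artifacts:
    paths:
      - tfplan
      - builds/ # zip rendered here at plan time

apply:
  stage: apply
  script:
    - terraform init # plus the -backend-config flags shown above
    - terraform apply tfplan
  dependencies:
    - plan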

mxxnseat commented 5 months ago

Question: are you running the plan and apply in different jobs? terraform plan generates the lambda zip, which terraform apply needs, so if you do not pass the artifact down to the apply job, terraform apply will not find it and will error out.

Yes, I run terraform plan in a separate job, but I create an artifact and pass it to the apply job.
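
One thing worth checking (a hypothetical debugging step, not something confirmed in the thread): whether the artifact actually contains the builds/ directory rather than only the plan file, since the error points at a missing builds/lambda_function_*.zip. For example, at the start of the apply job:

apply:
  stage: apply
  before_script:
    - ls -la builds/ || echo "builds/ did not arrive with the artifact" # hypothetical sanity check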

github-actions[bot] commented 3 months ago

This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 15 days.