cattle-ops / terraform-aws-gitlab-runner

Terraform module for AWS GitLab runners on ec2 (spot) instances
https://registry.terraform.io/modules/cattle-ops/gitlab-runner/aws
MIT License

Invalid count argument error forces local_file.public_ssh_key to be deployed first #176

Closed Glen-Moonpig closed 2 years ago

Glen-Moonpig commented 4 years ago

I had just updated from commit 16d2e2b556641ff67771c3c47b6c81a07f03a677 to 26310fbf2287c4087ba16f2d6835b962bd46430b and tried a fresh deployment and found that it errors:

```
Error: Invalid count argument

  on .terraform/modules/runner/main.tf line 4, in resource "aws_key_pair" "key":
   4: count = var.ssh_key_pair == "" && var.ssh_public_key != "" ? 1 : 0

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
```

I had to deploy the `local_file` resource for the SSH key using the `-target` argument before I could deploy the terraform-aws-gitlab-runner module. This was not previously an issue. I can see the count expression has changed from `count = var.ssh_key_pair == "" ? 1 : 0` to `count = var.ssh_key_pair == "" && var.ssh_public_key != "" ? 1 : 0`.
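For reference, the failing pattern can be sketched as a minimal caller configuration (names and resources here are illustrative, not the reporter's exact code; the assumption is that `ssh_public_key` is fed from a resource attribute that is unknown at plan time):

```hcl
# Hypothetical caller setup that triggers "Invalid count argument":
# the public key comes from a resource attribute, so Terraform cannot
# evaluate the module's count expression during plan.
resource "tls_private_key" "runner" {
  algorithm = "RSA"
}

resource "local_file" "public_ssh_key" {
  filename = "${path.root}/generated/id_rsa.pub"
  content  = tls_private_key.runner.public_key_openssh
}

module "runner" {
  source = "cattle-ops/gitlab-runner/aws"

  # Unknown until apply, so the module's expression
  #   count = var.ssh_key_pair == "" && var.ssh_public_key != "" ? 1 : 0
  # cannot be resolved at plan time.
  ssh_public_key = local_file.public_ssh_key.content
}
```

Passing a value known at plan time (for example `file("id_rsa.pub")`) would avoid the error, which is why the workaround of applying the `local_file` resource first also works.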

I am using terraform 0.12.19.

nhooey commented 4 years ago

@Glen-Moonpig Could you paste the exact invocation you used to get this working?

Glen-Moonpig commented 4 years ago

> @Glen-Moonpig Could you paste the exact invocation you used to get this working?

You just need to use the `-target` argument when you run terraform/terragrunt so that only the `local_file` resource is deployed (to the local terraform cache folder). Once the file is created you can run the same terraform command without the `-target` argument to deploy all resources.

For example:

```shell
terraform apply -target=local_file.public_ssh_key
```

lsorber commented 4 years ago

Ran into the same issue, am I using this module in an unintended way or is this a bug?

npalm commented 4 years ago

It is a bug, but I am thinking about removing the three different ways to set the SSH key. The current approach is far too complex.

I would like to support only a user-provided reference to an existing EC2 key, removing the part where the module generates a key. Besides using an SSH key, Session Manager is supported as well. Does anyone have a suggestion?

npalm commented 4 years ago

I am planning to drop support for an SSH EC2 key managed by the module. To access the instances you can either inject an EC2 key pair or use AWS Session Manager; see #192 for the PR. Please feel free to comment on that PR.
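Under that design, a caller would pass only a reference to an EC2 key pair that already exists, so no count expression in the module depends on values unknown at plan time. A sketch, assuming an input along the lines of the module's existing `ssh_key_pair` variable (the exact variable name in the final PR may differ):

```hcl
# Sketch: manage the key pair outside the module and hand over its name.
resource "aws_key_pair" "runner" {
  key_name   = "gitlab-runner"
  public_key = file("~/.ssh/id_rsa.pub") # read at plan time, always known
}

module "runner" {
  source = "cattle-ops/gitlab-runner/aws"

  # Hypothetical input: the name of the externally managed key pair.
  ssh_key_pair = aws_key_pair.runner.key_name
}
```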

kayman-mk commented 2 years ago

@npalm SSH support has been dropped with #389. This issue seems to be obsolete now and can be closed, right?

npalm commented 2 years ago

@kayman-mk thx for the reminder