tomasaschan opened 5 years ago
Hi @tomasaschan,

`prevent_destroy` cannot support references like that, so if you are not seeing an error then the bug is that the error isn't being shown; the reference will still not be evaluated.
Just ran into this but with a "normal" variable. It would be great if we could use variables in the lifecycle block, because without variables I'm literally unable to use prevent_destroy in combination with a "Destroy-Time Provisioner" in a module.
I'm hitting this, too. Please allow variables derived from static values to be used in lifecycle blocks. This would let me effectively use modules to run dev & test environments with the same config as prod, while providing deletion protection for prod resources. AWS RDS has a `deletion_protection` option that is easy to set. S3 buckets have an `mfa_delete` option which is difficult to enable. I found no way to prevent accidental deletion of an Elastic Beanstalk Application Environment.
```hcl
module "backend" {
  source = "../backend"
  flavor = "dev"
  ...
}

resource "aws_elastic_beanstalk_environment" "api_service" {
  lifecycle {
    prevent_destroy = (var.flavor == "prod") // <-- error
  }
  ...
}
```
I've seen multiple threads like this. There is an ongoing issue (https://github.com/hashicorp/terraform/issues/3116) which is currently open, but @teamterraform seem to have made it private to contributors only. Setting lifecycle properties from variables is required in a lot of production environments. We are trying to give our development teams control of their infrastructure while maintaining standards using modules. Deployment is 100% automated for us, and if the dev teams need to make a change to a resource, or remove it, then that change will have gone through appropriate testing and peer review before being checked into master and deployed.

Our modules need to be capable of taking lifecycle settings as variables. Can we get an answer as to why this is not supported?
My use case is very much like @weldrake13's. It would be nice to understand why this can't work.
I would also appreciate it if Terraform allowed variables for specifying "prevent_destroy" values. As a workaround, since we use the S3 backend for managing our Terraform workspaces, I block access to the Terraform workspace S3 bucket for the Terraform IAM user in my shell script after Terraform has finished creating the prod resources. This effectively locks down the infrastructure in the workspace and requires an IAM policy change to re-enable it.
I write tests for my modules. I need to be able to re-run tests over and over. There's no way for me to delete buckets in a test account and set protection in a production account. Swing and a miss on this one.
Is there a general issue open with Terraform to improve conditional support? Off the top of my head I can think of the following limitations:

- the `provider =` argument cannot use conditionals
- `lifecycle` blocks cannot use variables (this issue)

All of these make writing enterprise-level Terraform code difficult and more dangerous.
Same as: #3116 (https://github.com/hashicorp/terraform/issues/3116). Can you close, please?
Hashicorp locked down 3116. If this gets closed then those following can't view the issue.
It's over 4 years since #3116 was opened, I think we'd all appreciate some indication of where this is? Is it still waiting on the proposal mentioned in this comment, #4149 ?
Thought I'd offer up a workaround I've used in some small cases. The example here is a module for a Google Cloud SQL instance: in production I obviously want to protect it, but in more ephemeral environments I want to be able to pull the environment down without temporarily editing the code.
It's not pretty but it works, and is hidden away in the module for the most part:
### variables.tf

```hcl
variable "conf" {
  type = map(object({
    database_version = string
    ...
    prevent_destroy = string
  }))
  description = "Map of configuration per environment"
  default = {
    dev = {
      database_version = "POSTGRES_9_6"
      ...
      prevent_destroy = "false"
    }
    # add more env configs here
  }
}
```
variable "env" {
type = string
description = "Custom environment used to select conf settings"
default = "dev"
}
### main.tf

```hcl
resource "google_sql_database_instance" "protected" {
  count = var.conf[var.env]["prevent_destroy"] == "true" ? 1 : 0
  ...
  lifecycle {
    prevent_destroy = "true"
  }
}

resource "google_sql_database_instance" "unprotected" {
  count = var.conf[var.env]["prevent_destroy"] == "false" ? 1 : 0
  ...
  lifecycle {
    prevent_destroy = "false"
  }
}
```

### outputs.tf

```hcl
output "connection_string" {
  value = coalescelist(
    google_sql_database_instance.protected.*.connection_name,
    google_sql_database_instance.unprotected.*.connection_name,
  )
  description = "Connection string for accessing database"
}
```
The module originated prior to 0.12, so those conditionals could well be shortened using `bool` now. Also, I appreciate this is only one resource duplicated, and it would be much worse for larger configurations.
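As a rough sketch of that `bool` shortening (variable names follow the snippet above; the `prevent_destroy` field is assumed to be retyped from `string` to `bool`):

```hcl
variable "conf" {
  type = map(object({
    prevent_destroy = bool
  }))
  default = {
    dev = { prevent_destroy = false }
  }
}

variable "env" {
  type    = string
  default = "dev"
}

resource "google_sql_database_instance" "protected" {
  # With a bool there is no string comparison needed.
  count = var.conf[var.env].prevent_destroy ? 1 : 0
  # (other required arguments omitted)

  lifecycle {
    prevent_destroy = true
  }
}
```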
It is so funny. I am asking this question WHY? WHY?
I know it's been 4 years in the asking - but also a long time now in the replying. Commenting on #3119 was locked almost 2 years ago with the message "We'll open it again when we are working on this".

Can someone with inner knowledge of the work on this "feature" please step up and give us some definitive answers on simple things like:
Thanks for your work - Hashicorp - this tool is awesome! Not slanting at you, just frustrated that this feature is languishing and I NEED it ... Now....
@Penumbra69 and all the folks on here: I hear you, and the use cases you're describing totally make sense to me. I'm recategorizing this as an enhancement request because although it doesn't work the way you want it to, this is a known limitation rather than an accidental bug.
Hi team
Maybe a duplicate of https://github.com/hashicorp/terraform/issues/3116 ?
@danieldreier given that Hashicorp has acknowledged this issue as a "known limitation" based on your June 12, 2020 comment, is the company able to provide a standard or recommended workaround to address this?
I think the recommended workaround is find-and-replace value before running terraform :(
> I think the recommended workaround is find-and-replace value before running terraform :(
Wow, this is a real problem: either we duplicate all resources with prevent_destroy, or we use m4 or something to do a search-and-replace (like you have to do with Dockerfiles). Pretty ugly :-)
Ahh, I tried using `dynamic` blocks; that didn't work either. There is another issue that could be similar to this one: https://github.com/hashicorp/terraform/issues/24188
This makes prevent_destroy pretty painful to use with dev/prod environments for multiple resources. I would like to use it with an assertion, something like:
```hcl
locals {
  is_prod = replace(lower(var.environment), "prod", "") != lower(var.environment)
}

module "prod_lock" {
  source  = "rhythmictech/errorcheck/terraform"
  version = "1.0.0"

  assert        = local.is_prod == false || var.prevent_destroy == true
  error_message = "you are potentially attempting to destroy prod. Please stop."
}

resource "null_resource" "really_important" {
  lifecycle {
    prevent_destroy = var.prevent_destroy
  }

  depends_on = [
    module.prod_lock
  ]
}
```
I'm in dire need of this functionality support. I'm having to find and replace instead of just setting it from a local or a var. I'm not sure why this is such a point of contention to get working but I'm sure there's more to it than what I can see, so any additional information on the progress or lack thereof would be greatly appreciated!
We're also looking forward to using variables in lifecycle blocks.

Example: set prevent_destroy to true by default, and overwrite it in our dev environment on a case-by-case basis.
Does using `deletion_protection` circumvent this issue in restrictive lifecycle blocks? https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance#deletion_protection
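For resources that expose such an argument, it can: `deletion_protection` is an ordinary provider argument, evaluated like any other expression, so unlike `lifecycle.prevent_destroy` it accepts variables. A minimal sketch (the `var.flavor` variable is an assumption):

```hcl
resource "aws_db_instance" "example" {
  identifier     = "example"
  instance_class = "db.t3.micro"
  engine         = "postgres"
  # (other required arguments omitted)

  # Unlike lifecycle.prevent_destroy, this argument can be driven
  # by a variable or conditional expression.
  deletion_protection = var.flavor == "prod"
}
```

The protection is enforced by AWS at destroy time rather than by Terraform at plan time, and it only exists for resources whose API offers it.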
Just ran across this as well. How disappointing. Maybe my "+1" comment will encourage Hashicorp to re-evaluate this.
> Does using `deletion_protection` circumvent this issue in restrictive lifecycle blocks? https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance#deletion_protection

Hardly. That's a feature of one very specific resource; `prevent_destroy` is applicable to any Terraform resource.
Looks like this is still an issue.
I am still facing this issue in 2021. Any suggested workarounds?
As a possible workaround resources can be duplicated and conditionally created based on a variable:
```hcl
resource "aws_s3_bucket" "example" {
  count  = var.prevent_destroy ? 1 : 0
  bucket = "example"

  lifecycle {
    prevent_destroy = true
  }

  # ...
}

resource "aws_s3_bucket" "example_no_prevent_destroy" {
  count  = var.prevent_destroy ? 0 : 1
  bucket = "example"

  lifecycle {
    prevent_destroy = false
  }

  # ...
}
```
+1 for requested feature and to keep this alive.
I know it's (probably?) more complicated than it sounds, but I almost spit out my coffee when I read `ctrl-f`.
Wow! It came as a surprise that using a variable for `prevent_destroy` is not supported! It makes complex environments super difficult!
Duplicating all resources isn't exactly a great solution for large environments, and editing code temporarily exposes all instances of a resource to accidental deletion rather than specific instances. Neither of these is actually a great workaround. +1 for this feature
+1
Just a reminder to please use the 👍 reaction on the original post to upvote issues - we do sort by most upvoted to understand which issues are the most important. This also reduces "noise" in the notification feed for folks following this issue. Thanks!
We have the same needs. For us it makes sense to have different lifecycle `prevent_destroy` values ([true|false]) for different environments.
@teamterraform
For example:
workspace = production or develop

```hcl
locals {
  life_cycle = {
    production = "true"
    develop    = "false"
  }
}

prevent_destroy = lookup(local.life_cycle, terraform.workspace)
```
If we're thinking of very large companies, it's not maintainable to do massive updates of that statement to delete develop or test environments, and then undo the change again to set the production environment protected. Also, while the test environment is being deleted, the production environment is unprotected.

There is more chance of human error -> anyone can forget to set that statement back to true after changes.

We hope for answers or changes in the near future. Thank you so much in advance.
Is there any progress on this issue? We need to be able to control `prevent_destroy` for different target environments, and we don't really want to duplicate code.
This feature would be very useful. For example, Oracle cloud doesn't support deletion prevention. So we have to duplicate all the code.
+1 I've just run into this issue, please Hashicorp fix it. I need to run Terratest with `prevent_destroy = false`, but in prod I want the property set to true.
+1
+1, want to protect prod databases from deletion while allowing dev environments to freely delete their databases. Having to manually set `prevent_destroy = false` on dev environments creates the risk of setting `prevent_destroy = false` in prod.
+1, any news?
Just slammed into this brick wall as well. Any fix coming soon, Hashicorp?
Just came across this too :(
I was able to work around this by creating per-environment override files which are copied into place as part of the deployment pipeline. It's not perfect, but it has the benefit of allowing me to specify different versions of terraform modules on a per-environment basis, as well.
Example directory layout:
```
❯ tree
.
├── Jenkinsfile
├── README.md
├── iam.tf
├── locals.tf
├── main.tf
├── overrides
│   ├── dev
│   │   └── rds_override.tf
│   ├── qa1
│   ├── prod
│   └── uat
├── rds.tf
├── route53.tf
├── variables.tf
├── versions.tf
└── vpc.tf
```
I can't share the script that copies the files, but it uses the `find` command to find any override files under a certain path and the `exec` flag to copy them to the desired path.
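The override-file approach works because Terraform automatically merges any file whose name ends in `_override.tf` over the matching blocks in the same directory. A hypothetical `overrides/dev/rds_override.tf` (the resource name `main` is an assumption) might look like:

```hcl
# Hypothetical contents of overrides/dev/rds_override.tf. When the pipeline
# copies this file into the root module, Terraform merges it over the
# matching aws_db_instance block in rds.tf, disabling prevent_destroy
# for the dev environment only.
resource "aws_db_instance" "main" {
  lifecycle {
    prevent_destroy = false
  }
}
```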
Would also like to see support for `lifecycle.prevent_destroy = var.A_STATIC_VAR`. My use case was inside a module that uses the GitHub provider. My module supports figuring out whether the user specified an existing repo or the repo needs to be created. I want to default this to "true", but permit users to override it with variables to the module for ephemeral environments.
WHY?!? 4 years to fix such a small issue!? WHY?? It was requested by so many people!
I still face it with Terraform v1.3.2 in 2022... really disappointed
> WHY?!? 4 years to fix such a small issue!? WHY?? It was requested by so many people!

Without having looked at the code, fixing such "small" issues might actually cascade into a massive amount of codebase rewrite if it hits architectural limits. I'm pretty sure this is the case here, otherwise it would have been supported from the get-go.
> but more ephemeral environments I want to be able to pull the environment down without editing the code temporarily.

^ This. Please, this is really frustrating.
+1 Has Hashicorp given any reasoning as to why they're not fixing this?
+1
Terraform Version
Terraform Configuration Files
Steps to Reproduce
terraform init
Description
The documentation notes that
so while I'm bummed that this doesn't work, I understand that I shouldn't expect it to.
However, we discovered this behavior because running `terraform init` failed where it had once worked. And indeed, if you comment out the variable reference in the snippet above and replace it with `prevent_destroy = false`, it works - and if you then change it back, it keeps working. Is that intended behavior? And will it, if I do this workaround, keep working?
Debug Output