Do not delete a resource but create a new resource when change is detected #15485

Open Puneeth-n opened 7 years ago

Puneeth-n commented 7 years ago

Can Terraform be configured to create a new resource but not delete the existing resource when it sees a change? For example, with AWS Step Functions, one can only create or delete a state machine, not modify it.

I want Terraform to create a new state machine each time it sees a change, but not delete the old one, as it might contain states.

crose-varde commented 2 years ago

We have a use case for this: we have some Terraform configurations that manage both a resource and a CloudWatch log group that the resource logs to. If we ever want to change the name of the log group, we can't just change it in the configuration, because a log group name change forces recreation, and our log groups are undeletable for audit reasons. To accomplish what we want, we have to manually terraform state rm the log group before applying our changes. abandon_on_destroy is exactly what we need to avoid this manual step.
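
For concreteness, the requested flag might look something like this (a sketch only: abandon_on_destroy is hypothetical and not an existing Terraform argument, and the resource name is made up):

    resource "aws_cloudwatch_log_group" "audit" {
      # Renaming a log group forces replacement, but these groups must never be deleted.
      name = "audit-log-group"

      lifecycle {
        # Hypothetical flag: on destroy/replace, drop the old group from state
        # instead of calling the API to delete it.
        abandon_on_destroy = true
      }
    }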

shridhargavai1 commented 2 years ago

There is a way:

  1. Plan
  2. Apply
  3. terraform state rm "resource_name" — this removes the resource from the current state.
  4. Apply again. This worked perfectly on GCP for creating 2 successive VMs using the same Terraform script (see the sketch below). The only catch is that we need to script fetching the current resources, store them somewhere, and generate the commands for step 3. When destroying, we can add the resources back using terraform state mv.
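
A sketch of that workflow, assuming a GCP VM managed as google_compute_instance.vm (the address is made up):

    terraform plan
    terraform apply
    # Forget the VM without deleting it; the instance keeps running in GCP.
    terraform state rm google_compute_instance.vm
    # The next apply creates a brand-new VM instead of modifying the old one.
    terraform apply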

Note: this carries some risk, as the very first VM does not get deleted (it is now considered outside Terraform's scope), so its cost may persist. You should keep backups (incremental, for states).

Hope this helps.

flovouin commented 1 year ago

I have another use case, similar but not identical to those that were presented here, which could be solved by something like abandon_on_destroy.

On GCP, I'd like to remove a BigQuery table from the Terraform state without deleting the actual underlying table, which would result in an unrecoverable loss of data. Setting some kind of lifecycle parameter would make it clear that I know what a destroy means, and that I do not want the actual data to be deleted. The entire process is part of CI/CD, and running terraform state rm is not really an option. The reason behind this use case is tables storing events piped from Pub/Sub topics: when a topic is created, a BigQuery table and a Pub/Sub subscription are created at the same time by Terraform. When a Pub/Sub topic is deleted (after having been deprecated), the Pub/Sub subscription should also be deleted. However, the BigQuery table should be kept around: no new data will be piped into it, but the historical data is still relevant for analysis and auditing.
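
As a sketch of what such a configuration could look like (abandon_on_destroy is hypothetical, and the dataset/table names are made up):

    resource "google_bigquery_table" "events" {
      dataset_id = google_bigquery_dataset.analytics.dataset_id
      table_id   = "topic_events"

      lifecycle {
        # Hypothetical: when the topic and this resource are removed from the
        # configuration, keep the table and its historical data; only forget it.
        abandon_on_destroy = true
      }
    }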

Please note that the google_bigquery_table has a deletion_protection argument that kinda interferes with the lifecycle (it has to be set to false in order for a - real - deletion to succeed). One could argue that my feature request should be implemented by the GCP provider, and I'd be fine with that. However, it sounds like the deletion_protection argument is close to the prevent_destroy Terraform lifecycle argument (the documentation even states "a measure of safety against the accidental replacement of [...] database instances"). For me, this shows that the boundary between Terraform's and the provider's responsibilities is not crystal clear. If I had to choose, I'd rather have the abandon_on_destroy behaviour implemented once by Terraform in a generic manner, rather than relying on each provider implementing it in its own way.

On a slightly different note, by browsing around I stumbled upon CloudFormation's DeletionPolicy, which looks like their solution to the need expressed in this issue. (I never used CloudFormation though, and could be completely wrong.)

mt-empty commented 1 year ago

I also want this. I wanted to automate GitHub repository creation, so I created a simple Terraform script that creates a new GitHub repository. The problem is that whenever I want to create a new repo, I have to delete the old state.
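
(For this particular case, one idiomatic workaround is to drive the repositories from a set, so that adding a name creates a new repo without touching the existing ones. A sketch, assuming the integrations/github provider; the variable and repo names are made up:

    variable "repo_names" {
      type    = set(string)
      default = ["service-a", "service-b"]
    }

    resource "github_repository" "repo" {
      for_each = var.repo_names

      name       = each.value
      visibility = "private"
    }

This avoids the state deletion entirely, though it does not help when the old resource must leave the configuration.)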

rasmus-rudling commented 1 year ago

Any existing workarounds?

marocchino commented 1 year ago

I can't give you a code sample, but if you erase the difference in the task definition after terraform apply with jq or some such script, the plan's behavior will be as intended.
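
One way to read that suggestion, as a rough sketch (the resource type aws_ecs_task_definition is an assumption here, and hand-editing state is risky; keep backups):

    terraform apply
    # Pull the state, drop the task definition from it, bump the serial, and push it back.
    terraform state pull \
      | jq '.serial += 1 | del(.resources[] | select(.type == "aws_ecs_task_definition"))' \
      > patched.tfstate
    terraform state push patched.tfstate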

mvadu commented 1 year ago

Adding another use case: we use Terraform to automate creating new stacks in Grafana Cloud, with the stack name passed in as a variable. The first run creates a new stack and stores the details in the state. We don't want to destroy the first stack (and thus lose all its data) when creating the second one. Our current approach is a teardown stage that removes the references from state: terraform state list | %{if($_ -match "new_stack|grafana_data_source|grafana_cloud_api_key"){terraform state rm $_}} - but a better, less hacky way would be preferred.

shridhargavai1 commented 1 year ago

Are resources only ever added? On GCP I just add additional resources to the same code, and it deploys them, keeping the original state as is and just adding the new state.

Is this what you are looking for?

aharo-lumificyber commented 1 year ago

Does anyone have any good ideas on how to do this when building things within Azure?

spkane commented 7 months ago

I am adding this here; it was originally its own feature request but has been deemed a duplicate of this one...

Use Cases

The core idea is to create a way to tell Terraform to remove a resource from the state file during a destroy workflow instead of contacting the owning API to delete the object.

This would make it possible to handle nested objects, like kubernetes_namespace resources, that exist inside a Kubernetes cluster you will destroy in the same workflow, and which you therefore do not need Terraform to remove via the owning API.

Attempted Solutions

HashiCorp would recommend that people use one Terraform workflow to spin up a k8s cluster and then install things into that cluster in a separate Terraform workflow.

However, there are many times when at least some bootstrapping will occur in the initial Terraform workflow. This fix would let users quickly mark resources that need not be destroyed via their API.

Proposal

    lifecycle {
      # This would cause Terraform to remove the resource from the state file
      # instead of calling the owning API to delete it.
      state_only_destroy = true
    }
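
Hypothetical usage for the cluster-bootstrap case above (state_only_destroy is the proposed argument and does not exist today; the namespace name is made up):

    resource "kubernetes_namespace" "bootstrap" {
      metadata {
        name = "bootstrap"
      }

      lifecycle {
        # The cluster itself is destroyed in the same workflow, so asking the
        # Kubernetes API to delete this namespace first is wasted work.
        state_only_destroy = true
      }
    }
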
darkn3rd commented 5 months ago

Any update? It's been 7 years?

alexeyinkin commented 5 months ago

Another use case for abandon_on_destroy.

I have a Google Cloud Spanner instance, created outside Terraform, that should be permanent. With Terraform, I want to create a database in that instance and then drop the database on destroy (but keep the instance).

I need the instance in the configuration to refer to its properties, so I use an import block, which is a declarative way to skip creation. If only for symmetry, we need a declarative way to skip the deletion too, which abandon_on_destroy is.
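
A sketch of the creation half (import blocks are real Terraform 1.5+ syntax, but the project, instance name, and exact import ID format here are assumptions):

    import {
      to = google_spanner_instance.main
      id = "my-project/permanent-instance" # check the provider docs for the exact ID format
    }

    resource "google_spanner_instance" "main" {
      name         = "permanent-instance"
      config       = "regional-us-central1"
      display_name = "Permanent instance"
      num_nodes    = 1
    }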

state rm before destroy is a poor substitute in my case, because if the destroy fails, the instance will already have been removed from the state. This brings tons of logical problems. The very concept of deleting in reverse order says that state rm before destroy is wrong: the right way is to let destroy figure out the order of deletion (or abandoning).

Another idea for this use case is to add an option to import block to unimport the resource when destroying the configuration. This is even better symmetry but it does not allow for other use cases suggested for abandon_on_destroy here.

bcsgh commented 5 months ago

@alexeyinkin Your case, where the config doesn't actually manage the resource, seems like a prime example of where to use a data "google_spanner_instance" ... block.
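
For example (assuming the google_spanner_instance data source; the instance and database names are made up):

    data "google_spanner_instance" "main" {
      name = "permanent-instance"
    }

    # Terraform manages the database's lifetime, but not the instance's.
    resource "google_spanner_database" "app" {
      instance            = data.google_spanner_instance.main.name
      name                = "app-db"
      deletion_protection = false # allow Terraform to actually drop the database on destroy
    }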

Now I was kinda expecting you were going to say you want Terraform to manage the configuration (i.e. all the knobs) but not the lifetime (creation/deletion) and I'd see that as a possible case but, from what you described, that doesn't seem to be your use case.

umesh07feb2022 commented 1 month ago

I'm creating an AMI from the instances on which I'm deploying code and using that AMI for my launch template. When I create an AMI from another instance, it destroys the previous AMI. This locks me in: if something goes wrong, I can't roll back to my previous AMI version. Is there any way in Terraform to create a new AMI while keeping the older AMIs?
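
For illustration, the desired configuration might look like this (abandon_on_destroy is the hypothetical flag discussed in this issue; the resource names and variable are made up):

    resource "aws_ami_from_instance" "release" {
      name               = "app-${var.release_id}"
      source_instance_id = aws_instance.builder.id

      lifecycle {
        create_before_destroy = true # real argument: build the new AMI before touching the old one
        abandon_on_destroy    = true # hypothetical: keep the old AMI around for rollback
      }
    }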

Bombdog commented 1 day ago

This "state_only_destroy = true" flag would be the dogs bollocks for us. If you do frequent destroy and rebuild cycles you can preserve one or two resources and then actually pick them up again using an import{ } block. There's not only pet things like databases but sometimes you have infrastructure that simply wont go away. I can think of a few resources on GCP, but also I am working with Vault and a vault auth backend - it can only be emptied out on our setup, curiously it cant be deleted at all. So when you absolutely cannot delete something but you need to destroy everything else then this state_only_destroy = true idea is a winner as far as I'm concerned.