hashicorp / terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
https://www.terraform.io/

prevent_destroy should let you succeed #3874

Open ketzacoatl opened 8 years ago

ketzacoatl commented 8 years ago

Call me crazy, but I'm willing to call the current implementation of prevent_destroy a bug. Here is why: the current implementation of this flag prevents you from using it for half of its use cases.

The net result is more frustration when trying to get Terraform to succeed without destroying your resources; prevent_destroy adds to the frustration more than it alleviates it.

prevent_destroy is for these two primary use cases, right?

1) You don't want this resource to be deleted, and you want to see errors when TF tries to do that.
2) You don't want this resource to be deleted, and you don't want to hear a peep out of TF - TF should skip over its usual prerogative to rm -rf on change.
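
For context, the flag is set in a resource's lifecycle block like this (the resource type and name here are placeholders, not from the original report):

resource "aws_eip" "important" {
  lifecycle {
    prevent_destroy = true
  }
}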

I see no reason why TF must return an error when prevent_destroy is used for the second case; in doing so, TF completely ignores my utterly clear directive and keeps me from getting work done. As a user, I end up feeling as though TF is wasting my time: I have simple end goals that I cannot reach while I spin my wheels begging TF to create more resources without destroying what exists.

You might say the user should update their plan so it is not in conflict, and I agree that is what you want in most cases. Honestly, though, that is not always the right solution for the situation at hand when using a tool like TF in the real world. I believe in empowering users, and the current implementation of this flag prevents sensible use of the tool.

jen20 commented 8 years ago

Hi @ketzacoatl - thanks for opening this! Based on your description I'm certainly sympathetic to the idea that Terraform should not terminate with an error code if the user's intent is to prevent resources from being deleted, but I'm inclined to say that the output should indicate which resources had prevent_destroy as a factor in the execution. @phinze, do you have any thoughts on this?

apparentlymart commented 8 years ago

Definitely sympathetic to this use-case too. I think a concern is that if Terraform skips one part of the update, that may have downstream impacts in the dependency graph, which can be fine if you're doing it intentionally but would be confusing if Terraform just did it "by default".

Do you think having the ability to exclude resources from plan, as proposed in #3366, would address your use-case? I'm imagining the following workflow:

I'm attracted to this solution because it makes the behavior explicit while still allowing you to proceed as you said. It still requires a little more work to understand what is failing and thus what you need to exclude, but once you're sure about it you only need to modify your command line rather than potentially rebuilding a chunk of your config.

ketzacoatl commented 8 years ago

I'd agree, @jen20: I am primarily looking for the ability to tell TF that it does not need to quit/error out hard. Same for @apparentlymart's comment on default behavior - I agree, this is a specific use case and not meant as a default.

Do you think having the ability to exclude resources from plan, as proposed in #3366, would address your use-case?

I had to re-read that a few times to make sense of how it works (the doc addition helps: "Prefixing the resource with ! will exclude the resource." - this is for the -target arg). That is close, but if my understanding is correct, no, it would not get me through.

In my particular situation, I have an aws_instance resource, and I increased count and modified user_data. I tell TF to ignore user_data, but #3555 is preventing that from working, so TF wants to destroy my instance before creating additional ones. All I want is for TF to create more resources to fit the current spec, leaving the existing node alone (I'm only changing user_data, just leave it be). I would like to see the same if I change EBS volumes.
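
A sketch of the kind of config in play here, in the era's syntax (names and values are illustrative, not from the original comment):

resource "aws_instance" "node" {
  count     = 3                      # bumped up to add instances
  ami       = "ami-abc123"           # placeholder
  user_data = "${file("init.sh")}"   # modified, but meant to be ignored...

  lifecycle {
    # ...via ignore_changes, which #3555 breaks for this case
    ignore_changes = ["user_data"]
  }
}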

#3366 is about using exclusions with -target, which would have TF skip that resource. That does not help when you want to modify/update the resource TF wants to destroy: TF wants to destroy a resource, and I want that resource both left alone and included in the plan to apply.

When I found prevent_destroy in the docs, it sounded perfect, except it was clear it would not work, because it throws an error whenever TF wants to destroy a resource that has prevent_destroy enabled. I believe a user should be able to tell TF that the hard error/exit can be skipped this time.

mrfoobar1 commented 8 years ago

Would it be possible to get an additional flag when calling: terraform plan -destroy [ -keep-prevent-destroy ]

I have the same problem: I have a few EIPs associated with some instances. I want to be able to destroy everything but keep the EIPs, for obvious reasons like whitelisting, but I run into the same kind of problem. I understand what destroy is all about, but in some cases it would be nice to get a warning saying this and that didn't get destroyed because of lifecycle.prevent_destroy = true.

@ketzacoatl exclude would be nice!

erichmond commented 8 years ago

+1, I need something along these lines as well.

Would #3366 allow you to skip destroying a resource, but modify it instead? My specific use case is a staging RDS instance I want to persist (never be destroyed), while the rest of my staging infrastructure disappears. As a side effect of the staging environment disappearing, I need to modify the security groups on the RDS instance, since the groups themselves are being deleted.

So, if I had

Upon running "terraform destroy -force" I'd see:

phinze commented 8 years ago

Hey folks,

Good discussion here. It does sound like there's enough real world use cases to warrant a feature here.

What about maintaining the current semantics of prevent_destroy and adding a new key called something like skip_destroy, indicating that any plan that would destroy this resource should be automatically modified to not destroy it?
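
A sketch of what that proposal might look like in config (skip_destroy is hypothetical and does not exist in Terraform):

resource "aws_db_instance" "staging" {
  # ... resource arguments ...

  lifecycle {
    skip_destroy = true  # hypothetical: drop any destroy of this resource from the plan
  }
}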

Would something like this address all the needs expressed in this thread? If so, we can spec out the feature more formally and get it added.

ketzacoatl commented 8 years ago

@phinze, that sounds good, yes. I'd hope that in most cases TF would be able to let the apply proceed while the user flags some resources as being left alone/not destroyed; your proposal seems to provide the level of control needed while retaining sensible semantics.

erichmond commented 8 years ago

:+1: to what @ketzacoatl said

trmitchell7 commented 8 years ago

:+1: to what @phinze proposed. This would be very convenient for me right now. :)

chadgrant commented 8 years ago

I keep running into this. I would like the ability for TF to create a resource only if it does not exist, and to never delete it. I would like to keep some EBS or RDS data around while keeping the rest of my stack ephemeral (letting TF apply/destroy at will).

Currently I've been doing this with different projects/directories, but it would be nice to keep the entire stack together as one piece.

I too thought prevent_destroy would not create an error, and I have been hacking my way around it quite a bit :(

tsailiming commented 8 years ago

:+1: to what @phinze said. During apply I want the resource to be created, but ignored during destroy. Currently, I have to explicitly define the rest of the targets just to spare one S3 resource.

gservat commented 8 years ago

+1 - just ran into this. Another example is key pairs. I want to create (import) them if they don't exist, but on destroy I don't want to delete the key pair, as other instances may be using the shared key pair.

Is there a way around this for now?

mrfoobar1 commented 8 years ago

Yes, split your terraform project into multiple parts.

Example:
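
A sketch of the kind of split being suggested (directory names are illustrative):

persistent/main.tf   # long-lived resources such as the shared key pair; applied once, never destroyed
ephemeral/main.tf    # the disposable stack; run terraform destroy here freely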


cescoferraro commented 7 years ago

This is a must for me to be able to work with digitalocean_volume.

colutti commented 7 years ago

+1

mitchellh commented 7 years ago

Changing the flags here to enhancement, but still agree this is a good idea.

bbakersmith commented 7 years ago

+1

jbrown-rp commented 7 years ago

Is this being looked at? I can't imagine there are many use cases that would NOT benefit from it. One example: anyone using key pairs, ever.

steve-gray commented 7 years ago

This is absolutely one of the banes of my life too. I've got dozens of resources I want to preserve from accidental overwrites, such as DynamoDB tables. A pair of flags for:

The flags could be something explicit like:

This would allow us to have the desired behaviour and only require operator intervention when the resource still exists but is not mutable into the target state during a terraform apply (i.e. you've still got the same table, but the keys are now incompatible, or some other potentially destructive update).

glasser commented 7 years ago

Here's the use case we'd like this for: we have a module we use either for production (where some resources, like Elastic IPs, should not be accidentally deleted) or for running integration tests (where all resources should be destroyed afterwards).

Because of #10730/#3116, we can't set prevent_destroy on these resources conditionally, which would be the ideal solution. As a workaround, we'd be happy to have our integration-test scripts run terraform destroy --ignore-prevent-destroy if that existed.
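
The conditional setting being asked for would look something like this, which Terraform rejects today (per #10730, interpolation is not supported for prevent_destroy; the variable name is illustrative):

resource "aws_eip" "ingress" {
  lifecycle {
    # Invalid today: prevent_destroy must be a literal, not an interpolation
    prevent_destroy = "${var.is_production}"
  }
}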

andyjcboyle commented 7 years ago

This would definitely be a useful feature.

I've been using Terraform for less than a month and ran into needing this feature in order to protect a DNS managed zone. Everything else in my infrastructure is transient, but a new DNS zone comes with computed (potentially new) name servers for what is a delegated zone, which introduces an unnecessary manual step to update the parent DNS managed zone, not to mention that DNS propagation delays add significant latency to any automated testing.

Reading the above, it looks like the workaround is to split my project into different parts. I'm not sure I can pass a resource from one project into another, but I guess I can use variables in the worst case.

kaii-zen commented 7 years ago

I'm hitting a slightly different use case with Vault. I'm not 100% sure this belongs here; it might be best handled in the Vault resource itself.

Example:

resource "vault_generic_secret" "github_auth_enable" {
  path      = "sys/auth/github"
  data_json = "...some json..."
}

resource "vault_generic_secret" "github_auth_config" {
  path      = "auth/github/config"
  data_json = "...some json..."
  depends_on = ["vault_generic_secret.github_auth_enable"]
}

The problem is that the auth/github/config path does not even support the delete operation: the entire auth/github prefix gets wiped as soon as sys/auth/github is deleted. Not only does this result in an error, it also leaves a broken state: a subsequent apply would assume that key still exists.

mengesb commented 7 years ago

My instance of this issue involves things like rapid development with docker_image / docker_container usage.

I set prevent_destroy = true on the docker_image resources because I don't want Terraform deleting the image from my disk; that way I can rapidly destroy/create and iterate during development. Having set that, I now have to use a fancy scripting method to build a targeted destroy list covering everything BUT the docker_image resources:

# Build a -target argument for every resource in state except the docker_image
# resources, then print the resulting destroy command:
TARGETS=$(for I in $(terraform state list | grep -v docker_image); do echo " -target $I"; done)
echo terraform destroy $TARGETS

What I would like is two modes: one that lets the run still succeed because the plan says "hey, don't destroy this", and one where, if I am bold and say -force, I mean "yeah... I said not to destroy it, but I'm forcing you to do it anyway... OBEY ME!"

bradenwright commented 7 years ago

Any update on this? This has been open for 1.5 years, and it is not fun to organize terraform around this shortcoming.

The workaround is pretty ridiculous: I have separate "persistent" and "ephemeral" modules in every project, but I still need to use -target or some other way of skipping the persistent modules when destroying (or they spew errors).

HighwayofLife commented 7 years ago

Is there any work being done on this? It feels like prevent_destroy is designed to "annoy you because you put this flag in" when you want to destroy resources, rather than to destroy what I want to destroy except for the things I've marked with prevent_destroy. Use case 1 in the original post seems like a silly use case because it's only designed to alert you and error out. In reality, adding prevent_destroy on a resource actually seems to mean preventing destroy on your entire infrastructure, unless you want to muddle together a fancy, hacky script to create a series of -target flags.

Not a fun way to manage infrastructure that needs to have persistent and non-persistent pieces.

ghost commented 6 years ago

Agree: terraform destroy -force -force

Harrison-Miller commented 6 years ago

To echo what @andyjcboyle said: when creating an aws_route53_zone you get a delegation set of four random name servers. I use the zone to define a subdomain; my domain is not managed by Terraform, however, so I must insert the NS records manually. If I want to tear down my environment and then redeploy it (which I do often), I must manually reinsert the new name servers.

It would be much nicer if I could have a lifecycle flag like ignore_destroy/skip_destroy that allowed everything else in the plan to be destroyed except the marked resource.

This seems like a core ability that terraform is missing; I would really like to see it added soon.

cescoferraro commented 6 years ago

Hashicorp does not want this. It's the only explanation why it has not been implemented yet.

apparentlymart commented 6 years ago

This feature was originally added more as a "prevent replace", to avoid accidentally changing a "forces new resource" attribute on a critical object.

Its current interaction with terraform destroy is not a critical part of that, so I think it would be reasonable to strike a compromise here:

I think as long as that warning is present there is no harm in allowing the operation to continue. We just need to make sure Terraform explains its behavior, since we don't want people to think they destroyed everything while something is still running.

glasser commented 6 years ago

That doesn't satisfy the use case in my comment above (which is honestly really an argument for the ability to set prevent_destroy conditionally rather than for this precise feature).

apparentlymart commented 6 years ago

Hi @glasser! I think setting this attribute dynamically from interpolation is a separate thing from how it behaves in response to terraform destroy. I've not thought deeply yet about what would need to happen for interpolations to be supported on prevent_destroy -- it's a bit different from the other lifecycle attributes in that it doesn't affect graph construction at all, so it may have different constraints -- but I think that's covered by #10730 and is orthogonal enough that these problems can be addressed separately.

For this issue, let's just focus on the problem that prevent_destroy effectively breaks terraform destroy. I think my proposal above deals with that, but I'm curious to hear if anyone sees any problems with that compromise.

bradenwright commented 6 years ago

@apparentlymart I'd say it's better, but no.

"If a diff contains a replacement this is a fatal error" seems like a deal breaker for me.

If something is marked prevent_destroy it should be skipped and a warning message displayed (whether deleting or recreating). There also needs to be a way for this to be changed/overridden. Personally I'd prefer what @glasser mentioned, a conditional prevent_destroy, so you can simply set it to a var and change the var, but I'd be open to -force -force, etc.

Whatever the mechanism, needing to break persistent/stateful pieces into a separate terraform module is a pain!

HighwayofLife commented 6 years ago

Why can't prevent_destroy, attached to a specific resource, do exactly what you would expect: prevent destruction of the resource it's attached to?

Adding an --ignore-prevent-destroy flag would make sense. A redundant -force -force seems like bad design to me.

apparentlymart commented 6 years ago

Hi all! Thanks for the feedback.

Having weighed the earlier discussion and the new feedback, I have a new proposal for consideration, which consists of two mostly-independent parts that I'll discuss separately below.


terraform destroy would retain its current behavior of failing when prevent_destroy resources are present, so that it remains consistent with its default mission of either destroying everything or returning an error.

In addition, it would gain a new option -force-all that causes its behavior to differ:

This allows for the use-case of spinning up a temporary dev/test environment that shares a config with a production environment (containing prevent_destroy resources) and then being able to entirely destroy that test environment without editing the config.


Issue #4149 describes a feature that includes, at its foundation, the idea of a "deferral", which is a sort of "anti-target" that excludes certain resources and their dependencies from the graph for a particular run.

We could then add a -defer-prevent-destroy option that would defer prevent_destroy resources and their dependencies from the graph for a particular run, allowing for the behavior of "destroy everything except the stuff that's prevent_destroy":

$ terraform destroy -defer-prevent-destroy
or
$ terraform plan -destroy -defer-prevent-destroy

It would also allow ignoring -/+ (replace) diffs to allow other updates to proceed without error:

$ terraform apply -defer-prevent-destroy

This allows the user to explicitly allow prevent_destroy resources to be ignored for destroy/update purposes, while allowing other unrelated resources to be updated.

Similarly to -target, this option is intended for manual use in exceptional circumstances, rather than for routine operations or orchestration. If there are different parts of a system that need to be routinely updated separately, splitting them into separate configurations is the best approach. Partial updates are bothersome in a collaborative environment since "how we got here" is no longer evident from the configuration changelog alone.


I think these two proposals together provide the opt-in behavior people are looking for here. The default of explicitly failing must be retained to preserve Terraform's promise of either entirely completing an action or exiting with an error if it cannot (existing automation of Terraform depends on this), but these two opt-in mechanisms allow some additional workflows that weaken that promise when requested.

glasser commented 6 years ago

That sounds great! I'd suggest that -override-prevent-destroy would be a clearer name than -force-all, especially since it seems you'd still want the confirmation prompt unless -force is passed (i.e., it's confusing if -force-all -force behaves differently from -force-all!).

ketzacoatl commented 6 years ago

For me, the two key points have been to (a) maintain TF's current behavior by default for normal day-to-day work, but also (b) give the user the option to tell TF to continue with the update (without removing what you don't want removed) when you need it (infrequent, special circumstances).

I haven't been following for the last few months, but I think I follow @apparentlymart's proposal, and I think it fulfills the spirit of this request.

HighwayofLife commented 6 years ago

From the Resources Documentation

prevent_destroy (bool) - This flag provides extra protection against the destruction of a given resource. When this is set to true, any plan that includes a destroy of this resource will return an error message.

It seems the intent is to protect that particular resource from destruction, yet it actually protects against the destruction of the entire plan. Martin's proposal seems designed to work around the (flawed?) design of the lifecycle option, introducing a series of workarounds so the flag can serve its intended purpose. Wouldn't it be better to change the design of the option to reflect the intended purpose as described in the documentation?

Adding -defer-prevent-destroy is okay, except that it feels like a dirty workaround to make the prevent_destroy flag do what it's intended to do. If retaining current functionality is more important than changing the design of the option, perhaps it's better to add an additional lifecycle block option, something like protect.

-force-all or -override-prevent-destroy still makes sense. (The latter, I believe, conveys a bit more clearly what it does.) But we should have a configuration option as well, which is why I suggest a new boolean, protect, in the lifecycle block.

My suggestion would be for protect to protect the entire plan from destruction, and for prevent_destroy to prevent destruction of that one resource while allowing the plan to otherwise proceed.
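
A sketch of that proposed split in semantics (protect is hypothetical, and prevent_destroy here carries the relaxed per-resource meaning being proposed, not today's behavior):

resource "aws_db_instance" "prod" {
  lifecycle {
    protect = true          # hypothetical: any plan destroying this fails hard (today's behavior)
  }
}

resource "aws_eip" "ingress" {
  lifecycle {
    prevent_destroy = true  # proposed: skip destroying this resource; the rest of the plan proceeds
  }
}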

ckyoog commented 6 years ago

I just ran into this issue and studied a lot of the good discussion here.

@apparentlymart :+1: to your proposal. @glasser is right, if -force-all and -force appear together, it will be confusing.

@ketzacoatl it looks like this issue won't be closed very soon, so I have a suggestion for your case which might help until it is resolved.

My idea is to put your aws_instance config into a module named something like ec2_inst, so in the file ec2_inst/config.tf you write the aws_instance config like this:

variable "inst_count" {
}

variable "user_data" {
}

...    #other variables(parameters of module)

resource "aws_instance" "ec2" {
  count = "${var.inst_count}"
  user_data = "${var.user_data}"
  ......
}

Then write your top-level config like this:

module "aws_instance_batch1" {
  source = "./ec2_inst"
  inst_count = 3   #assume you create 3 ec2 at first time
  user_data = "user data"
  ...    #other arguments
}

When you want to add more instances with new user data, you can add config like this:

module "aws_instance_batch2" {
  source = "./ec2_inst"
  inst_count = 2    #assume you create 2 ec2 this time
  user_data = "new user data"
  ...    #other arguments
}
lunderhage commented 6 years ago

I would like to see three possible values for the prevent_destroy parameter, telling Terraform what to do:

- no - do not prevent (default)
- stop - prevent, with a hard error, as today
- continue - prevent the destroy and do not bother me about it

true/false can be kept for backwards compatibility.

No command line flags, please; it should be up to the configuration to decide what is done and what is not.
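
A sketch of that tri-state setting (the string values are hypothetical; only true/false exist today):

resource "aws_ebs_volume" "data" {
  lifecycle {
    prevent_destroy = "continue"  # hypothetical: silently skip destroying this resource
  }
}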

gregorskii commented 6 years ago

I agree with @lunderhage: CLI options should not be needed. It should be possible to set up the configuration so that destroy skips over the protected resources. Terraform's final output could warn that protected resources remain after a destroy; if those resources really are meant to be destroyed, one can force it or remove the blocking config.

My goal is to keep the number of arguments supplied to Terraform as small as possible.

In the past, I have created separate terraform projects to protect things like root-level DNS config and databases. It would be much easier if all of this config could remain in the same project.

matt-deboer commented 6 years ago

Another wrinkle to this discussion: I have a use case where I want to combine create_before_destroy with prevent_destroy, resulting in skipping the cleanup of deposed resources (to be cleaned up later, either manually or via a different process).

I have a specific blue-green deploy scenario in mind, where I'd keep the old version of an aws_autoscaling_group online for some time to allow quick rollback.

I think the continue option mentioned by @lunderhage would support this...

connaryscott commented 6 years ago

I wish I never saw this post.

If I import a resource that this terraform template needs to refer to, such as an instance profile managed by another process like CloudFormation, I was thinking that this:

resource "aws_iam_instance_profile" "circleci_profile" {
    lifecycle {
        prevent_destroy = true
    }
}

would be the answer. If it's set to false (or absent), then terraform wants to delete the profile, which breaks CloudFormation. If I use this configuration, then I cannot destroy anything, as I get this awful message:

* aws_iam_instance_profile.circleci_profile: aws_iam_instance_profile.circleci_profile: the plan would destroy this resource, but it currently has lifecycle.prevent_destroy set to true. To avoid this error and continue with the plan, either disable lifecycle.prevent_destroy or adjust the scope of the plan using the -target flag.

Is there any other recourse here?

gaui commented 6 years ago

prevent_destroy is too limited. If I want to tear down everything except my VPC/network setup, I would have to mark the VPC/network resources with prevent_destroy = true and then use terraform destroy -target= to pinpoint what I want destroyed. With @lunderhage's continue option it would prevent destruction of those resources (without an error) and continue with the plan. It would be similar to an ignore_destroy option.
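
Today that pinpointing means enumerating targets by hand, along these lines (resource addresses are illustrative):

$ terraform destroy -target=aws_instance.app -target=aws_autoscaling_group.workers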

This is related to #2159

whereisaaron commented 6 years ago

I came looking for prevent_destroy because I want to easily destroy everything disposable, except for select irreplaceable items like a couple of EIP, S3, EBS, and EFS resources. I want to be able to run terraform destroy and then terraform apply on everything except a couple of flagged, irreplaceable resources. You'd think (I thought, like @connaryscott and @HighwayofLife and others did) that was what prevent_destroy on resources was for!

Instead, it actually works as a global flag that prevents you from destroying anything because one resource is flagged (unless you jump through hoops and specifically mask items on the command line). Was this the intended behavior of a per-resource flag? What is even the use case for that?

Now I don't dare import my irreplaceable resources, because there is no way to protect them without preventing terraform destroy from working at all.

I guess what we need now is something like:

terraform destroy --except-prevent-destroy-resources

But that really just exposes that the current behavior, as @ketzacoatl originally stated, is a bug. 😕

cdimitroulas commented 6 years ago

This issue affects me as well. I think the discussion here is very productive, and I agree with the opinions that extra CLI flags should not be needed to achieve the desired prevent_destroy behaviour; the configuration should describe exactly how terraform destroy should work.

Is this issue on the roadmap? It would be nice to have a rough idea of whether it will be dealt with in the near future.

fcgravalos commented 6 years ago

Facing this issue too.

I want to be able to destroy everything but the VPCs and network resources, so I don't break VPC peerings. We need something that lets us keep those resources without breaking terraform destroy for the rest of the infrastructure.

HighwayofLife commented 6 years ago

This issue has been open and affecting engineers for 2.5 years now. When is something going to be done about it?

whereisaaron commented 6 years ago

A lot of users have been circling this same issue for years; see #16392, #10730, #3116, #2253. The recurring problems:

  1. prevent_destroy blocks planning, instead of excluding destruction of the flagged resource from the plan.
  2. prevent_destroy blocks planning immediately, so you can't actually see the plan that is being blocked by it.
  3. There is no CLI flag, overrides file, or interpolation method to temporarily disable prevent_destroy and see what the plan would be.
  4. There is no CLI flag, override, or interpolation to exclude prevent_destroy resources from planning; you instead have to --target every resource except the prevent_destroy ones. Pretty impractical if only a minority of resources are prevent_destroy - which I would guess is the most common case.
  5. With the module system, a single prevent_destroy in a third-party module can be pretty deadly to productivity.

JGailor commented 6 years ago

+1 here as well. I am a total newbie to Terraform working through a lot of examples, and this was a surprise to me; the idea of the lifecycle prevent_destroy is great, but I assumed it would just skip those resources as part of the terraform destroy command. I imagine other people might be surprised as well.

ap1969 commented 6 years ago

Hi, here's another use case: DNS entries.

I'm using a fairly long TTL to take advantage of DNS caching in the wider internet and to help mitigate any future issues from a DDoS attack on my host/DNS provider. The DNS entries are set up using the DigitalOcean provider, but I'm sure the same issue applies to other providers.

However, I would like to use the destroy/apply cycle to rebuild my environments when required. If I destroy all resources, the DNS entry and the virtual IP it points to are also destroyed, so even if Terraform rebuilds my environment in 10 minutes, the site is unavailable until the DNS caches flush, 24 hours later.

At the moment there is no way to easily get around this without manually destroying resources one at a time, trying to avoid accidentally destroying the DNS entries, which increases the manual effort, the chance of mistakes, and the time taken.

prevent_destroy just stops me from doing anything.

Regards, Andy