hashicorp / terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
https://www.terraform.io/

Add support for lifecycle meta-argument in modules #27360

Open elliott-weston-cko opened 3 years ago

elliott-weston-cko commented 3 years ago

Current Terraform Version

Terraform v0.14.3

Use-cases

Terraform currently only allows the lifecycle meta-argument to be used within the declaration of a resource. It would be really useful if users were able to specify lifecycle blocks in modules that can then be applicable to some/all of the resources within that module.

The main use-case I have is being able to use ignore_changes to instruct Terraform to ignore changes to resources, or to particular attributes of resources.

Proposal

For example, let's assume I create a Terraform module to be used in AWS, and as part of that module I create a DynamoDB table. DynamoDB tables (among other resources) can autoscale, and the autoscaling configuration is defined by a separate resource. Consequently, a lifecycle block must be used to prevent the resource that creates the DynamoDB table from modifying the read/write capacity.
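For context, a minimal sketch of what the inside of such a module looks like today (resource and variable names here are hypothetical), with the lifecycle block hard-coded where only the module author can change it:

```hcl
resource "aws_dynamodb_table" "table" {
  name           = var.name
  billing_mode   = "PROVISIONED"
  hash_key       = var.hash_key
  read_capacity  = 5
  write_capacity = 5

  attribute {
    name = var.hash_key
    type = "S"
  }

  lifecycle {
    # Hard-coded: module callers cannot opt in or out of this, which is
    # what this issue proposes to change.
    ignore_changes = [read_capacity, write_capacity]
  }
}

# Capacity is then managed by a separate autoscaling resource.
resource "aws_appautoscaling_target" "read" {
  max_capacity       = 100
  min_capacity       = 5
  resource_id        = "table/${aws_dynamodb_table.table.name}"
  scalable_dimension = "dynamodb:table:ReadCapacityUnits"
  service_namespace  = "dynamodb"
}
```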

In this scenario I currently have to choose either to support autoscaling or not to support autoscaling, as I cannot define a lifecycle block with the ignore_changes argument from the calling side. Ideally, I'd like to be able to do something like this:

module "my-module" {
  source = "./my-module/"
  name = "foo-service"

  hash_key = "FooID"
  attributes = [
    {
      name = "FooID"
      type = "S"
    }
  ]
  lifecycle {
    ignore_changes = [
      aws_dynamodb_table.table.read_capacity,
      aws_dynamodb_table.table.write_capacity
    ]
  }
}

Being able to apply lifecycle blocks in the way shown above would enable me to manage the attributes of this resource outside of this module (whether via some automated process, or via another resource/module definition), and would allow more people to use this module, as it would cover a wider range of use-cases.

The documentation states that the lifecycle block can only support literal values. I'm unsure whether my proposal would fall under that restriction, as it refers to resources (and possibly attributes) that are created within the module itself 🤔

References

jleloup commented 3 years ago

I am also interested in such a feature, though in my case it would be to use a prevent_destroy lifecycle directive.

jaceklabuda commented 3 years ago

It would be very useful with the AWS RDS module.

module "db" {
  source = "terraform-aws-modules/rds/aws"
  ...
  snapshot_identifier = "..."
  password = "..."

  lifecycle {
    ignore_changes = [
      snapshot_identifier,
      password
    ]
  }
  ...
}

ibacalu commented 3 years ago

This would be a great feature, especially now that Terraform Modules support for_each

rjcoelho commented 3 years ago

My main use case is prevent_destroy on DDB and S3; both hold persistent end-user data that I want to protect against accidental replacement of objects.

Shocktrooper commented 3 years ago

Good addition; as more and more people are starting to use modules like resources, being able to use the lifecycle block at the module level would be amazing.

chancez commented 3 years ago

It feels like having lifecycle blocks support dynamic configuration in general would be better than adding support for lifecycle blocks in modules. It would mean modules wouldn't need special support for this; instead, variables and custom logic could be used to set different lifecycle options on resources inside the module (ensuring you can encapsulate the logic, which the approach suggested in this ticket doesn't allow for).
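To illustrate, "dynamic configuration" would mean something like the following, which Terraform rejects today because lifecycle arguments accept only literal values (a sketch of invalid syntax; resource names are illustrative):

```hcl
variable "protect" {
  type    = bool
  default = true
}

resource "aws_s3_bucket" "data" {
  bucket = "example-bucket"

  lifecycle {
    # NOT valid today: prevent_destroy must be a literal true/false.
    prevent_destroy = var.protect
  }
}
```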

nitmatgeo commented 3 years ago

Hi @antonbabenko, any possibility of considering the request below? https://github.com/hashicorp/terraform/issues/28913

jbcom commented 3 years ago

This would also be incredibly helpful for preventing things from ever being part of a destroy

ChristianPresley commented 3 years ago

Please add this functionality. Modules are severely limited if you can't use lifecycle metadata when calling them.

rumeshbandara commented 3 years ago

This would be a very useful feature; currently one has to edit the module definition itself to support lifecycle ignores.

openPablo commented 3 years ago

I also ran into issues with this using aminueza/terraform-provider-minio

DevOpsJon commented 3 years ago

+1, was really shocked to discover this isn't available at the module level.

movergan commented 3 years ago

+1 super key feature for resources like KMS

devpikachu commented 3 years ago

+1, absolutely necessary feature, especially to prevent deletion of certain resources, as others have mentioned above.

OGProgrammer commented 3 years ago

+1, just ran into this and was also shocked it's not here. If I had the time, I'd look into contributing this change. My use case is just like @jaceklabuda's, but for engine_version, since I have auto-update enabled.

module "rds" {
  source = "terraform-aws-modules/rds/aws"
  ...
  engine_version = "5.7.33"

  lifecycle {
    ignore_changes = [
      engine_version
    ]
  }
  ...
}

antonbabenko commented 3 years ago

@OGProgrammer You can set engine_version = "5.7" instead of "5.7.33" in the RDS module you are using. This will prevent it from showing a diff every time the patch version is updated. See the aws_db_instance docs for engine_version.
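In module terms, that suggestion looks roughly like this (a sketch: `engine` and the elided arguments are illustrative, and auto minor version upgrades are assumed to be enabled so AWS applies patch releases itself):

```hcl
module "rds" {
  source = "terraform-aws-modules/rds/aws"

  # Pin only major.minor; the provider then accepts whatever patch level
  # AWS reports without showing a diff.
  engine         = "mysql"
  engine_version = "5.7" # instead of "5.7.33"

  auto_minor_version_upgrade = true
  # ... other arguments elided, as in the snippet above
}
```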

ghost commented 3 years ago

Just sharing my experience here, in case it helps :) If you do not set the complete version (major + minor + patch), AWS always offers the latest patch release. That means if the version is set to 5.7 and the latest version offered by AWS at deployment time is 5.7.30, that is what gets installed; the next time you deploy the same package, if AWS is offering 5.7.35 (because new patches were published), Terraform will show a diff, and applying the change usually leads to an outage unless you have set scheduled maintenance windows (which prevent out-of-window patch upgrades). So I also think setting exact versions is better than ignoring them via a lifecycle block, because it makes troubleshooting easier. It is best to update the patch versions used in the code during regular maintenance periods.

aidan-mundy commented 2 years ago

A barebones implementation of the prevent_destroy for modules should prevent destruction of the module itself (via a terraform destroy command), not destruction of resources inside it.

Additional work to allow resource specific lifecycles within the module, or to prevent all resources in the module from being destroyed would be nice as well, but I don't see them as immediately essential.

BHSDuncan commented 2 years ago

In case it helps: This would also be helpful for blue/green deployments where there's a 50% chance of the primary listener having its default_action updated with the wrong target group (in the case of having two TGs). Namely in the terraform-aws-modules/alb/aws module. Using the module beats having to manage several different TF resources.

stephenh1991 commented 2 years ago

For anyone who encounters this issue and wants to protect module resources, we were able to find a bit of a hacky but workable solution within a wrapper module using:

resource "null_resource" "prevent_destroy" {
  count = var.prevent_destroy ? 1 : 0

  depends_on = [
    module.s3_bucket ## this is the official aws s3 module
  ]

  triggers = {
    bucket_id = module.s3_bucket.s3_bucket_id
  }

  lifecycle {
    prevent_destroy = true
  }
}

So far it seems to be a one-way flag that can't be turned off, but it works well to protect buckets where content recovery would be a lengthy and disruptive task.
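A call to such a wrapper might then look like this (the module path and variable names are assumptions of this sketch, not part of any official module):

```hcl
module "protected_bucket" {
  source = "./modules/s3-wrapper" # hypothetical wrapper around the official S3 module

  bucket_name     = "example-data-bucket"
  prevent_destroy = true # gates the null_resource guard shown above
}
```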

nlitchfield commented 2 years ago

We also could really do with this feature. We have a reasonably extensive library of terraform modules wrapping everything from EC2 instances to application stacks. Taking the EC2 module as an example we use a data source like the example from the docs to supply a "latest" ami at build time

data "aws_ami" "example" {
  most_recent = true

  owners = ["self"]
  tags = {
    Name   = "app-server"
    Tested = "true"
  }
}

Most of our infrastructure is immutable, so a later AMI results in recreation of any EC2 instances sourced from the module, but for some infrastructure we'd like to use ignore_changes for the AMI, as you can with a resource. This proposal would make achieving that much easier.
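For comparison, the ignore_changes that works today has to live on the resource inside the module; a minimal sketch (the instance arguments are illustrative):

```hcl
resource "aws_instance" "app" {
  ami           = data.aws_ami.example.id
  instance_type = "t3.micro"

  lifecycle {
    # Keep the instance on the AMI it was created with, even when the
    # data source later resolves to a newer image.
    ignore_changes = [ami]
  }
}
```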

speller commented 2 years ago

I want to parametrize the create_before_destroy value of an EC2 instance in a module. Modules for different use cases will have different create_before_destroy behavior.

boopzz commented 2 years ago

I currently stand up Azure infrastructure, namely vnets and subnets, through a module. We have custom route tables that have a Next Hop IP set. With deployed NVAs like FortiGates and F5s, the next hop IP changes depending on which device is currently active. When a pipeline runs, it sees the changes and reverts them. Not every scenario is like this, so I would rather place a lifecycle block in the module call for a specific repo than in the module for everyone.

reify-tanner-stirrat commented 2 years ago

I just ran into this with a custom module that we're using to wrap AWS. We see a lot of noise around secret values, and it'd be nice to tell Terraform not to worry about them.

BHSDuncan commented 2 years ago

Has there been any movement or thought about this from the devs? I still can't find a workaround for the primary listener of an ALB having its default_action updated in-place with the wrong target group (in the case of having two TGs), using one of the AWS TF modules. (If anyone knows of a workaround I'd be very interested--I don't think preventing destruction works, though.)

mldevpants commented 2 years ago

Hmm, I had to use a workaround so that EC2 instances would not be replaced by a dynamically provided AMI ID, but I don't like it. The use of modules is widespread and lifecycle is pretty basic. I tried some public modules (terraform-aws-modules by @antonbabenko, kudos for his hard work), but they seem to lack some basics we rely on on-premise: in our own modules we use lifecycle inside a resource within the module, and it cannot be provided as an argument to the module. I would like to see ignore_changes be providable dynamically through the module.

khushboomittal34 commented 2 years ago

It has been a while since this was opened; is there any workaround or ETA? My use case is to ignore the desired_size of an EKS managed node group's scaling config so that it does not tear down my nodes whenever there's an update to the infra config. While I can do this through https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/eks_node_group#ignoring-changes-to-desired-size, there is still no way of doing it if I am using the AWS module to create the EKS cluster.
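The resource-level workaround in the linked documentation looks roughly like this (the cluster, role, and subnet references are illustrative):

```hcl
resource "aws_eks_node_group" "example" {
  cluster_name    = aws_eks_cluster.example.name
  node_group_name = "example"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = var.subnet_ids

  scaling_config {
    desired_size = 2
    max_size     = 4
    min_size     = 1
  }

  lifecycle {
    # Let the cluster autoscaler manage desired_size without Terraform
    # reverting it on each apply.
    ignore_changes = [scaling_config[0].desired_size]
  }
}
```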

thiagolsfortunato commented 2 years ago

Any news?

gnokoheat commented 2 years ago

This is needed to make modules universally usable.

zamialloy commented 2 years ago

+1 this would make life so much better

glaracuente commented 2 years ago

+1 this would be very helpful

Pwd9000-ML commented 2 years ago

+1 Very needed and will be extremely useful!

crw commented 2 years ago

Just a reminder to please use the 👍 reaction on the original post to upvote issues - we do sort by most upvoted to understand which issues are the most important. This also reduces "noise" in the notification feed for folks following this issue. Thanks!

jpratt3000 commented 2 years ago

@crw is there a threshold needed to get this enhancement going? this issue has been open a while

crw commented 2 years ago

We have been maintaining a list of the Top 25 most-upvoted/commented issues. It isn't obvious from the changelog but we have been picking off a Top 25 issue (usually more than one) per release. This issue is attached to one of the Top 25 (often a few issues are thematically or systematically adjacent), but I have no update on it beyond that.

dharada1 commented 2 years ago

Here is a similar issue with 132 :+1: upvotes: https://github.com/hashicorp/terraform/issues/21546

If the votes on the two issues are summed, this would rise to the top 10. https://github.com/hashicorp/terraform/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc

yerbo1 commented 1 year ago

+1 for the important feature.

nsvijay04b1 commented 1 year ago

+1 for feature

sebastianrogers commented 1 year ago

+1 for the feature

We use Azure Policy to set tags at a resource Group level so we want to tell Terraform to ignore all changes to these tags at a resource level.

For example:

module "my-module" {
  source = "./my-module/"
  name = "foo-service"

  lifecycle {
    ignore_changes = [
      tags["CostCenter"],
      tags["ProjectName"]
    ]
  }
}

rhysjtevans commented 1 year ago

+1 for feature

DamianArmitage commented 1 year ago

> We also could really do with this feature. We have a reasonably extensive library of terraform modules wrapping everything from EC2 instances to application stacks. Taking the EC2 module as an example we use a data source like the example from the docs to supply a "latest" ami at build time
>
> data "aws_ami" "example" {
>   most_recent = true
>
>   owners = ["self"]
>   tags = {
>     Name   = "app-server"
>     Tested = "true"
>   }
> }
>
> Most infrastructure is immutable so a later AMI results in a recreation of any EC2 instances sourced from the module, but some infra we'd like to use ignore_changes for the AMI like you might with a resource. This proposal would make achieving that much easier.

My use case is exactly this. Please implement this feature

crw commented 1 year ago

Thanks for your enthusiasm on this feature request! To respect the GitHub notifications queue of everyone on this thread, please use the 👍 emoji on the original issue description to indicate your support for this issue. Please avoid using "+1" comments as they spam everyone else on the thread. Thanks for your consideration!

gruckion commented 1 year ago

Hi team, what is the priority of this item? I don't see any mention of it being on the backlog or roadmap.

This is a very important feature as some resources take quite some time to replace and replacing the database is not good in production.

It's been 2 years.

chefcai commented 1 year ago

The most we have @gruckion is what @crw said above: top 25 is on their hit list. There is no official roadmap from what I know.

Upvote and hope.

abdulalloy commented 1 year ago

+1, this would make using modules easier and more flexible.

crw commented 1 year ago

@gruckion @chefcai This is correct; this specific issue is not on the roadmap for 1.4 at this time, but it is on our radar as a top issue.

PauloColegato commented 1 year ago

+1 for the feature

We use Azure Policy to set tags at a resource Group level so we want to tell Terraform to ignore all changes to these tags at a resource level.

For example:

module "my-module" {
  source = "./my-module/"
  name = "foo-service"

  lifecycle {
    ignore_changes = [
      tags["CostCenter"],
      tags["ProjectName"]
    ]
  }
}

@sebastianrogers change the policy effect to modify, add a system-assigned managed identity with rights to change tags back if changed, and walk away; job's a good 'un ;-)

sebastianrogers commented 1 year ago

Hi @PauloColegato we do that but

  1. We are doing automatic drift detection, so this means every single Azure Resource below Resource Group level is reported as having drifted from its definition by Terraform.
  2. The SOC notices unplanned changes to Azure Resources and flags them up as suspicious activity.
  3. Some of these tags have owners who are allowed to change them, our governance makes this explicit, so in this case when Terraform is run it changes them back again.

In essence, tags are managed by Azure Policy and not Terraform; we need to be able to simply tell Terraform that it is not concerned with them at all.

Hope this makes clear why your suggestion does not result in 'bish, bosh, job's a good'un' but rather 'don't worry, just use even more gaffer tape' :)
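For reference, the only place this exclusion can live today is inside the module source itself, for example on an azurerm resource (the resource type and arguments here are illustrative):

```hcl
resource "azurerm_storage_account" "example" {
  name                     = "examplestorageacct"
  resource_group_name      = var.resource_group_name
  location                 = var.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  lifecycle {
    # Tags are owned by Azure Policy, so Terraform should never touch them.
    ignore_changes = [tags]
  }
}
```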

PauloColegato commented 1 year ago

> Hi @PauloColegato we do that but
>
>   1. We are doing automatic drift detection, so this means every single Azure Resource below Resource Group level is reported as having drifted from its definition by Terraform.
>   2. The SOC notices unplanned changes to Azure Resources and flags them up as suspicious activity.
>   3. Some of these tags have owners who are allowed to change them, our governance makes this explicit, so in this case when Terraform is run it changes them back again.
>
> In essence Tags are managed by Azure Policy and not Terraform we need to be able to simply tell Terraform that it is not concerned with them at all.
>
> Hope this makes clear why you suggestion does not result in 'bish, bosh, job's a good'un' but rather 'don't worry just use even more gaffer tape' :)

@sebastianrogers I do love the gaffer tape approach!

There is another policy that tags resources; resource groups have a separate policy.

Remove the config that makes resources inherit tags from the RG, deploy a resource-level policy matching the RG policy, and watch people get bored changing resource tags, as it will change them right back (they can add additional tags no problem)!

Just remember to tag your resources, or each TF run will wipe the tags, which is a drawback of this approach.

Lastly, tell your SOC to amend their alerts, and shhhhhhhhhhhhhhh, it's only tags :)

nlitchfield commented 1 year ago

" Lastly tell you SOC to shhhhhhhhhhhhhhh its only tags :)"

You can only take that approach if you don't use tags for sensitive purposes (what application the resource is associated with, who the owner is, whether the resource is subject to patching, etc.).
