hashicorp / terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
https://www.terraform.io/

Resource node has no configuration attached when using "apply planfile" #21515

Closed cappetta closed 4 years ago

cappetta commented 5 years ago

I'm seeing 2 issues in CI after upgrading to v0.12. I'm not sure whether they are both related to the apply graph builder, but I'm sharing the CI/CD build link.

I believe this is reproduced by `terraform plan -out <file>` followed by `terraform apply <outfile>`. I've reproduced it locally; after updating my CircleCI config to replace that sequence with a plain `terraform apply -auto-approve`, the build is passing again.
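The two command sequences being compared, sketched as shell (the plan filename is a placeholder):

```shell
# Failing: save a plan to a file, then apply the saved plan file
terraform plan -out tfplan
terraform apply tfplan

# Passing: plan and apply in a single step
terraform apply -auto-approve
```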

```
Error: Resource node has no configuration attached

The graph node for module.staging-infrastructure.module.secdevops.aws_instance.sc[0] has no configuration attached to it. This suggests a bug in Terraform's apply graph builder; please report it!
```

And this error about a non-empty state:

```
Error: orphan resource module.staging-infrastructure.module.secdevops.data.aws_ami.seed_ubuntu1604 still has a non-empty state after apply; this is a bug in Terraform
```

unbkbl commented 5 years ago

I have the same problem in a CI/CD pipeline running in AzureDevops. I use the Terraform task from https://marketplace.visualstudio.com/items?itemName=ArkiaConsulting.terraform-extension

When I run a terraform plan and terraform apply from my laptop, I don't have that problem.

I'm also using v0.12

```
Error: orphan resource module.dc_uat.module.dataprovider-engine-publisher.azurerm_function_app.function still has a non-empty state after apply; this is a bug in Terraform

Error: orphan resource module.dc_uat.module.uefa-integration.azurerm_app_service.app_service still has a non-empty state after apply; this is a bug in Terraform
```

apparentlymart commented 5 years ago

Hi @cappetta, @unbkbl! Sorry for these strange errors, and thanks for reporting them.

If either of you are able to reproduce this when running Terraform with the environment variable `TF_LOG=trace` set, it'd be helpful to be able to see that full trace log (via a gist, because traces are too long/verbose for GitHub comments) so we can understand better how Terraform got itself into that "should never happen" situation.
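For reference, capturing a trace log to a file typically looks like this (filenames are placeholders; `TF_LOG_PATH` directs the log output to a file instead of the terminal):

```shell
TF_LOG=trace TF_LOG_PATH=plan-trace.log terraform plan -out tfplan
TF_LOG=trace TF_LOG_PATH=apply-trace.log terraform apply tfplan
```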

cappetta commented 5 years ago

No worries, glad to share the trace logging - here's a gist

apparentlymart commented 5 years ago

Thanks @cappetta! We'll take a deeper look at this soon.

The first thing that seems strange to me is that the error message is talking about an orphan resource but the address it gives is for a data resource (with data. prefix) rather than a managed resource, and that is curious because the idea of "orphan" (in this sense, at least) is supposed to apply only to managed resources that have been removed from the configuration since last apply.

I suspect (but have not yet confirmed) that the main error here is the one about the node having no configuration attached, and then this other error is showing up because the first error prevented Terraform from properly processing the data resource. We'll see when we debug further whether that is accurate.

andresvia commented 5 years ago

@apparentlymart I have uploaded a second trace, thanks.

The error seems similar to the one the OP is reporting, but it is probably something different; let me know if you want me to create a separate issue.

```
2019/05/30 21:45:16 [ERROR] module.uw2.module.core.module.config.module.live: eval: *terraform.EvalForgetResourceState, err: orphan resource module.uw2.module.core.module.config.module.live.data.aws_caller_identity.live still has a non-empty state after apply; this is a bug in Terraform

2019/05/30 21:45:16 [ERROR] module.ue1.module.core: eval: *terraform.EvalDiff, err: Reference to undeclared module: The configuration contains no module.ue1.module.core.module.config.

2019/05/30 21:45:16 [ERROR] module.ue1.module.core: eval: *terraform.EvalSequence, err: Reference to undeclared module: The configuration contains no module.ue1.module.core.module.config.
```

It is also reproducible by applying a saved plan file, and likewise goes away when running apply with `-auto-approve`.

apparentlymart commented 5 years ago

Hi @andresvia! Thanks for reporting that. Indeed, it does seem like something a little different, so let's open a new issue for that one and we can try to debug it separately.

jogleasonjr commented 5 years ago

Hi @apparentlymart

I am experiencing the same issue deploying a small piece of Azure infrastructure. Here's a gist of the output. Happy to provide any other information you need.

The `-auto-approve` switch didn't circumvent the problem, but skipping the `-out` parameter did.

simoncocking commented 5 years ago

I've raised an issue (which appears to be a dupe of this one) with full trace/plan outputs:

https://github.com/hashicorp/terraform/issues/21624

bohdanyurov-gl commented 5 years ago

This also affects us, across dozens of projects. Only one module is affected though; I'm going to check what exactly leads to this error.

I can also confirm that omitting the plan argument (re-planning during apply) helps.

bohdanyurov-gl commented 5 years ago

I've found the root cause for one of our modules:

```hcl
variable "ansible_variables" {
  type    = map(string)
  default = []

  description = "Map of additional ansible variables"
}
```

changed to:

```hcl
variable "ansible_variables" {
  type    = map(string)
  default = {}

  description = "Map of additional ansible variables"
}
```

@apparentlymart After I changed this input variable to the correct default value, the issue went away (for this particular module). I am curious why this single line broke the state of the whole module subtree in the plan.

bohdanyurov-gl commented 5 years ago

I've managed to find another cause in a second module: duplicated outputs. Removing the duplicate fixed the whole module. It looks like Terraform somehow hides the real error message and destroys the whole submodule plan.
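A hypothetical illustration of the kind of duplication described (the output name and resource are invented; normally Terraform 0.12 would report this directly as a duplicate definition error rather than the orphan-resource errors above):

```hcl
# outputs.tf — two output blocks sharing the same name
output "instance_ip" {
  value = aws_instance.example.private_ip
}

output "instance_ip" {
  value = aws_instance.example.public_ip
}
```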

markmsmith commented 5 years ago

I'm seeing this problem as well. You can reproduce it using the RDS Postgres example found here: https://github.com/terraform-aws-modules/terraform-aws-rds/tree/master/examples/complete-postgres
I raised an issue on that project, but it seems like it may actually be a terraform 0.12 bug, since converting the same example to 0.11 syntax and running it with terraform 0.11.14 works correctly. You can see my example with log output referenced from the issue here: https://github.com/terraform-aws-modules/terraform-aws-rds/issues/134

poolski commented 5 years ago

I also have this issue when upgrading from 0.11.13 to 0.12.3. Nothing changed in our infrastructure, but a `terraform apply` returns a string of these errors.

This isn't an apply from a plan file, just a plain CLI `terraform apply` with no other arguments.

```
...
Error: orphan resource module.development.module.service1.aws_launch_configuration.this still has a non-empty state after apply; this is a bug in Terraform

Error: orphan resource module.development.module.service2.aws_autoscaling_group.this still has a non-empty state after apply; this is a bug in Terraform

Error: orphan resource module.development.module.service3.aws_launch_configuration.this still has a non-empty state after apply; this is a bug in Terraform

Error: orphan resource module.development.module.service4.aws_launch_configuration.this still has a non-empty state after apply; this is a bug in Terraform
...
```
mfortin commented 5 years ago

Also observing the same. Using Terraform 0.12.3; it was working fine with Terraform 0.11.14.

```
Error: orphan resource module.cluster.module.batch_environment.aws_iam_role_policy_attachment.batch-instance-policy-role-attachment still has a non-empty state after apply; this is a bug in Terraform

Error: Resource node has no configuration attached

The graph node for
module.cluster.module.batch_environment.aws_iam_role_policy_attachment.batch-spot-fleet-service-policy-attachment[0]
has no configuration attached to it. This suggests a bug in Terraform's apply
graph builder; please report it!

Error: orphan resource module.cluster.module.batch_environment.aws_iam_role.batch-instance-role still has a non-empty state after apply; this is a bug in Terraform

Error: Resource node has no configuration attached

The graph node for
module.cluster.module.batch_environment.aws_batch_job_queue.cloudos-batch-spot-queue[0]
has no configuration attached to it. This suggests a bug in Terraform's apply
graph builder; please report it!

Error: orphan resource module.cluster.module.batch_environment.aws_iam_role_policy_attachment.batch-service-policy-attachment still has a non-empty state after apply; this is a bug in Terraform

Error: orphan resource module.cluster.module.batch_environment.aws_batch_compute_environment.cloudos-batch-env still has a non-empty state after apply; this is a bug in Terraform

Error: orphan resource module.cluster.module.batch_environment.aws_iam_role.batch-service-role still has a non-empty state after apply; this is a bug in Terraform

Error: Resource node has no configuration attached

The graph node for
module.cluster.module.batch_environment.aws_launch_template.batch_launch_template_spot[0]
has no configuration attached to it. This suggests a bug in Terraform's apply
graph builder; please report it!

Error: orphan resource module.cluster.module.batch_environment.aws_launch_template.batch_launch_template still has a non-empty state after apply; this is a bug in Terraform

Error: Resource node has no configuration attached

The graph node for
module.cluster.module.batch_environment.aws_iam_role.batch-spot-fleet-service-role[0]
has no configuration attached to it. This suggests a bug in Terraform's apply
graph builder; please report it!

Error: orphan resource module.cluster.module.batch_environment.aws_iam_instance_profile.batch-instance-role-profile still has a non-empty state after apply; this is a bug in Terraform

Error: Resource node has no configuration attached

The graph node for
module.cluster.module.batch_environment.aws_batch_compute_environment.cloudos-batch-env-spot[0]
has no configuration attached to it. This suggests a bug in Terraform's apply
graph builder; please report it!

Error: orphan resource module.cluster.module.batch_environment.aws_batch_job_queue.cloudos-batch-queue still has a non-empty state after apply; this is a bug in Terraform

Error: orphan resource module.cluster.module.batch_environment.aws_iam_policy.batch-instance-policy still has a non-empty state after apply; this is a bug in Terraform

Error: orphan resource module.cluster.module.batch_environment.aws_iam_role_policy_attachment.batch-instance-role-policy-attachment still has a non-empty state after apply; this is a bug in Terraform
```

All of the resources above in this module use a count to enable/disable creation.

This is the chunk of code creating this resource:

```hcl
variable "batch_compute_environment" {
  type = "map"
  default = {
    "enabled" = true
  }
}

resource "aws_iam_instance_profile" "batch-instance-role-profile" {
  count = "${var.batch_compute_environment["enabled"] ? 1 : 0}"
  name  = "${join("", aws_iam_role.batch-instance-role[*].name)}"
  role  = "${join("", aws_iam_role.batch-instance-role[*].name)}"
}
```
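As a side note, the resource above still uses 0.11-style quoted interpolations; the idiomatic 0.12 form of the same resource drops them (a sketch, using the same resource and variable names):

```hcl
resource "aws_iam_instance_profile" "batch-instance-role-profile" {
  # First-class expressions in 0.12: no "${...}" wrappers needed
  count = var.batch_compute_environment["enabled"] ? 1 : 0
  name  = join("", aws_iam_role.batch-instance-role[*].name)
  role  = join("", aws_iam_role.batch-instance-role[*].name)
}
```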

This is expected:

```
$ terraform state list | grep batch-instance-role-profile
module.cluster.module.batch_environment.aws_iam_instance_profile.batch-instance-role-profile[0]
```

Here is what the state has:

```
$ terraform state show module.cluster.module.batch_environment.aws_iam_instance_profile.batch-instance-role-profile
No instance found for the given address!

This command requires that the address references one specific instance.
To view the available instances, use "terraform state list". Please modify
the address to reference a specific instance.
$ terraform state show module.cluster.module.batch_environment.aws_iam_instance_profile.batch-instance-role-profile[0]
# module.cluster.module.batch_environment.aws_iam_instance_profile.batch-instance-role-profile[0]:
resource "aws_iam_instance_profile" "batch-instance-role-profile" {
    arn         = "arn:aws:iam::ACCOUNTID:instance-profile/batch-instance-role"
    create_date = "2019-01-01T00:00:00Z"
    id          = "batch-instance-role"
    name        = "batch-instance-role"
    path        = "/"
    role        = "batch-instance-role"
    roles       = [
        "batch-instance-role",
    ]
    unique_id   = "UNIQUEID"
}
```

sroze commented 5 years ago

Same is happening here with 0.12.3. `terraform apply -auto-approve` made it work (vs `terraform plan --out X && terraform apply X`).

mfortin commented 5 years ago

Thanks @sroze, this helped me figure out 2 bugs I had in my Terraform module. I now only get `Error: orphan resource module.cluster.module.batch_environment.resource.name still has a non-empty state after apply; this is a bug in Terraform` when I run `terraform plan --out PLAN && terraform apply PLAN`.

alexpekurovsky commented 5 years ago

Sharing my solution: https://github.com/hashicorp/terraform/issues/21624#issuecomment-509141859

The issue is in the module; Terraform just doesn't tell you exactly where it is, and fails all resources.

DMonCode commented 4 years ago

Ran into the same issue with Terraform Enterprise. I then tried to run the job from my workstation. If I run `terraform plan -out MYPLAN` and then `terraform apply MYPLAN`, it fails. If I just run `terraform apply`, it succeeds.

ghost commented 4 years ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.