Closed t3ranservice closed 1 week ago
Just noticed that everything works if I pass the same environment variable to destroy as to apply (`-var environment=preprod`). That doesn't make sense to me, since destroy is supposed to work from the state files, isn't it?
Hi @t3ranservice, thanks for filing this. The destroy operation performs an internal refresh first, to ensure that the plan output contains the most up-to-date information about exactly what will be destroyed. I think this explains the behaviour you are seeing.
I wonder if running `terraform destroy -refresh=false` might work for you without specifying the variable? I believe this is working as intended, so I will close this ticket. Thanks again!
@liamcervante Hello Liam, thank you for the quick response. I don't believe this is normal behavior. Why would it fail to retrieve an attribute from the outputs of a different Terraform configuration during refresh, when the attribute is there and can be seen by viewing the .tfstate file manually?
It fails specifically during the refresh that runs as part of destroy, not during the destroy itself.
The values stored in the state for data sources are never actually used by Terraform; they are there as a legacy artifact for compatibility and debugging. Data sources must always be read again during execution in order to be used (`-refresh=false` does not affect data sources, it only applies to managed resources). If the remote state data source is reading new data during the destroy operation without the `connect_log_analytics_workspace` attribute, that is what will be evaluated at that time.
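The distinction can be sketched in configuration (the resource names here are hypothetical and the backend settings are abbreviated; only the shapes matter): `-refresh=false` lets Terraform reuse the stored state of a managed resource, while a data source is read fresh on every plan, apply, and destroy.

```hcl
# Managed resource: with -refresh=false, Terraform can reuse the values
# recorded in the state file instead of querying the provider again.
resource "azurerm_resource_group" "example" {
  name     = "rg-example" # hypothetical
  location = "westeurope"
}

# Data source: always read again during execution; the copy kept in the
# state file is informational only and never used for evaluation.
data "terraform_remote_state" "example" {
  backend = "azurerm"
  config = {
    key = "${var.environment}.tfstate" # re-evaluated on every run
    # ...other required backend settings omitted...
  }
}
```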
Just a quick follow up: specifically, the clarification that `terraform destroy -var environment=preprod` does work while `terraform destroy` on its own does not suggests that the specific value for that variable affects which data is being loaded during the operation, and would explain why the outputs from the remote_data object are different.
Sorry for my ignorance, but I am not following your ideas. My remote state does have the attribute. If you are saying data sources are always read and the latest information is pulled from there, then my reference `data.terraform_remote_state.data.outputs.connect_log_analytics_workspace.resource_id` should get resolved and I should not see an error, shouldn't it?
So my data source is pulled like that:

```hcl
data "terraform_remote_state" "data" {
  backend = "azurerm"

  config = {
    subscription_id      = "modified"
    resource_group_name  = "modified"
    storage_account_name = "modified"
    container_name       = "modified"
    key                  = "${var.environment}.tfstate"
  }
}
```
The environment defaults to "dev" and I am currently working with the "preprod" environment. Maybe, while Terraform does not actually use that value during refresh or destroy, it still evaluates it and fails because dev.tfstate indeed does not have the output; only preprod.tfstate has it. Think that's the issue?
I think that's what Liam said in his follow-up. Isn't it a bit confusing though? It should perform the destroy successfully based on the state file only, but it fails, and I now must pass a variable to destroy so it can pass the evaluation of that `data.terraform_remote_state` data source. Or my environments just have to be identical.
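One way to avoid the silent fallback (a sketch, not something from the thread) is to drop the default on the variable, so Terraform prompts for a value instead of quietly reading dev.tfstate:

```hcl
variable "environment" {
  type        = string
  description = "Deployment environment, e.g. dev or preprod."
  # No default: every operation, including destroy, must be given an
  # explicit value instead of silently falling back to "dev".
}
```

Alternatively, exporting `TF_VAR_environment=preprod` in the shell makes the value available to every Terraform command without repeating `-var` each time.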
Thanks for your help, the issue turned out to be pretty obvious
I think what might be missing from the picture you have is that variable values are not stored in state files. They need to be specified for every operation, including destroy and refresh.
So in the failing case, that data source is being loaded with `key = dev.tfstate`, which doesn't contain the required outputs. Then anything that references that data source tries to read the outputs, finds the output it expected doesn't exist, and we see the error.
In the passing case, the data source is loaded with `key = preprod.tfstate` and contains all the necessary outputs, and we see things working.
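If the two state files legitimately differ, Terraform's built-in `try()` function can make the reference fail soft instead of erroring out. A hedged sketch, based on the expression from the debug output:

```hcl
# Falls back to null when the selected state file (e.g. dev.tfstate)
# does not contain the connect_log_analytics_workspace output.
workspace_id = try(
  data.terraform_remote_state.data.outputs.connect_log_analytics_workspace.id,
  null,
)
```

Whether a null fallback is acceptable depends on whether the consuming resource tolerates it; in this thread, passing the right variable value was the actual fix.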
Yep, that's all making sense now, thank you
Terraform Version
Terraform Configuration Files
Debug Output
```
╷
│ Error: Unsupported attribute
│
│   on main.tf line 231, in module "app-service-windows":
│  231:   workspace_id = data.terraform_remote_state.data.outputs.connect_log_analytics_workspace.id
│     ├────────────────
│     │ data.terraform_remote_state.data.outputs is object with 6 attributes
│
│ This object does not have an attribute named "connect_log_analytics_workspace".
```
The output from 'data':

```hcl
data "terraform_remote_state" "data" {
  backend = "azurerm"
  config = {
    container_name       = "data"
    key                  = "modified"
    resource_group_name  = "modified"
    storage_account_name = "modified"
    subscription_id      = "modified"
  }
  outputs = {
    connect_log_analytics_workspace = {
      id = "modified"
    }
    connect_servicebus = {
      connection_string_reference = "modified"
      resource_id                 = "modified"
    }
    connect_sql_server = {
      connection_string_reference = "modified"
      resource_id                 = "modified"
    }
    # .... other outputs omitted
  }
}
```
Expected Behavior
The resource previously provisioned with `terraform apply`, with the help of a reference to the outputs of the remote 'data' state, should get destroyed as well.
Actual Behavior
On destroy, Terraform does not see the attribute in the outputs of `terraform_remote_state.data`, while it's clearly there (confirmed with `terraform output` in the data stack, and provided above). I've checked the state file itself and I can clearly see the attribute there.
Steps to Reproduce
```shell
terraform init -backend-config="key=modified.tfstate"
terraform apply -var environment=preprod
terraform destroy
```
Additional Context
No response
References
No response