simond-b2 opened this issue 1 year ago
@simond-b2 thanks for opening the issue! This error usually means that a value seen at the apply phase differs from the value recorded in the plan file. Since the problem involves timeouts, is there anything that overrides the timeout values after the plan file is generated? For further troubleshooting, I'd suggest turning on the debug log; it should contain more details about the issue.
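To illustrate the debug-log suggestion, here is a minimal sketch of enabling Terraform's verbose logging via environment variables (the log file path is an arbitrary example, not taken from this issue):

```shell
# Enable verbose Terraform logging before running plan/apply.
# TF_LOG accepts TRACE, DEBUG, INFO, WARN, ERROR; DEBUG is usually enough.
export TF_LOG=DEBUG

# Optional: write the log to a file instead of stderr (example path).
export TF_LOG_PATH=./terraform-debug.log

# Then run the usual workflow; the log will capture provider request/response
# details that can show where the planned and applied values diverge, e.g.:
#   terraform plan -out=target/terraform.plan
#   terraform apply -input=false target/terraform.plan
```

Remember to unset `TF_LOG` afterwards, as debug logs are very verbose and may contain sensitive values.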
Thanks for the feedback, so there is no override value after the plan is generated and we thought that we had a work-around for this by applying default timeout values explicitly like so:
```hcl
resource "azurerm_dns_a_record" "compose-apex" {
  zone_name           = azurerm_dns_zone.compose.name
  resource_group_name = local.rg_name
  name                = "@"
  ttl                 = 600
  records             = [data.azurerm_public_ip.compose-load-balancer.ip_address]
  tags                = local.tags

  /* The azurerm provider is throwing an error post Apply stage. Defining these defaults
   * should stop the provider from erroneously throwing an error due to timeout differences.
   */
  timeouts {
    create = "30m"
    update = "30m"
    read   = "5m"
    delete = "30m"
  }
}
```
This worked without issue for 11 days across multiple deploys to multiple environments. Today, however, we see the same error message, but this time it is inverted: previously it was '.timeouts: was absent, but now present'; now it is '.timeouts: was present, but now absent'.
```
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.itron.azurerm_dns_a_record.compose-apex
│ to include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/azurerm" produced an invalid new value for
│ .timeouts: was present, but now absent.
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
```
Note that this error message seems much more transient than the previous condition, which was generated consistently. We have also noticed that the plan stage is scheduling a change to the timeouts block even though the settings for the resource have not been changed.
```
  # module.itron.azurerm_dns_a_record.compose-apex will be updated in-place
  ~ resource "azurerm_dns_a_record" "compose-apex" {
        id      = "/subscriptions/****/resourceGroups/<RG NAME>/providers/Microsoft.Network/dnsZones/<DNSZONE>/A/@"
        name    = "@"
      ~ records = [
          - "<IP>",
        ] -> (known after apply)
        tags    = {
            "billing"   = "devtest"
            "bu"        = "<BU>"
            "cust_name" = "<CUSTOMER>"
            "env_name"  = "<ENVIRONMENT>"
        }
        # (4 unchanged attributes hidden)

      - timeouts {}
    }
```
@simond-b2 this might be an issue within Terraform itself (see https://github.com/hashicorp/terraform-provider-aws/issues/28191#issuecomment-1634933084), though that is not yet clear. While we do further investigation, I'd suggest trying the latest Terraform version to see if it resolves the issue.
@myc2h6o fyi, we have updated to Terraform v1.5 and continue to see random failures, although the warning output has now changed. We've been monitoring our deploys over the last few cycles, and the warning seems to be triggered on every other deployment. I don't know if this will help, but this is the pattern that has emerged (these deployments have a 3-week gap between them):
- TF v1.3.9, azurerm v3.68.0: no warning log; the deployment is clean
- TF v1.3.9, azurerm v3.71.0: the below warning log was tripped
- TF v1.5.7, azurerm v3.73.0: no warning log; the deployment is clean
- TF v1.5.7, azurerm v3.75.0: the below warning log was tripped
```
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.itron.azurerm_dns_a_record.compose-apex
│ to include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/azurerm" produced an invalid new value for
│ .timeouts.create: was cty.StringVal("30m"), but now null.
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.

│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.itron.azurerm_dns_a_record.compose-apex
│ to include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/azurerm" produced an invalid new value for
│ .timeouts.delete: was cty.StringVal("30m"), but now null.
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.

│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.itron.azurerm_dns_a_record.compose-apex
│ to include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/azurerm" produced an invalid new value for
│ .timeouts.read: was cty.StringVal("5m"), but now null.
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.

│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.itron.azurerm_dns_a_record.compose-apex
│ to include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/azurerm" produced an invalid new value for
│ .timeouts.update: was cty.StringVal("30m"), but now null.
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
```
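Given the alternating version pattern above, one mitigation sketch is to pin the provider to a version that produced clean runs (v3.73.0 is taken from the deployments reported as clean; the pinning syntax is standard Terraform, but whether pinning actually sidesteps this bug is unverified):

```hcl
# Hypothetical pin to a provider version that produced clean runs in the
# pattern above. This only freezes the version; it does not fix the
# underlying provider bug.
terraform {
  required_version = ">= 1.5.7"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.73.0"
    }
  }
}
```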
@myc2h6o A slight variation on the same theme
Terraform v1.5.7 hashicorp/azurerm v3.77.0
```
╷
│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.itron.azurerm_dns_a_record.compose-apex
│ to include new values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/azurerm" produced an invalid new value for
│ .timeouts: was present, but now absent.
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
╵
```
Terraform Version
1.3.9
AzureRM Provider Version
3.65.0
Affected Resource(s)/Data Source(s)
azurerm_dns_a_record
Terraform Configuration Files
Debug Output/Panic Output
Expected Behaviour
The Terraform deployment should complete without generating an error condition.
Actual Behaviour
Terraform apply generated an error condition when producing the 'final plan'.
Steps to Reproduce
```shell
terraform apply '-input=false' target/terraform.plan
```
Important Factoids
We have 6 deployments using the same Terraform configuration (with different data for different DNS zones, etc.), and only one of them is generating this error.
References
No response