Thank you for reporting this issue! For maintainers to dig into issues, all issues must include the entirety of the `TF_LOG=DEBUG` output. The only parts that should be redacted are your user credentials in the `X-Auth-Key`, `X-Auth-Email` and `Authorization` HTTP headers. Details such as zone or account identifiers are not considered sensitive, but can be redacted if you are very cautious. This log file provides additional context from Terraform, the provider and the Cloudflare API, and helps in debugging issues. Without it, maintainers are very limited in what they can do, and the missing information may hamper diagnosis efforts.
This issue has been marked with `triage/needs-information` and is unlikely to receive maintainer attention until the log file is provided, making this a complete bug report.
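For reference, the log can be captured with Terraform's standard logging environment variables; a minimal example (run against whichever command exhibits the problem):
TF_LOG=DEBUG TF_LOG_PATH=./tf-debug.log terraform plan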
`value` has not been removed from the schema; only a deprecation notice is being issued at the moment.
We do not see a deprecation notice in our plan. Instead, we see hundreds of our Cloudflare records trying to re-add the `value` argument. This doesn't seem to be API related, since downgrading our Cloudflare provider shows `No changes` as expected.
This is likely a provider bug, and it gives us far less confidence in keeping a `~> 4` provider pin. For now we have downgraded to `=4.38.0` until we're ready to upgrade.
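For anyone else in the same position, the pin is just an exact version constraint in `required_providers`; a sketch (the surrounding block matches whatever you already have):
terraform {
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "=4.38.0" # exact pin; avoids picking up 4.39.0+
    }
  }
}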
I know open source can be difficult, grueling, and thankless. While I appreciate all of your work (thank you!) and that of others who contribute to this repo, closing this issue with a terse response and without a conversation is, at the very least, not the most helpful approach to community engagement. Please at least acknowledge the issue, and if this is not a provider bug, perhaps offer a path forward for me and others who may be in the same predicament.
the issue is closed as there isn't anything actionable for maintainers. it doesn't stop discussion or questions if there are still ones outstanding. if an issue is discovered, the issue can always be re-opened and actions assigned.
as for your specific issue, the reproduction case is based on HCL dynamics, which we call out in the issue template to avoid, as that's not something we are able to reproduce/debug due to it potentially having its own logic bugs. so i would start with a reproduction case that isn't using the module and go from there.
worth noting as well, you haven't provided the required debug logs, which really hampers what anyone can do to help you here. when issues are raised lacking context, it's the equivalent of looking for a light switch in someone else's house in the pitch black of night. any discovery is based on assumptions and best guesses, which is not a good use of anyone's time.
fwiw, we have 4.41.0 running internally with `value` in the DNS record resource (and showing the deprecation) without issue, so there is something else to factor in here.
If you close the issue, most people will take that as a sign to no longer continue the discussion, even if technically we can still have one because the issue is not locked. Closing an issue is akin to someone coming to your house to let you know that part of your home has a hole in it, and you saying "okay, but I need more information" and then immediately closing the door, with your ear to the door to continue the chat. :)
Technically I can still yell from outside so you can hear me, but most people would probably just walk away. That's why I referenced community engagement. As trivial as it is, if you want to continue to discuss the issue, it's best to keep the issue open or, at the very least, close it and provide next steps unprompted.
I get your point, so to continue the discussion, here is a far smaller HCL configuration, without a module, that reproduces the issue, along with debug logs and the necessary redactions.
Please let me know if more information is needed.
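For context, the record under test looks roughly like this (reconstructed from the plan output below; the comment is redacted and the zone ID is the one shown in the plan):
resource "cloudflare_record" "default" {
  zone_id = "0da42c8d2132a9ddaf714f9e7c920711"
  name    = "meetup"
  type    = "CNAME"
  value   = "example.com"
  proxied = true
  comment = "snip"
  tags    = ["terraform"]
}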
if you've taken a closed issue that way, i'm sorry, but that's your perception. this project, along with many others like Homebrew, doesn't operate that way.
as for your reproduction, are you only seeing the issue when importing first? or is this only an issue when creating the records directly (actually running apply, not plan)?
I see this value/content drift issue for our existing root directories without importing a record. The reproduction case required the import to make it small enough to share.
Thank you for looking into this.
i'm afraid i'm unable to reproduce this one (without the import, which i'm still unsure why it's needed).
using your configuration and the following steps:
$ terraform init -upgrade
Initializing the backend...
Initializing provider plugins...
- Finding cloudflare/cloudflare versions matching "4.38.0"...
- Installing cloudflare/cloudflare v4.38.0...
- Installed cloudflare/cloudflare v4.38.0 (self-signed, key ID C76001609EE3B136)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
$ terraform apply -auto-approve
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# cloudflare_record.default will be created
+ resource "cloudflare_record" "default" {
+ allow_overwrite = false
+ comment = "snip"
+ created_on = (known after apply)
+ hostname = (known after apply)
+ id = (known after apply)
+ metadata = (known after apply)
+ modified_on = (known after apply)
+ name = "meetup"
+ proxiable = (known after apply)
+ proxied = true
+ tags = [
+ "terraform",
]
+ ttl = (known after apply)
+ type = "CNAME"
+ value = "example.com"
+ zone_id = "0da42c8d2132a9ddaf714f9e7c920711"
}
Plan: 1 to add, 0 to change, 0 to destroy.
cloudflare_record.default: Creating...
cloudflare_record.default: Creation complete after 0s [id=a5d305f65b3c1f3e72d138b9c109fbda]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
$ terraform apply -auto-approve
cloudflare_record.default: Refreshing state... [id=a5d305f65b3c1f3e72d138b9c109fbda]
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
$ sd -s 'version = "4.38.0"' 'version = "4.41.0"' terraform.tf
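(`sd` is a find-and-replace CLI; with GNU `sed` the equivalent edit would be roughly:)
sed -i 's/version = "4.38.0"/version = "4.41.0"/' terraform.tf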
$ terraform init -upgrade
Initializing the backend...
Initializing provider plugins...
- Finding cloudflare/cloudflare versions matching "4.41.0"...
- Installing cloudflare/cloudflare v4.41.0...
- Installed cloudflare/cloudflare v4.41.0 (self-signed, key ID C76001609EE3B136)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
$ terraform apply -auto-approve
cloudflare_record.default: Refreshing state... [id=a5d305f65b3c1f3e72d138b9c109fbda]
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
╷
│ Warning: Argument is deprecated
│
│ with cloudflare_record.default,
│ on terraform.tf line 601, in resource "cloudflare_record" "default":
│ 601: value = "example.com"
│
│ `value` is deprecated in favour of `content` and will be removed in the next major release.
│
│ (and one more similar warning elsewhere)
╵
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
what i would recommend is, instead of `terraform plan`, run `terraform apply`, which may correct any drift in the state file. it's a noop given `value` and `content` point to the same API value anyway. 4.39.0 did have a bug, and if you've attempted to use it in the past, perhaps that is playing into your repro here.
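if you're wary of `apply` making real changes while reconciling, a refresh-only apply (available since terraform 0.15.4) should also rewrite the state without touching real resources; for example:
terraform apply -refresh-only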
Could the documentation for the `cloudflare_record` resource be updated, and could it also include some examples, please?
It says: [snipped] and [snipped] (I've snipped the output of the entire `data` block options).
It isn't obvious what is used/needed in the `data` block.
It used to be:
data {
  algorithm   = ""
  altitude    = ""
  digest_type = ""
  key_tag     = ""
  ...
}
But how does it work now?
It says we can now only have the following inside the `data` block:
- `data` (doesn't mention this in the nested list; is it a map itself inside the `data` block?)
- `content` (string)
- `value` (string)
The documentation doesn't make sense now.
I'm affected by this issue too. Here's how to reproduce it (I'm using TF 1.2.9, for whatever reason):
terraform {
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "4.38.0"
    }
  }
}

locals {
  zone_name = "example.com"
}

provider "cloudflare" {}

data "cloudflare_zone" "this" {
  name = local.zone_name
}

resource "cloudflare_record" "this" {
  name     = "mytest.${local.zone_name}"
  priority = 5
  proxied  = false
  ttl      = 1
  type     = "TXT"
  value    = "this-is-a-test"
  zone_id  = data.cloudflare_zone.this.id
}
terraform init
terraform apply -auto-approve
Then bump the provider version to 4.41.0 and run:
terraform init -upgrade
terraform plan
Plan results:
  # cloudflare_record.this will be updated in-place
  ~ resource "cloudflare_record" "this" {
        id    = "6553e8exxxxxxc4f2f02ccfa22fd0"
        name  = "mytest.example.com"
        tags  = []
      + value = "this-is-a-test"
        # (10 unchanged attributes hidden)
    }
Observations:
- `schema_version` increment, additional `content` parameter
- `value` needs to be set
- I can't detect any changes are actually made, so it looks like a "safe" upgrade if you have already accounted for any drift
- The workaround seems to be pinning to an exact version (instead of `~> 4.0`)
So people will need to pin at 4.40, `tf apply`, before unpinning and `tf apply`ing again. Or blindly upgrade and fingers crossed there's no drift or other changes to check for.
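In other words, the two-step upgrade sketched out (versions as per the observations above):
# step 1: pin version = "4.40.0" in required_providers, then reconcile state
terraform init -upgrade
terraform apply

# step 2: unpin (e.g. back to "~> 4.0") and apply again
terraform init -upgrade
terraform apply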
Confirmation
Terraform and Cloudflare provider version
Terraform 1.9.5, provider 4.41.0
Affected resource(s)
cloudflare_record
Terraform configuration files
Link to debug output
N/A
Panic output
N/A
Expected output
No changes
Also, I would have expected this deprecation to arrive in a major version (i.e. 5.x) instead of a minor version (4.39.0).
Actual output
Steps to reproduce
Additional factoids
Our workaround is to pin the provider until we're ready to migrate
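When we do migrate, it should just be a rename per the deprecation warning; a sketch:
resource "cloudflare_record" "this" {
  # ... other arguments unchanged ...
  content = "this-is-a-test" # previously: value = "this-is-a-test"
}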
References
No response