penicaudm opened 2 weeks ago
Thanks for creating this issue @penicaudm!
We have seen this issue reported by our users before. Could you check whether this comment helps you find the root cause? It seems this issue can be caused by several different factors.
@penicaudm could you also verify whether a direct API call from the same host works?
We're running Azure DevOps pipelines on a virtual machine scale set. The VMs that run our pipelines usually live for the day and get recreated at night. I just cleaned up our entire pool of VMs to create fresh ones, and now the pipeline is working OK again (at least on the plan command). So the only thing that really changed is the Terraform installation.
I'm trying to figure out if anything has changed, but it's the same Terraform version, same provider, same tasks, etc.
So we got the error again. I reran the pipeline and it worked; that's pretty hard to troubleshoot.
I suspect the Go issue you reported previously with these topics is at play. So it could be in the fmt package or in Terraform directly. I'm not good enough at Go to provide an educated explanation.
Here is a sample of the formatting issue if needed:

```
provider.terraform-provider-confluent_2.1.0: Error reading Kafka ACLs "cluster/TOPIC#global.it4it.payload-message.payload-store-error-europe.v1#LITERAL#User:pool-xlg5#*#READ#ALLOW"
```

This resulted in the following URL:

```
https://cluster-g0yq0p.westeurope.azure.glb.confluent.cloud:443/kafka/v3/clusters/cluster/acls?host=%!A(MISSING)&operation=READ&pattern_type=LITERAL&permission=ALLOW&principal=User%!A(MISSING)pool-xlg5&resource_name=global.it4it.payload-message.payload-store-error-europe.v1&resource_type=TOPIC
```
So I'm not sure, but perhaps some formatting fixes or tests are needed around this kind of function:
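I don't know what the provider's code actually looks like, but as a sketch of the safer pattern: build the query with `net/url`'s `Values` so each value is escaped exactly once and is never re-interpreted as a format string (`buildACLQuery` and its parameters are hypothetical names here, not the provider's API):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildACLQuery is a hypothetical helper showing the safe pattern:
// net/url performs the percent-encoding exactly once at Encode time.
func buildACLQuery(principal, host, operation string) string {
	q := url.Values{}
	q.Set("principal", principal)
	q.Set("host", host)
	q.Set("operation", operation)
	return q.Encode() // keys are emitted in sorted order
}

func main() {
	// ':' and '*' come out correctly as %3A and %2A.
	fmt.Println(buildACLQuery("User:pool-xlg5", "*", "READ"))
	// host=%2A&operation=READ&principal=User%3Apool-xlg5
}
```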
For the last few days we've been getting this issue on resources such as Kafka ACLs, topics, or identity pools:
I'm wondering if the "User%!A(MISSING)" in the URL is causing an issue with the API, but I can't be certain.
This is only happening for one of our 12 clusters, which makes it even weirder.
So far I tried reverting the provider to 1.83; we got the exact same error.