Open timbuchinger opened 2 years ago
@timbuchinger I've tried to repro this with following config, but both the apply and plan succeed:
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "test" {
  name     = "acctestRG-220610105421101453"
  location = "WestEurope"
}

resource "azurerm_storage_account" "test" {
  name                     = "acctestacczh3hy"
  resource_group_name      = azurerm_resource_group.test.name
  location                 = azurerm_resource_group.test.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_share" "test" {
  name                 = "testsharezh3hy"
  storage_account_name = azurerm_storage_account.test.name
  quota                = 5
  access_tier          = "TransactionOptimized"
}
Would you please provide the debug log for us to further investigate? Additionally, if you can construct a minimal config (similar as above) that can trigger the issue in your subscription, that would be really helpful!
I'm also facing a similar kind of error on queues:

│ Error: reading queue properties for AzureRM Storage Account "xxxxxxxx": queues.Client#GetServiceProperties: Failure sending request: StatusCode=0 -- Original Error: context deadline exceeded
│
│   with module.xxxxxDataLakeStorage.azurerm_storage_account.xxxxAccounts["xxxxxxxxxxx"],
│   on .terraform/modules/xxxxxDataLakeStorage/main.tf line 48, in resource "azurerm_storage_account" "xxxxAccounts":
│   48: resource "azurerm_storage_account" "xxxxAccounts" {
Private endpoints: 7 (only the queue is causing the issue, and the private endpoint subnet is already allowed on the selected network). Please let me know if there is any solution for this (azurerm version: 2.99.0).
I've encountered this. I have an existing set of Azure resources created by Terraform, including AKS, Application Insights, and a helm_release chart "ingress_nginx". I am using Terraform version 1.3.7.
I have decided to maintain the Helm chart outside of Terraform, so I have removed the helm provider and the helm_release resource block from my Terraform file. At the next step, running "terraform plan -refresh-only", it fails with this message:
╷
│ Error: Kubernetes cluster unreachable: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
│
│
╵
╷
│ Error: making Read request on AzureRM Application Insights Billing Feature 'ai-aksrefapp-dev': insights.ComponentCurrentBillingFeaturesClient#Get: Failure sending request: StatusCode=0 -- Original Error: context deadline exceeded
│
│ with azurerm_application_insights.ai,
│ on akscluster.tf line 229, in resource "azurerm_application_insights" "ai":
│ 229: resource "azurerm_application_insights" "ai" {
│
╵
##[error]Error: Terraform Plan failed with exit code: 1
The Application Insights resource block looks like this:
resource "azurerm_application_insights" "ai" {
  name                = "ai-aksrefapp-dev"
  location            = azurerm_resource_group.apprg.location
  resource_group_name = azurerm_resource_group.apprg.name
  workspace_id        = azurerm_log_analytics_workspace.law.id
  application_type    = "web"
}
The providers look like this:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.40.0"
    }
  }

  backend "azurerm" {
    storage_account_name = "__terraformstorageaccount__"
    container_name       = "__terraformstoragecontainer__"
    key                  = "terraform.tfstate"
    access_key           = "__storagekey__"
  }
}

provider "azurerm" {
  features {}
}
I tried running it again, and this time it did not output the "StatusCode=0" error; it gave only the "invalid configuration" error. Weird.
But I tried something: I uncommented the Helm provider block while leaving the helm_release block commented out. Then it completed successfully.
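A minimal sketch of that workaround, assuming the provider is wired to the AKS kube_config (the resource names and authentication wiring are assumptions, not taken from this thread):

```hcl
# Keep the helm provider block so it can initialize during state refresh,
# even though no helm_release is managed by Terraform anymore.
# (Hypothetical wiring: "azurerm_kubernetes_cluster.aks" is an assumed name.)
provider "helm" {
  kubernetes {
    host                   = azurerm_kubernetes_cluster.aks.kube_config[0].host
    client_certificate     = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].client_certificate)
    client_key             = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].client_key)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config[0].cluster_ca_certificate)
  }
}

# resource "helm_release" "ingress_nginx" {
#   ...left commented out; the chart is now managed outside Terraform.
# }
```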
Related to #20257
Currently facing this issue as well.
I was able to create a storage account with 5 file shares after fixing a network issue (a minor typo in the privatelink module). But now when I rerun Terraform with an edit, a new file share, or even a destroy, I get:
module.generic_service_storage.azurerm_private_endpoint.generic_private_endpoint_file[0]: Refreshing state... [id=/subscriptions/###/resourceGroups/###-fileshares-rg-01/providers/Microsoft.Network/privateEndpoints/##sa02-file-endpoint]
After about 6 minutes stuck in that state, I get the following error:
Error: shares.Client#GetProperties: Failure sending request: StatusCode=0 -- Original Error: context deadline exceeded
We've tried completely removing ALL private-network-related features on the storage account and still get the same error.
Hi any resolution on this?
Context deadline issues with private endpoint basically make terraform unusable after the first deployment of services, all subsequent plan commands fail afterwards
I switched to the microsoft/azapi provider as an alternative to azurerm, and it works while private endpoints exist. The code is not as elegant as in azurerm because of the jsonencode plumbing, but it works for now until we get a fix for this issue.
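For illustration, the azapi approach for a file share might look like the sketch below. This is an assumed fragment, not the poster's actual code: the resource names, the parent storage account reference, and the API version are all hypothetical. azapi talks to the ARM management plane, which is why it avoids the data-plane GetProperties call that times out behind private endpoints.

```hcl
# Hypothetical azapi equivalent of an azurerm_storage_share
# ("azurerm_storage_account.example" and the API version are assumptions).
resource "azapi_resource" "file_share" {
  type      = "Microsoft.Storage/storageAccounts/fileServices/shares@2022-09-01"
  name      = "testshare"
  parent_id = "${azurerm_storage_account.example.id}/fileServices/default"

  body = jsonencode({
    properties = {
      shareQuota = 5
      accessTier = "TransactionOptimized"
    }
  })
}
```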
I'm also facing a similar issue. Can anyone provide a solution for this?
Same here - trying to create a storage account and running into this problem. It only started today.
Is there an existing issue for this?
Community Note
Terraform Version
1.2.6
AzureRM Provider Version
3.16.0
Affected Resource(s)/Data Source(s)
azurerm_storage_share
Terraform Configuration Files
Debug Output/Panic Output
Expected Behaviour
The Terraform state refresh should be successful.
Actual Behaviour
The state refresh times out after 5 minutes with the message
error: shares.Client#GetProperties: Failure sending request: StatusCode=0 -- Original Error: context deadline exceeded
Running terraform plan -refresh=false
runs successfully. I am having this issue with multiple subscriptions that, up until recently, ran without issue. These are older file shares that are just now failing to refresh.
Steps to Reproduce
terraform plan
or terraform apply
Important Factoids
No response
References
#10015 has the same error, but it is related to private endpoints. We are not using private endpoints on the storage accounts.
#2977 is similar, but it is related to storage accounts with network rules. This has been reproduced without any network rules on the storage account.