Closed: smartpcr closed this issue 3 years ago.
I also got the same error in the EAST US region with Kubernetes version 1.17.7. I then ran it again with 1.16.10 and it worked.
Error: waiting for creation of Managed Kubernetes Cluster "AKS-CLUSTEREASTUS" (Resource Group "RG-AKS-CLUSTEREASTUS"): Code="OverlaymgrReconcileError" Message="We are unable to serve this request due to an internal error, Correlation ID: <REDACTED>, Operation ID: <REDACTED>, Timestamp: 2020-07-10T10:10:11Z."
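For reference, a sketch of the version pin that worked for us, assuming kubernetes_version is passed through a variable as in the repro config at the bottom of this issue:

```hcl
# terraform.tfvars (sketch; the variable name matches the module input below)
# 1.17.7 triggered OverlaymgrReconcileError for us; 1.16.10 provisioned cleanly.
kubernetes_version = "1.16.10"
```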
Same error here for 1.17.7.
Action required from @Azure/aks-pm
This is an internal error code; I'd recommend opening a support ticket with the unredacted operation/correlation IDs so support can see the full error code in the backend.
@smartpcr and @ohorvath: did you manage to open a support case with us?
Case is being worked on with Microsoft Support; adding the stale label for automatic closure if no other reports are added.
Apologies if I'm not supposed to chime in at this point, but this is happening to me too in the Canada Central region.
Terraform v0.13.4
Happening again for us too. Today and yesterday at least 10 clusters failed with this error message. No Terraform involved, mostly in the centralus region.
Case is being worked on with Microsoft Support; adding the stale label for automatic closure if no other reports are added.
This issue will now be closed because it hasn't had any activity for 15 days after being marked stale. smartpcr, feel free to comment again within the next 7 days to reopen, or open a new issue after that time if you still have a question/issue or suggestion.
What happened: I was trying to provision an AKS cluster using Terraform, and the cluster was created successfully. I then deleted the cluster and tried to create it again, and the following error was returned:
What you expected to happen: the cluster should be created without error.
How to reproduce it (as minimally and precisely as possible):
module "provider" { source = "github.com/smartpcr/bedrock/cluster/azure/provider" }
data "azurerm_client_config" "current" {}
module "aks-gitops" { source = "github.com/smartpcr/bedrock/cluster/azure/aks-gitops"
log analytics
log_analytics_resource_group_name = var.log_analytics_resource_group_name log_analytics_resource_group_location = var.log_analytics_resource_group_location log_analytics_name = var.log_analytics_name
aks cluster
subscription_id = var.aks_subscription_id ssh_public_key = var.ssh_public_key aks_resource_group_location = var.aks_resource_group_location aks_resource_group_name = var.aks_resource_group_name service_principal_id = var.service_principal_id service_principal_secret = var.service_principal_secret server_app_id = var.server_app_id server_app_secret = var.server_app_secret client_app_id = var.client_app_id tenant_id = var.tenant_id agent_vm_count = var.agent_vm_count agent_vm_size = var.agent_vm_size cluster_name = var.cluster_name kubernetes_version = var.kubernetes_version dns_prefix = var.dns_prefix service_cidr = var.service_cidr dns_ip = var.dns_ip docker_cidr = var.docker_cidr oms_agent_enabled = var.oms_agent_enabled dashboard_cluster_role = var.dashboard_cluster_role
dev-space
enable_dev_spaces = var.enable_dev_spaces dev_space_name = var.dev_space_name
aks role assignment
aks_owners = var.aks_owners aks_contributors = var.aks_contributors aks_readers = var.aks_readers aks_owner_groups = var.aks_owner_groups aks_contributor_groups = var.aks_contributor_groups aks_reader_groups = var.aks_reader_groups
flux
enable_flux = var.enable_flux flux_recreate = var.flux_recreate kubeconfig_recreate = var.kubeconfig_recreate gc_enabled = var.gc_enabled acr_enabled = var.acr_enabled gitops_ssh_url = var.gitops_ssh_url gitops_ssh_key = var.gitops_ssh_key gitops_path = var.gitops_path gitops_poll_interval = var.gitops_poll_interval gitops_url_branch = var.gitops_url_branch create_helm_operator = var.create_helm_operator create_helm_operator_crds = var.create_helm_operator_crds git_label = var.git_label }
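To isolate whether the bedrock module is a factor, a minimal sketch using the bare azurerm_kubernetes_cluster resource may help (names, location, and VM size below are hypothetical; azurerm provider 2.x syntax). Applying, destroying, and re-applying it exercises the same create/delete/create cycle without the module:

```hcl
# Minimal AKS cluster, no wrapper module. All names here are illustrative.
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "repro" {
  name     = "RG-AKS-REPRO" # hypothetical resource group name
  location = "East US"
}

resource "azurerm_kubernetes_cluster" "repro" {
  name                = "AKS-REPRO" # hypothetical cluster name
  location            = azurerm_resource_group.repro.location
  resource_group_name = azurerm_resource_group.repro.name
  dns_prefix          = "aksrepro"
  kubernetes_version  = "1.17.7" # the version the error was reported against

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}
```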
Anything else we need to know?:
Environment:
Kubernetes version (use `kubectl version`): 1.16.9