cilium / cilium-cli

CLI to install, manage & troubleshoot Kubernetes clusters running Cilium
https://cilium.io
Apache License 2.0

Cilium service mesh installation fails "context deadline exceeded" on AKS #664

Closed: damian-natzka closed this issue 1 month ago

damian-natzka commented 2 years ago

Bug report

Cilium installation fails inside the cluster.

Command issued:

cilium install --version -service-mesh:v1.11.0-beta.1 --config enable-envoy-config=true --kube-proxy-replacement=probe
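
When the install only reports "context deadline exceeded", the cilium-cli itself can usually narrow down which component is stuck and collect diagnostics; a minimal follow-up, assuming the default kube-system namespace:

# Show per-component status and wait until Cilium is ready (or time out with details)
cilium status --wait

# Collect a diagnostic archive that can be attached to the issue
cilium sysdump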

General Information

resource "azurerm_kubernetes_cluster" "aks" {

  dns_prefix          = "${var.clientName}${var.clusterName}"
  kubernetes_version  = var.kubernetes_version
  location            = var.region
  name                = "${var.clientName}-${var.clusterName}-AKS"
  node_resource_group = "${var.clientName}-${var.clusterName}-aux-rg"
  resource_group_name = data.azurerm_resource_group.resource-group.name

  lifecycle {
    ignore_changes = [
      linux_profile,
      network_profile,
      service_principal,
      addon_profile,
      role_based_access_control,
      dns_prefix,
      windows_profile
    ]
  }

  network_profile {
    network_plugin    = var.kubernetes_network_plugin
    network_policy    = var.kubernetes_network_policy_plugin
    load_balancer_sku = "standard"
  }

  default_node_pool {
    name                = "default"
    node_count          = var.cluster_initial_size
    min_count           = var.cluster_min_size
    max_count           = var.cluster_max_size
    vm_size             = var.VMSize
    enable_auto_scaling = var.cluster_auto_scaling
    os_disk_size_gb     = var.root_disk_size
    os_disk_type        = "Ephemeral"
    vnet_subnet_id      = azurerm_subnet.k8s.id
    type                = "VirtualMachineScaleSets"
    orchestrator_version = var.kubernetes_version
    node_labels = {
      "type.node.kubernetes.io/worker" = "true",
      "cluster.autoscaler/name" = var.clusterName
    }
  }

  identity {
    type = "SystemAssigned"
  }

  role_based_access_control {
    enabled = true
    azure_active_directory {
      managed                = true
      admin_group_object_ids = ["xxxxxxxxxxxxxxxxxxxx"]
    }
  }

  linux_profile {
    admin_username = data.azurerm_key_vault_secret.aks_linux_profile_admin.value
    ssh_key {
      key_data = data.azurerm_key_vault_secret.aks_ssh_public_key.value
    }
  }
}

Error (screenshot omitted)

Logs from operator

level=info msg="envoy-config synced" existing-envoy-config="[]" subsys=ingress-controller
level=info msg="Synchronized Azure IPAM information" numInstances=3 numSubnets=0 numVirtualNetworks=0 subsys=azure
level=info msg="Synchronized Azure IPAM information" numInstances=3 numSubnets=0 numVirtualNetworks=0 subsys=azure
level=info msg="Synchronized Azure IPAM information" numInstances=3 numSubnets=0 numVirtualNetworks=0 subsys=azure
level=info msg="Synchronized Azure IPAM information" numInstances=3 numSubnets=0 numVirtualNetworks=0 subsys=azure
level=info msg="Synchronized Azure IPAM information" numInstances=3 numSubnets=0 numVirtualNetworks=0 subsys=azure
level=info msg="Received termination signal. Shutting down" subsys=cilium-operator-azure
level=error msg="Failed to release lock: leases.coordination.k8s.io \"cilium-operator-resource-lock\" is forbidden: User \"system:serviceaccount:kube-system:cilium-operator\" cannot update resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-system\"" subsys=klog
level=info msg="Leader election lost" operator-id=aks-default-28525625-vmss000002-jBUQARFQJY subsys=cilium-operator-azure

Logs from daemonset

level=info msg="Exiting due to signal" signal=terminated subsys=daemon
level=info msg="Waiting for all endpoints' go routines to be stopped." subsys=daemon
level=info msg="All endpoints' goroutines stopped." subsys=daemon
level=info msg="Waiting for IPs to become available in CRD-backed allocation pool" available=31 helpMessage="Check if cilium-operator pod is running and does not have any warnings or error messages." name=aks-default-28525625-vmss000002 required=8 subsys=ipam

kubectl describe pod coredns

Warning  FailedCreatePodSandBox  30s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "adc3e0f0e25d76b0f4199580da3a3f6af8ddfc6c1cd7935be597f74ed82e8852": unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
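
This sandbox failure is a downstream symptom: the Cilium CNI plugin on the node cannot reach the agent's API socket at /var/run/cilium/cilium.sock because the agent never finished starting, so every new pod on that node fails to get networking. If the agent container is at least running, its own view can be queried, assuming the default daemonset name:

kubectl -n kube-system exec ds/cilium -- cilium status --brief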

kubectl describe pod cilium-agent

  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m13s                default-scheduler  Successfully assigned kube-system/cilium-kdvpx to aks-nodepool1-32870243-vmss000002
  Normal   Pulling    3m12s                kubelet            Pulling image "quay.io/cilium/cilium-service-mesh:v1.11.0-beta.1"
  Normal   Pulled     3m3s                 kubelet            Successfully pulled image "quay.io/cilium/cilium-service-mesh:v1.11.0-beta.1" in 8.963361122s
  Normal   Created    2m58s                kubelet            Created container ebpf-mount
  Normal   Started    2m58s                kubelet            Started container ebpf-mount
  Normal   Pulled     2m57s                kubelet            Container image "quay.io/cilium/cilium-service-mesh:v1.11.0-beta.1" already present on machine
  Normal   Started    2m57s                kubelet            Started container clean-cilium-state
  Normal   Created    2m57s                kubelet            Created container clean-cilium-state
  Normal   Pulled     2m56s                kubelet            Container image "quay.io/cilium/cilium-service-mesh:v1.11.0-beta.1" already present on machine
  Normal   Created    2m56s                kubelet            Created container cilium-agent
  Normal   Started    2m56s                kubelet            Started container cilium-agent
  Warning  Unhealthy  13s (x6 over 2m43s)  kubelet            Readiness probe failed: Get "http://127.0.0.1:9876/healthz": dial tcp 127.0.0.1:9876: connect: connection refused
  Warning  Unhealthy  13s (x2 over 43s)    kubelet            Liveness probe failed: Get "http://127.0.0.1:9876/healthz": dial tcp 127.0.0.1:9876: connect: connection refused
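
The probes on 127.0.0.1:9876 hit the agent's health endpoint, which only answers once the agent finishes initialising; given the daemonset log above, it is most likely still blocked waiting for IPAM. The tail of the agent log usually shows the exact step it is stuck on, assuming the default labels and container name:

kubectl -n kube-system logs -l k8s-app=cilium -c cilium-agent --tail=50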

I have tried a clean AKS too:

az aks create --resource-group SANDBOX-Damian --name cilium01 --load-balancer-sku standard --network-plugin azure --vnet-subnet-id "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/SANDBOX-Damian/providers/Microsoft.Network/virtualNetworks/sandbox-damian-vnet/subnets/default" --service-cidr 10.2.0.0/24 --dns-service-ip 10.2.0.10

The behaviour is the same.

damian-natzka commented 2 years ago

I also tried following the complete AKS setup instructions with 2 node pools (https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/#create-the-cluster); the result is the same.

kubectl describe pod coredns

 Warning  FailedCreatePodSandBox  21s  kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "41de25989a3fcc8438e6845db91745953c8b08d7a11ca0c2d7c13202f12bbf44": unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory

After the rollback the nodes end up in a 'NotReady' state (screenshot omitted). Not sure if that's relevant:

with:

Ready            False   Thu, 23 Dec 2021 15:21:11 +0100   Thu, 23 Dec 2021 15:16:10 +0100   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
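
The "cni plugin not initialized" condition means kubelet no longer finds a CNI configuration under /etc/cni/net.d: rolling Cilium back removes its own config file, and the original Azure CNI config is not necessarily restored, which leaves the node NotReady. What is left on a node can be checked with a debug pod, assuming the cluster version supports kubectl debug node (the node name is a placeholder):

kubectl debug node/<node-name> -it --image=busybox -- ls /host/etc/cni/net.d
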
dosmanak commented 1 year ago

Hi. I hit the same issue with v1.11.11. The operator seems happy and restarts coredns, but the daemonset is not ready. It seems the communication with Azure IPAM is not working properly; zero subnets are reported:

level=info msg="Synchronized Azure IPAM information" numInstances=2 numSubnets=0 numVirtualNetworks=0 subsys=azure

I should add that the installation is without the service mesh.
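
Since numSubnets=0 shows up here as well, it may be worth confirming that the identity the operator uses can actually enumerate the node VNet and its subnets, and that the install was pointed at the AKS node resource group; the cilium-cli AKS guide of that era passed it explicitly, roughly like this (the resource group name is a placeholder):

cilium install --azure-resource-group <aks-node-resource-group>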

github-actions[bot] commented 1 month ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

github-actions[bot] commented 1 month ago

This issue has not seen any activity since it was marked stale. Closing.