Azure / acs-engine

WE HAVE MOVED: Please join us at Azure/aks-engine!
https://github.com/Azure/aks-engine
MIT License

Defined PodCIDR is not respected on all agent nodes #3326

Closed gustajz closed 5 years ago

gustajz commented 6 years ago

Is this a request for help?: YES

Is this an ISSUE or FEATURE REQUEST? (choose one): ISSUE

What version of acs-engine?: v0.18.8

Orchestrator and version (e.g. Kubernetes, DC/OS, Swarm) Kubernetes v1.10.3

What happened: The PodCIDR defined in the template is not respected on some agent nodes.

JSON template

"orchestratorProfile": {
  "orchestratorType": "Kubernetes",
  "orchestratorVersion": "1.10.3",      
  "kubernetesConfig": {
    "networkPolicy": "cilium",
    "dockerEngineVersion": "17.05.*",
    "dockerBridgeSubnet": "172.25.56.1/24",
    "serviceCidr": "172.25.58.0/23",
    "dnsServiceIP": "172.25.58.10",        
    "clusterSubnet": "172.25.60.0/22",
    "etcdDiskSizeGB": "64",
    "kubeletConfig": {
      "--max-pods": "62",
      "--non-masquerade-cidr": "172.25.56.0/21"
    },
    "controllerManagerConfig": {
      "--node-cidr-mask-size": "26"
    }        
  }
}
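As a sanity check on the configuration, the CIDR arithmetic implied by the template works out: a /22 clusterSubnet split into /26 node CIDRs yields 16 per-node blocks, each with exactly 62 usable addresses, matching --max-pods. A small sketch using Python's standard ipaddress module (values copied from the template above):

```python
import ipaddress

# Values from the apimodel template above.
cluster_subnet = ipaddress.ip_network("172.25.60.0/22")
node_mask = 26   # --node-cidr-mask-size
max_pods = 62    # --max-pods

# Per-node pod CIDRs the controller-manager can carve out of the cluster subnet.
node_cidrs = list(cluster_subnet.subnets(new_prefix=node_mask))
print(len(node_cidrs))   # -> 16 blocks of /26 in a /22

# Usable addresses in one /26 (64 minus network and broadcast addresses).
usable = node_cidrs[0].num_addresses - 2
print(usable)            # -> 62, matching --max-pods
```

So the allocation plan itself is consistent; the problem is that some nodes hand out pod IPs from outside their assigned block.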
$ kubectl describe node k8s-agentpool-1234567-2
Name:               k8s-agentpool-1234567-2
Roles:              agent
Labels:             agentpool=agentpool
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=Standard_DS3_v2
                    beta.kubernetes.io/os=linux
                    failure-domain.beta.kubernetes.io/region=eastus2
                    failure-domain.beta.kubernetes.io/zone=0
                    kubernetes.azure.com/cluster=RG-XXXXXXXX
                    kubernetes.io/hostname=k8s-agentpool-1234567-2
                    kubernetes.io/role=agent
                    storageprofile=managed
                    storagetier=Premium_LRS
Annotations:        io.cilium.network.ipv4-health-ip=172.25.60.6
                    io.cilium.network.ipv4-pod-cidr=172.25.60.0/26
                    io.cilium.network.ipv6-health-ip=f00d::ac19:3c00:0:74ca
                    io.cilium.network.ipv6-pod-cidr=f00d::ac19:3c00:0:0/96
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
...
PodCIDR:                     172.25.60.0/26
...
$ kubectl get po -o wide --all-namespaces | grep k8s-agentpool-1234567-2
default           lp-nginx-ingress-controller-d5865f5f4-m9b5h         1/1       Running             0          18h       10.6.191.0      k8s-agentpool-1234567-2
default           lp-nginx-ingress-default-backend-67bbfb4d77-cgscf   1/1       Running             0          18h       10.6.114.197    k8s-agentpool-1234567-2
kube-system       cilium-bgq7n                                        1/1       Running             1          1d        172.16.25.6     k8s-agentpool-1234567-2
kube-system       kube-proxy-852wc                                    1/1       Running             1          1d        172.16.25.6     k8s-agentpool-1234567-2
kube-system       kubernetes-dashboard-65c8bbc84b-qhgwc               1/1       Running             1          1d        10.6.220.6      k8s-agentpool-1234567-2
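The mismatch is visible by testing each pod IP for membership in the node's PodCIDR: 10.6.191.0, 10.6.114.197, and 10.6.220.6 all fall outside 172.25.60.0/26. A minimal membership check with Python's ipaddress module (IPs copied from the output above):

```python
import ipaddress

# PodCIDR assigned to k8s-agentpool-1234567-2, from `kubectl describe node`.
pod_cidr = ipaddress.ip_network("172.25.60.0/26")

# Pod IPs observed on that node in the `kubectl get po -o wide` output.
pod_ips = ["10.6.191.0", "10.6.114.197", "10.6.220.6"]

for ip in pod_ips:
    inside = ipaddress.ip_address(ip) in pod_cidr
    print(ip, "OK" if inside else "OUTSIDE PodCIDR")
# All three print OUTSIDE PodCIDR: the pods apparently received addresses
# from some default allocation range rather than the node's assigned CIDR.
```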

Other nodes work correctly, see below.

$ kubectl get po -o wide --all-namespaces | grep k8s-agentpool-1234567-3
default           lp-nginx-ingress-controller-d5865f5f4-2b89q         1/1       Running             0          18h       172.25.61.126   k8s-agentpool-1234567-3
kube-system       cilium-fq8wl                                        1/1       Running             1          1d        172.16.25.7     k8s-agentpool-1234567-3
kube-system       kube-dns-v20-59b4f7dc55-94z9b                       3/3       Running             3          1d        172.25.61.96    k8s-agentpool-1234567-3
kube-system       kube-proxy-jwk6k                                    1/1       Running             1          1d        172.16.25.7     k8s-agentpool-1234567-3

What you expected to happen: Pods receive IPs from the defined CIDR.

How to reproduce it (as minimally and precisely as possible): Create a cluster with 3 masters and 4 nodes on a custom VNET, overriding all default network addresses.

Anything else we need to know: We have a VPN between Azure and the company network, whose address space is 10.0.0.0/8, forcing us to change all of Kubernetes' default network ranges.

stale[bot] commented 5 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contribution. Note that acs-engine is deprecated; see https://github.com/Azure/aks-engine instead.