What happened ?
Our goal was to create an internal Kubernetes cluster on our internal VNET, which has ExpressRoute connectivity. That VNET resides in a separate resource group, so deploying such a custom cluster was not possible with ACS or AKS; we therefore chose ACS-Engine, which lets us bring our own VNET and deploy an internal-facing Kubernetes cluster. A further limitation on our side is that the largest IP block we can assign to a single Kubernetes cluster is a /28, and only the masters, the agents (minions), and the Kubernetes LoadBalancer services need to use our internal blocks.
We deployed a cluster using the ARM template referenced below. We use our internal IP block for vnetCidr, which is used by the master and agent nodes, and a separate address range for clusterSubnet. The cluster was created successfully. Initially the kube-dashboard and DNS pods were failing; that was solved by associating a route table with our internal subnet (a sketch of that step follows). We were then able to deploy apps and access them through an internal Kubernetes LoadBalancer service. However, we are unable to access pods created on the separate clusterSubnet using kubectl.
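For reference, the route-table fix and the internal LoadBalancer service looked roughly like the following sketches; all resource names are placeholders, not our actual ones. Attaching the route table that acs-engine generated to the pre-existing internal subnet:

:~$ az network vnet subnet update \
      --resource-group <vnet-resource-group> \
      --vnet-name <internal-vnet> \
      --name <k8s-subnet> \
      --route-table <route-table-created-by-acs-engine>

And a minimal internal LoadBalancer service, using the Azure cloud provider's internal load balancer annotation:

:~$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-internal   # hypothetical service name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx           # hypothetical pod label
EOF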
We get the following error while accessing pods with kubectl exec; the API server tries to resolve the agent node's hostname through our internal nameserver (10.165.65.8), and the lookup times out:
:~$ kubectl exec -it nginx-31893996-8mxj0 bash
Error from server: error dialing backend: dial tcp: lookup k8s-acusnlpk8s-34362440-0 on 10.165.65.8:53: read udp 10.246.65.55:35783->10.165.65.8:53: i/o timeout
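One way to confirm where resolution breaks is to query each configured nameserver for the agent hostname directly from the master; the commands below are illustrative, and 168.63.129.16 is Azure's platform-provided resolver, which can resolve VM hostnames inside the VNET:

:~$ nslookup k8s-acusnlpk8s-34362440-0 10.165.65.8     # our internal nameserver: times out
:~$ nslookup k8s-acusnlpk8s-34362440-0 168.63.129.16   # Azure-provided DNS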
How to reproduce it ?
I've added some details below which can help someone replicate this problem.
Details :
Ubuntu 16.04.3
ACS-Engine version : v0.12.4
Docker version : 1.12.6
Kubernetes version : v1.7.9
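With those versions, the deployment itself was driven roughly as follows; the apimodel file name and resource group are placeholders (acs-engine generate writes the ARM template and parameters under _output/<dnsPrefix>/):

:~$ acs-engine generate kubernetes-internal.json
:~$ az group deployment create \
      --resource-group <cluster-resource-group> \
      --template-file _output/<dnsPrefix>/azuredeploy.json \
      --parameters _output/<dnsPrefix>/azuredeploy.parameters.json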
System resolver settings :
:~# cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 10.160.35.137
nameserver 10.160.35.136
nameserver 10.165.65.8
search reddog.microsoft.com
Details of all containers :
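The container state was collected with something along these lines (full output not reproduced here):

:~$ kubectl get pods --all-namespaces -o wide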
ARM Template used for deploying :
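The exact template is not reproduced here. As a rough sketch, the acs-engine apimodel that produces this kind of deployment has the following shape; every subscription ID, resource name, and CIDR below is a placeholder, not our real value:

:~$ cat kubernetes-internal.json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "1.7",
      "kubernetesConfig": {
        "clusterSubnet": "10.246.0.0/16"
      }
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "acusnlpk8s",
      "vmSize": "Standard_D2_v2",
      "vnetSubnetId": "/subscriptions/<sub-id>/resourceGroups/<vnet-rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>",
      "firstConsecutiveStaticIP": "<first-free-ip-in-subnet>",
      "vnetCidr": "<internal-/28-block>"
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 2,
        "vmSize": "Standard_D2_v2",
        "vnetSubnetId": "/subscriptions/<sub-id>/resourceGroups/<vnet-rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": { "publicKeys": [ { "keyData": "<ssh-public-key>" } ] }
    },
    "servicePrincipalProfile": {
      "clientId": "<client-id>",
      "secret": "<client-secret>"
    }
  }
}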