kubernetes-sigs / cluster-api-provider-vsphere

Apache License 2.0

Cluster creation fails due to API not being accessible remotely #584

Closed MaxRink closed 5 years ago

MaxRink commented 5 years ago

/kind bug

What steps did you take and what happened: Clusterctl fails to bootstrap the management cluster with the following error:

Clusterctl output ```text clusterctl create cluster \ --bootstrap-type kind \ --bootstrap-flags name=management-cluster \ --cluster ./out/management-cluster/cluster.yaml \ --machines ./out/management-cluster/controlplane.yaml \ --provider-components ./out/management-cluster/provider-components.yaml \ --addon-components ./out/management-cluster/addons.yaml \ --kubeconfig-out ./out/management-cluster/kubeconfig -v 6 I0923 14:29:57.606937 13789 createbootstrapcluster.go:27] Preparing bootstrap cluster I0923 14:29:57.606981 13789 kind.go:70] Running: kind [create cluster --name=management-cluster] I0923 14:30:39.855273 13789 kind.go:73] Ran: kind [create cluster --name=management-cluster] Output: Creating cluster "management-cluster" ... • Ensuring node image (kindest/node:v1.15.3) 🖼 ... ✓ Ensuring node image (kindest/node:v1.15.3) 🖼 • Preparing nodes 📦 ... ✓ Preparing nodes 📦 • Creating kubeadm config 📜 ... ✓ Creating kubeadm config 📜 • Starting control-plane 🕹️ ... ✓ Starting control-plane 🕹️ • Installing CNI 🔌 ... ✓ Installing CNI 🔌 • Installing StorageClass 💾 ... ✓ Installing StorageClass 💾 Cluster creation complete. 
You can now use the cluster with: export KUBECONFIG="$(kind get kubeconfig-path --name="management-cluster")" kubectl cluster-info I0923 14:30:39.855702 13789 kind.go:70] Running: kind [get kubeconfig-path --name=management-cluster] I0923 14:30:39.882446 13789 kind.go:73] Ran: kind [get kubeconfig-path --name=management-cluster] Output: /root/.kube/kind-config-management-cluster I0923 14:30:39.883763 13789 loader.go:359] Config loaded from file /tmp/488190914 I0923 14:30:39.893922 13789 round_trippers.go:438] GET https://127.0.0.1:44455/api?timeout=32s 200 OK in 9 milliseconds I0923 14:30:39.894772 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis?timeout=32s 200 OK in 0 milliseconds I0923 14:30:39.900494 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/apiregistration.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.900528 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/node.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.900547 13789 round_trippers.go:438] GET https://127.0.0.1:44455/api/v1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.900558 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/apiregistration.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901018 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/networking.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds I0923 14:30:39.901053 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/policy/v1beta1?timeout=32s 200 OK in 4 milliseconds I0923 14:30:39.901065 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/authorization.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901082 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/apps/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901098 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/events.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901281 
13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/networking.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901337 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/rbac.authorization.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901353 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/authorization.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901432 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/authentication.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901437 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/apiextensions.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901484 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/extensions/v1beta1?timeout=32s 200 OK in 6 milliseconds I0923 14:30:39.901505 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/batch/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901541 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/scheduling.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901545 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901585 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/autoscaling/v1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901599 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/apps/v1beta2?timeout=32s 200 OK in 6 milliseconds I0923 14:30:39.901632 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/admissionregistration.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901648 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/batch/v1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901659 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/authentication.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds 
I0923 14:30:39.901670 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/autoscaling/v2beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901739 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/apps/v1?timeout=32s 200 OK in 6 milliseconds I0923 14:30:39.901745 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/storage.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901752 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/storage.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901811 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/autoscaling/v2beta2?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901819 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/coordination.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901845 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/certificates.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901884 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/coordination.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.901899 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/scheduling.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:39.902567 13789 clusterdeployer.go:83] Applying Cluster API stack to bootstrap cluster I0923 14:30:39.902580 13789 applyclusterapicomponents.go:26] Applying Cluster API Provider Components I0923 14:30:39.902586 13789 clusterclient.go:777] Waiting for kubectl apply... I0923 14:30:40.691472 13789 clusterclient.go:807] Waiting for Cluster resources to be listable... 
I0923 14:30:40.692529 13789 round_trippers.go:438] GET https://127.0.0.1:44455/api?timeout=32s 200 OK in 0 milliseconds I0923 14:30:40.693185 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis?timeout=32s 200 OK in 0 milliseconds I0923 14:30:40.701168 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/infrastructure.cluster.x-k8s.io/v1alpha2?timeout=32s 200 OK in 6 milliseconds I0923 14:30:40.701221 13789 round_trippers.go:438] GET https://127.0.0.1:44455/api/v1?timeout=32s 200 OK in 6 milliseconds I0923 14:30:40.701352 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/apps/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:40.701462 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/autoscaling/v1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:40.701480 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/apps/v1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:40.701565 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/authorization.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:40.701578 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/authorization.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:40.701621 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/apps/v1beta2?timeout=32s 200 OK in 5 milliseconds I0923 14:30:40.701638 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/apiregistration.k8s.io/v1?timeout=32s 200 OK in 6 milliseconds I0923 14:30:40.701689 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/apiregistration.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds I0923 14:30:40.701705 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/extensions/v1beta1?timeout=32s 200 OK in 6 milliseconds I0923 14:30:40.701738 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/rbac.authorization.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:40.701800 13789 round_trippers.go:438] GET 
https://127.0.0.1:44455/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:40.701817 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/networking.k8s.io/v1beta1?timeout=32s 200 OK in 6 milliseconds I0923 14:30:40.701860 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/autoscaling/v2beta1?timeout=32s 200 OK in 6 milliseconds I0923 14:30:40.701468 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/events.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:40.701474 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/authentication.k8s.io/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:40.701899 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/autoscaling/v2beta2?timeout=32s 200 OK in 6 milliseconds I0923 14:30:40.701909 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/authentication.k8s.io/v1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:40.702707 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/admissionregistration.k8s.io/v1beta1?timeout=32s 200 OK in 1 milliseconds I0923 14:30:40.702978 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/storage.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds I0923 14:30:40.703187 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/node.k8s.io/v1beta1?timeout=32s 200 OK in 2 milliseconds I0923 14:30:40.703434 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/bootstrap.cluster.x-k8s.io/v1alpha2?timeout=32s 200 OK in 2 milliseconds I0923 14:30:40.703650 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/networking.k8s.io/v1?timeout=32s 200 OK in 2 milliseconds I0923 14:30:40.703892 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2?timeout=32s 200 OK in 2 milliseconds I0923 14:30:40.704102 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/policy/v1beta1?timeout=32s 200 OK in 3 milliseconds I0923 
14:30:40.704336 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/scheduling.k8s.io/v1?timeout=32s 200 OK in 3 milliseconds I0923 14:30:40.704577 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/coordination.k8s.io/v1?timeout=32s 200 OK in 3 milliseconds I0923 14:30:40.704817 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/storage.k8s.io/v1beta1?timeout=32s 200 OK in 3 milliseconds I0923 14:30:40.705020 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/certificates.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds I0923 14:30:40.705257 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/coordination.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds I0923 14:30:40.705476 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/batch/v1?timeout=32s 200 OK in 4 milliseconds I0923 14:30:40.705713 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/scheduling.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds I0923 14:30:40.705920 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/apiextensions.k8s.io/v1beta1?timeout=32s 200 OK in 4 milliseconds I0923 14:30:40.706232 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/batch/v1beta1?timeout=32s 200 OK in 5 milliseconds I0923 14:30:40.739015 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/clusters 200 OK in 29 milliseconds I0923 14:30:40.740062 13789 clusterdeployer.go:88] Provisioning target cluster via bootstrap cluster I0923 14:30:40.740628 13789 loader.go:359] Config loaded from file /tmp/488190914 I0923 14:30:40.746837 13789 round_trippers.go:438] GET https://127.0.0.1:44455/api/v1/namespaces/default 200 OK in 4 milliseconds I0923 14:30:40.747407 13789 applycluster.go:38] Creating Cluster referenced object "infrastructure.cluster.x-k8s.io/v1alpha2, Kind=VSphereCluster" with name "management-cluster" in namespace "default" I0923 14:30:40.761698 13789 round_trippers.go:438] POST 
https://127.0.0.1:44455/apis/infrastructure.cluster.x-k8s.io/v1alpha2/namespaces/default/vsphereclusters 201 Created in 14 milliseconds I0923 14:30:40.761811 13789 applycluster.go:44] Creating cluster object management-cluster in namespace "default" I0923 14:30:40.765106 13789 round_trippers.go:438] POST https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/clusters 201 Created in 2 milliseconds I0923 14:30:40.765767 13789 clusterdeployer.go:101] Creating control plane machine "management-cluster-controlplane-0" in namespace "default" I0923 14:30:40.766257 13789 loader.go:359] Config loaded from file /tmp/488190914 I0923 14:30:40.770507 13789 round_trippers.go:438] GET https://127.0.0.1:44455/api/v1/namespaces/default 200 OK in 3 milliseconds I0923 14:30:40.770598 13789 applymachines.go:38] Creating Machine referenced object "infrastructure.cluster.x-k8s.io/v1alpha2, Kind=VSphereMachine" with name "management-cluster-controlplane-0" in namespace "default" I0923 14:30:40.782467 13789 round_trippers.go:438] POST https://127.0.0.1:44455/apis/infrastructure.cluster.x-k8s.io/v1alpha2/namespaces/default/vspheremachines 201 Created in 11 milliseconds I0923 14:30:40.782553 13789 applymachines.go:38] Creating Machine referenced object "bootstrap.cluster.x-k8s.io/v1alpha2, Kind=KubeadmConfig" with name "management-cluster-controlplane-0" in namespace "default" I0923 14:30:40.823488 13789 round_trippers.go:438] POST https://127.0.0.1:44455/apis/bootstrap.cluster.x-k8s.io/v1alpha2/namespaces/default/kubeadmconfigs 201 Created in 40 milliseconds I0923 14:30:40.823689 13789 applymachines.go:44] Creating machines in namespace "default" I0923 14:30:40.876933 13789 round_trippers.go:438] POST https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines 201 Created in 52 milliseconds I0923 14:30:40.878029 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... 
I0923 14:30:40.880887 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:30:50.881119 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:30:50.882759 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 1 milliseconds I0923 14:31:00.881104 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:31:00.883396 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:31:10.881137 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:31:10.883097 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 1 milliseconds I0923 14:31:20.881104 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:31:20.883022 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 1 milliseconds I0923 14:31:30.881124 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:31:30.882873 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 1 milliseconds I0923 14:31:40.881104 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... 
I0923 14:31:40.883254 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:31:50.881109 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:31:50.883442 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:32:00.881102 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:32:00.884128 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:32:10.881137 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:32:10.883307 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:32:20.881114 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:32:20.883590 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:32:30.881118 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:32:30.883847 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:32:40.881340 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... 
I0923 14:32:40.884559 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 3 milliseconds I0923 14:32:50.881114 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:32:50.889281 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 8 milliseconds I0923 14:33:00.881109 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:33:00.883858 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:33:10.881110 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:33:10.883730 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:33:20.881109 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:33:20.884568 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 3 milliseconds I0923 14:33:30.881112 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:33:30.883947 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:33:40.881109 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... 
I0923 14:33:40.883986 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:33:50.881105 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:33:50.883824 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:34:00.881113 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:34:00.883761 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:34:10.881116 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:34:10.883614 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:34:20.881112 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:34:20.883748 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:34:30.881112 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:34:30.883625 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:34:40.881115 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... 
I0923 14:34:40.883877 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:34:50.881104 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:34:50.883583 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:35:00.881112 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:35:00.883734 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:35:10.881118 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:35:10.883693 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:35:20.881113 13789 clusterclient.go:828] Waiting for Machine management-cluster-controlplane-0 to become ready... I0923 14:35:20.883751 13789 round_trippers.go:438] GET https://127.0.0.1:44455/apis/cluster.x-k8s.io/v1alpha2/namespaces/default/machines/management-cluster-controlplane-0 200 OK in 2 milliseconds I0923 14:35:20.884355 13789 clusterdeployer.go:110] Creating target cluster I0923 14:35:20.884367 13789 getkubeconfig.go:40] Getting target cluster kubeconfig. I0923 14:35:20.884404 13789 getkubeconfig.go:65] Waiting for kubeconfig from Secret "management-cluster-kubeconfig" in namespace "default" to become available... 
I0923 14:35:20.885949 13789 round_trippers.go:438] GET https://127.0.0.1:44455/api/v1/namespaces/default/secrets/management-cluster-kubeconfig 200 OK in 1 milliseconds I0923 14:35:20.887713 13789 loader.go:359] Config loaded from file /tmp/851628089 I0923 14:35:52.888301 13789 round_trippers.go:438] GET https://10.92.159.240:6443/api?timeout=32s in 32000 milliseconds I0923 14:35:52.888377 13789 clusterclient.go:237] Waiting to acquire client... I0923 14:35:52.888541 13789 createbootstrapcluster.go:36] Cleaning up bootstrap cluster. I0923 14:35:52.888558 13789 kind.go:70] Running: kind [delete cluster --name=management-cluster] I0923 14:35:53.767089 13789 kind.go:73] Ran: kind [delete cluster --name=management-cluster] Output: Deleting cluster "management-cluster" ... $KUBECONFIG is still set to use /root/.kube/kind-config-management-cluster even though that file has been deleted, remember to unset it F0923 14:35:53.767133 13789 create_cluster.go:59] unable to create target cluster client: failed to acquire new client: Get https://10.92.159.240:6443/api?timeout=32s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) ```

The created node is still accessible via SSH after clusterctl fails. If I try to access the API remotely, I get this error:

kubectl get pods --kubeconfig=/home/a92615428/out/management-cluster/kubeconfig
Unable to connect to the server: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
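To narrow down a timeout like this, a raw TCP probe against the API endpoint can show whether the problem is at the network level or inside the apiserver/TLS layer. A minimal sketch (the `probe` helper is mine, not part of any tool here; `10.92.159.240:6443` is the endpoint from the logs above):

```shell
# Probe a TCP port with no TLS/HTTP on top. "closed/filtered" here means
# the timeout is a plain connectivity problem, not a Kubernetes one.
probe() {
  local host=$1 port=$2
  if timeout 5 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed/filtered"
  fi
}

probe 10.92.159.240 6443
```

If the port reports `open` but kubectl still times out, the issue is more likely certificates or the apiserver itself; if it reports `closed/filtered`, it is routing, binding, or filtering.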

If I SSH onto the node and use the kubeconfig located at /etc/kubernetes/admin.conf, I can access the API locally and also apply a CNI.

kubectl get nodes
NAME                                STATUS   ROLES    AGE   VERSION
management-cluster-controlplane-0   Ready    master   79m   v1.15.3

The node is still not accessible from the outside.

The node and the server used for bootstrapping are in the same L2 network, and no firewall on the bootstrapping server blocks traffic to the node.

What did you expect to happen: Clusterctl succeeds and the API is publicly accessible.
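Since the API answers locally but not over the network, one hypothesis worth ruling out on the node is that nothing is listening on 6443 beyond loopback. A small sketch that classifies `ss -tln` output (the helper name and messages are illustrative, not from clusterctl or kubeadm):

```shell
# Read `ss -tln` output on stdin and report whether anything listens on
# port 6443 on a non-loopback address.
apiserver_reachable() {
  if grep -E ':6443[[:space:]]' | grep -Evq '127\.0\.0\.1|\[::1\]'; then
    echo "listening on a non-loopback address"
  else
    echo "only loopback (or not listening); unreachable remotely"
  fi
}
```

Usage on the node: `ss -tln | apiserver_reachable`. If it reports loopback-only, the apiserver bind/advertise address is the problem rather than the network path.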

Further Logs ```text cat /var/log/cloud-init.log 2019-09-23 13:43:17,441 - util.py[DEBUG]: Cloud-init v. 18.3-52-gc5f78957-1~bddeb~18.04.1 running 'init-local' at Mon, 23 Sep 2019 13:43:17 +0000. Up 7.43 seconds. 2019-09-23 13:43:17,441 - main.py[DEBUG]: No kernel command line url found. 2019-09-23 13:43:17,441 - main.py[DEBUG]: Closing stdin. 2019-09-23 13:43:17,443 - util.py[DEBUG]: Writing to /var/log/cloud-init.log - ab: [644] 0 bytes 2019-09-23 13:43:17,444 - util.py[DEBUG]: Changing the ownership of /var/log/cloud-init.log to 102:4 2019-09-23 13:43:17,444 - util.py[DEBUG]: Attempting to remove /var/lib/cloud/instance/boot-finished 2019-09-23 13:43:17,444 - util.py[DEBUG]: Attempting to remove /var/lib/cloud/data/no-net 2019-09-23 13:43:17,445 - handlers.py[DEBUG]: start: init-local/check-cache: attempting to read from cache [check] 2019-09-23 13:43:17,445 - util.py[DEBUG]: Reading from /var/lib/cloud/instance/obj.pkl (quiet=False) 2019-09-23 13:43:17,445 - stages.py[DEBUG]: no cache found 2019-09-23 13:43:17,445 - handlers.py[DEBUG]: finish: init-local/check-cache: SUCCESS: no cache found 2019-09-23 13:43:17,445 - util.py[DEBUG]: Attempting to remove /var/lib/cloud/instance 2019-09-23 13:43:17,455 - stages.py[DEBUG]: Using distro class 2019-09-23 13:43:17,455 - __init__.py[DEBUG]: Looking for data source in: ['VMwareGuestInfo', 'None'], via packages ['', 'cloudinit.sources'] that matches dependencies ['FILESYSTEM'] 2019-09-23 13:43:17,477 - __init__.py[DEBUG]: Searching for local data source in: ['DataSourceVMwareGuestInfo'] 2019-09-23 13:43:17,477 - handlers.py[DEBUG]: start: init-local/search-VMwareGuestInfo: searching for local data from DataSourceVMwareGuestInfo 2019-09-23 13:43:17,477 - __init__.py[DEBUG]: Seeing if we can get any data from 2019-09-23 13:43:17,477 - __init__.py[DEBUG]: Update datasource metadata and network config due to events: New instance first boot 2019-09-23 13:43:17,477 - DataSourceVMwareGuestInfo.py[DEBUG]: Getting guestinfo value 
for key metadata
2019-09-23 13:43:17,477 - util.py[DEBUG]: Running command ['/usr/bin/vmtoolsd', '--cmd', 'info-get guestinfo.metadata'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:17,483 - DataSourceVMwareGuestInfo.py[DEBUG]: Getting guestinfo value for key metadata.encoding
2019-09-23 13:43:17,483 - util.py[DEBUG]: Running command ['/usr/bin/vmtoolsd', '--cmd', 'info-get guestinfo.metadata.encoding'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:19,520 - DataSourceVMwareGuestInfo.py[DEBUG]: Getting encoded data for key=guestinfo.metadata, enc=base64
2019-09-23 13:43:19,520 - DataSourceVMwareGuestInfo.py[DEBUG]: Decoding base64 format guestinfo.metadata
2019-09-23 13:43:19,523 - DataSourceVMwareGuestInfo.py[DEBUG]: loaded metadata {'instance-id': 'management-cluster-controlplane-0', 'local-hostname': 'management-cluster-controlplane-0', 'network': {'version': 2, 'ethernets': {'id0': {'match': {'macaddress': '00:50:56:97:70:bb'}, 'wakeonlan': True, 'dhcp4': False, 'dhcp6': False, 'addresses': ['10.92.159.240/22'], 'gateway4': '10.92.156.1', 'nameservers': {'addresses': ['10.145.139.41', '10.145.139.42']}}}}}
2019-09-23 13:43:19,523 - DataSourceVMwareGuestInfo.py[DEBUG]: network data found
2019-09-23 13:43:19,523 - DataSourceVMwareGuestInfo.py[DEBUG]: network data copied to 'config' key
2019-09-23 13:43:19,523 - DataSourceVMwareGuestInfo.py[DEBUG]: network data {'config': {'version': 2, 'ethernets': {'id0': {'match': {'macaddress': '00:50:56:97:70:bb'}, 'wakeonlan': True, 'dhcp4': False, 'dhcp6': False, 'addresses': ['10.92.159.240/22'], 'gateway4': '10.92.156.1', 'nameservers': {'addresses': ['10.145.139.41', '10.145.139.42']}}}}}
2019-09-23 13:43:19,523 - DataSourceVMwareGuestInfo.py[DEBUG]: Getting guestinfo value for key userdata
2019-09-23 13:43:19,523 - util.py[DEBUG]: Running command ['/usr/bin/vmtoolsd', '--cmd', 'info-get guestinfo.userdata'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:19,528 - DataSourceVMwareGuestInfo.py[DEBUG]: Getting guestinfo value for key userdata.encoding
2019-09-23 13:43:19,528 - util.py[DEBUG]: Running command ['/usr/bin/vmtoolsd', '--cmd', 'info-get guestinfo.userdata.encoding'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:21,536 - DataSourceVMwareGuestInfo.py[DEBUG]: Getting encoded data for key=guestinfo.userdata, enc=base64
2019-09-23 13:43:21,536 - DataSourceVMwareGuestInfo.py[DEBUG]: Decoding base64 format guestinfo.userdata
2019-09-23 13:43:21,536 - DataSourceVMwareGuestInfo.py[DEBUG]: Getting guestinfo value for key vendordata
2019-09-23 13:43:21,536 - util.py[DEBUG]: Running command ['/usr/bin/vmtoolsd', '--cmd', 'info-get guestinfo.vendordata'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:21,541 - DataSourceVMwareGuestInfo.py[DEBUG]: No value found for key vendordata
2019-09-23 13:43:21,542 - handlers.py[DEBUG]: finish: init-local/search-VMwareGuestInfo: SUCCESS: found local data from DataSourceVMwareGuestInfo
2019-09-23 13:43:21,542 - stages.py[INFO]: Loaded datasource DataSourceVMwareGuestInfo - DataSourceVMwareGuestInfo
2019-09-23 13:43:21,542 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg (quiet=False)
2019-09-23 13:43:21,542 - util.py[DEBUG]: Read 3180 bytes from /etc/cloud/cloud.cfg
2019-09-23 13:43:21,542 - util.py[DEBUG]: Attempting to load yaml from string of length 3180 with allowed root types (,)
2019-09-23 13:43:21,552 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/99-DataSourceVMwareGuestInfo.cfg (quiet=False)
2019-09-23 13:43:21,553 - util.py[DEBUG]: Read 549 bytes from /etc/cloud/cloud.cfg.d/99-DataSourceVMwareGuestInfo.cfg
2019-09-23 13:43:21,553 - util.py[DEBUG]: Attempting to load yaml from string of length 549 with allowed root types (,)
2019-09-23 13:43:21,553 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/05_logging.cfg (quiet=False)
2019-09-23 13:43:21,554 - util.py[DEBUG]: Read 2057 bytes from /etc/cloud/cloud.cfg.d/05_logging.cfg
2019-09-23 13:43:21,554 - util.py[DEBUG]: Attempting to load yaml from string of length 2057 with allowed root types (,)
2019-09-23 13:43:21,557 - util.py[DEBUG]: Reading from /run/cloud-init/cloud.cfg (quiet=False)
2019-09-23 13:43:21,557 - util.py[DEBUG]: Read 43 bytes from /run/cloud-init/cloud.cfg
2019-09-23 13:43:21,557 - util.py[DEBUG]: Attempting to load yaml from string of length 43 with allowed root types (,)
2019-09-23 13:43:21,557 - util.py[DEBUG]: Attempting to load yaml from string of length 0 with allowed root types (,)
2019-09-23 13:43:21,558 - util.py[DEBUG]: loaded blob returned None, returning default.
2019-09-23 13:43:21,558 - util.py[DEBUG]: Attempting to remove /var/lib/cloud/instance
2019-09-23 13:43:21,558 - util.py[DEBUG]: Creating symbolic link from '/var/lib/cloud/instance' => '/var/lib/cloud/instances/management-cluster-controlplane-0'
2019-09-23 13:43:21,560 - util.py[DEBUG]: Reading from /var/lib/cloud/instances/management-cluster-controlplane-0/datasource (quiet=False)
2019-09-23 13:43:21,560 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/datasource - wb: [644] 53 bytes
2019-09-23 13:43:21,560 - util.py[DEBUG]: Writing to /var/lib/cloud/data/previous-datasource - wb: [644] 53 bytes
2019-09-23 13:43:21,560 - util.py[DEBUG]: Reading from /var/lib/cloud/data/instance-id (quiet=False)
2019-09-23 13:43:21,560 - stages.py[DEBUG]: previous iid found to be NO_PREVIOUS_INSTANCE_ID
2019-09-23 13:43:21,561 - util.py[DEBUG]: Writing to /var/lib/cloud/data/instance-id - wb: [644] 34 bytes
2019-09-23 13:43:21,561 - util.py[DEBUG]: Writing to /run/cloud-init/.instance-id - wb: [644] 34 bytes
2019-09-23 13:43:21,561 - util.py[DEBUG]: Writing to /var/lib/cloud/data/previous-instance-id - wb: [644] 24 bytes
2019-09-23 13:43:21,562 - util.py[DEBUG]: Writing to /var/lib/cloud/instance/obj.pkl - wb: [400] 21365 bytes
2019-09-23 13:43:21,562 - main.py[DEBUG]: [local] init will now be targeting instance id: management-cluster-controlplane-0. new=True
2019-09-23 13:43:21,562 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg (quiet=False)
2019-09-23 13:43:21,562 - util.py[DEBUG]: Read 3180 bytes from /etc/cloud/cloud.cfg
2019-09-23 13:43:21,562 - util.py[DEBUG]: Attempting to load yaml from string of length 3180 with allowed root types (,)
2019-09-23 13:43:21,572 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/99-DataSourceVMwareGuestInfo.cfg (quiet=False)
2019-09-23 13:43:21,572 - util.py[DEBUG]: Read 549 bytes from /etc/cloud/cloud.cfg.d/99-DataSourceVMwareGuestInfo.cfg
2019-09-23 13:43:21,572 - util.py[DEBUG]: Attempting to load yaml from string of length 549 with allowed root types (,)
2019-09-23 13:43:21,573 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/05_logging.cfg (quiet=False)
2019-09-23 13:43:21,573 - util.py[DEBUG]: Read 2057 bytes from /etc/cloud/cloud.cfg.d/05_logging.cfg
2019-09-23 13:43:21,574 - util.py[DEBUG]: Attempting to load yaml from string of length 2057 with allowed root types (,)
2019-09-23 13:43:21,577 - util.py[DEBUG]: Reading from /run/cloud-init/cloud.cfg (quiet=False)
2019-09-23 13:43:21,577 - util.py[DEBUG]: Read 43 bytes from /run/cloud-init/cloud.cfg
2019-09-23 13:43:21,577 - util.py[DEBUG]: Attempting to load yaml from string of length 43 with allowed root types (,)
2019-09-23 13:43:21,577 - util.py[DEBUG]: Attempting to load yaml from string of length 0 with allowed root types (,)
2019-09-23 13:43:21,577 - util.py[DEBUG]: loaded blob returned None, returning default.
2019-09-23 13:43:21,578 - stages.py[DEBUG]: Using distro class
2019-09-23 13:43:21,579 - cc_set_hostname.py[DEBUG]: Setting the hostname to management-cluster-controlplane-0 (management-cluster-controlplane-0)
2019-09-23 13:43:21,579 - util.py[DEBUG]: Reading from /etc/hostname (quiet=False)
2019-09-23 13:43:21,579 - util.py[DEBUG]: Read 16 bytes from /etc/hostname
2019-09-23 13:43:21,579 - util.py[DEBUG]: Writing to /etc/hostname - wb: [644] 34 bytes
2019-09-23 13:43:21,580 - __init__.py[DEBUG]: Non-persistently setting the system hostname to management-cluster-controlplane-0
2019-09-23 13:43:21,580 - util.py[DEBUG]: Running command ['hostname', 'management-cluster-controlplane-0'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:21,583 - DataSourceVMwareGuestInfo.py[DEBUG]: using metadata network config
2019-09-23 13:43:21,583 - DataSourceVMwareGuestInfo.py[DEBUG]: using metadata network config
2019-09-23 13:43:21,583 - stages.py[DEBUG]: applying net config names for {'version': 2, 'ethernets': {'id0': {'match': {'macaddress': '00:50:56:97:70:bb'}, 'wakeonlan': True, 'dhcp4': False, 'dhcp6': False, 'addresses': ['10.92.159.240/22'], 'gateway4': '10.92.156.1', 'nameservers': {'addresses': ['10.145.139.41', '10.145.139.42']}}}}
2019-09-23 13:43:21,583 - __init__.py[DEBUG]: no interfaces to rename
2019-09-23 13:43:21,583 - stages.py[INFO]: Applying network configuration from ds bringup=False: {'version': 2, 'ethernets': {'id0': {'match': {'macaddress': '00:50:56:97:70:bb'}, 'wakeonlan': True, 'dhcp4': False, 'dhcp6': False, 'addresses': ['10.92.159.240/22'], 'gateway4': '10.92.156.1', 'nameservers': {'addresses': ['10.145.139.41', '10.145.139.42']}}}}
2019-09-23 13:43:21,586 - __init__.py[DEBUG]: Selected renderer 'netplan' from priority list: None
2019-09-23 13:43:21,586 - network_state.py[DEBUG]: v2(ethernets) -> v1(physical): {'type': 'physical', 'name': 'id0', 'mac_address': '00:50:56:97:70:bb', 'match': {'macaddress': '00:50:56:97:70:bb'}, 'wakeonlan': True, 'subnets': [{'type': 'dhcp4'}, {'type': 'dhcp6'}, {'type': 'static', 'address': '10.92.159.240/22', 'gateway': '10.92.156.1', 'dns_nameservers': ['10.145.139.41', '10.145.139.42']}]}
2019-09-23 13:43:21,589 - network_state.py[DEBUG]: v2_common: handling config: {'id0': {'match': {'macaddress': '00:50:56:97:70:bb'}, 'wakeonlan': True, 'dhcp4': False, 'dhcp6': False, 'addresses': ['10.92.159.240/22'], 'gateway4': '10.92.156.1', 'nameservers': {'addresses': ['10.145.139.41', '10.145.139.42']}}}
2019-09-23 13:43:21,589 - netplan.py[DEBUG]: V2 to V2 passthrough
2019-09-23 13:43:21,590 - util.py[DEBUG]: Writing to /etc/netplan/50-cloud-init.yaml - wb: [644] 703 bytes
2019-09-23 13:43:21,591 - util.py[DEBUG]: Running command ['netplan', 'generate'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:21,710 - util.py[DEBUG]: Running command ['udevadm', 'test-builtin', 'net_setup_link', '/sys/class/net/lo'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:21,713 - util.py[DEBUG]: Running command ['udevadm', 'test-builtin', 'net_setup_link', '/sys/class/net/ens192'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:21,716 - main.py[DEBUG]: [local] Exiting. datasource DataSourceVMwareGuestInfo not in local mode.
2019-09-23 13:43:21,716 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
2019-09-23 13:43:21,716 - util.py[DEBUG]: Read 12 bytes from /proc/uptime
2019-09-23 13:43:21,717 - util.py[DEBUG]: cloud-init mode 'init' took 4.305 seconds (4.30)
2019-09-23 13:43:21,717 - handlers.py[DEBUG]: finish: init-local: SUCCESS: searching for local datasources
2019-09-23 13:43:23,826 - util.py[DEBUG]: Cloud-init v. 18.3-52-gc5f78957-1~bddeb~18.04.1 running 'init' at Mon, 23 Sep 2019 13:43:23 +0000. Up 13.81 seconds.
2019-09-23 13:43:23,826 - main.py[DEBUG]: No kernel command line url found.
2019-09-23 13:43:23,826 - main.py[DEBUG]: Closing stdin.
2019-09-23 13:43:23,828 - util.py[DEBUG]: Writing to /var/log/cloud-init.log - ab: [644] 0 bytes
2019-09-23 13:43:23,828 - util.py[DEBUG]: Changing the ownership of /var/log/cloud-init.log to 102:4
2019-09-23 13:43:23,829 - util.py[DEBUG]: Running command ['ip', 'addr', 'show'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:23,864 - util.py[DEBUG]: Running command ['ip', '-o', 'route', 'list'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:23,866 - util.py[DEBUG]: Running command ['ip', '--oneline', '-6', 'route', 'list', 'table', 'all'] with allowed return codes [0, 1] (shell=False, capture=True)
2019-09-23 13:43:23,869 - main.py[DEBUG]: Checking to see if files that we need already exist from a previous run that would allow us to stop early.
2019-09-23 13:43:23,869 - main.py[DEBUG]: Execution continuing, no previous run detected that would allow us to stop early.
2019-09-23 13:43:23,869 - handlers.py[DEBUG]: start: init-network/check-cache: attempting to read from cache [trust]
2019-09-23 13:43:23,869 - util.py[DEBUG]: Reading from /var/lib/cloud/instance/obj.pkl (quiet=False)
2019-09-23 13:43:23,869 - util.py[DEBUG]: Read 21365 bytes from /var/lib/cloud/instance/obj.pkl
2019-09-23 13:43:23,874 - util.py[DEBUG]: Reading from /run/cloud-init/.instance-id (quiet=False)
2019-09-23 13:43:23,874 - util.py[DEBUG]: Read 34 bytes from /run/cloud-init/.instance-id
2019-09-23 13:43:23,874 - stages.py[DEBUG]: restored from cache with run check: DataSourceVMwareGuestInfo
2019-09-23 13:43:23,874 - handlers.py[DEBUG]: finish: init-network/check-cache: SUCCESS: restored from cache with run check: DataSourceVMwareGuestInfo
2019-09-23 13:43:23,874 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg (quiet=False)
2019-09-23 13:43:23,874 - util.py[DEBUG]: Read 3180 bytes from /etc/cloud/cloud.cfg
2019-09-23 13:43:23,874 - util.py[DEBUG]: Attempting to load yaml from string of length 3180 with allowed root types (,)
2019-09-23 13:43:23,885 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/99-DataSourceVMwareGuestInfo.cfg (quiet=False)
2019-09-23 13:43:23,885 - util.py[DEBUG]: Read 549 bytes from /etc/cloud/cloud.cfg.d/99-DataSourceVMwareGuestInfo.cfg
2019-09-23 13:43:23,885 - util.py[DEBUG]: Attempting to load yaml from string of length 549 with allowed root types (,)
2019-09-23 13:43:23,886 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/05_logging.cfg (quiet=False)
2019-09-23 13:43:23,886 - util.py[DEBUG]: Read 2057 bytes from /etc/cloud/cloud.cfg.d/05_logging.cfg
2019-09-23 13:43:23,886 - util.py[DEBUG]: Attempting to load yaml from string of length 2057 with allowed root types (,)
2019-09-23 13:43:23,889 - util.py[DEBUG]: Reading from /run/cloud-init/cloud.cfg (quiet=False)
2019-09-23 13:43:23,889 - util.py[DEBUG]: Read 43 bytes from /run/cloud-init/cloud.cfg
2019-09-23 13:43:23,889 - util.py[DEBUG]: Attempting to load yaml from string of length 43 with allowed root types (,)
2019-09-23 13:43:23,890 - util.py[DEBUG]: Attempting to load yaml from string of length 0 with allowed root types (,)
2019-09-23 13:43:23,890 - util.py[DEBUG]: loaded blob returned None, returning default.
2019-09-23 13:43:23,891 - util.py[DEBUG]: Attempting to remove /var/lib/cloud/instance
2019-09-23 13:43:23,891 - util.py[DEBUG]: Creating symbolic link from '/var/lib/cloud/instance' => '/var/lib/cloud/instances/management-cluster-controlplane-0'
2019-09-23 13:43:23,892 - util.py[DEBUG]: Reading from /var/lib/cloud/instances/management-cluster-controlplane-0/datasource (quiet=False)
2019-09-23 13:43:23,892 - util.py[DEBUG]: Read 53 bytes from /var/lib/cloud/instances/management-cluster-controlplane-0/datasource
2019-09-23 13:43:23,892 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/datasource - wb: [644] 53 bytes
2019-09-23 13:43:23,892 - util.py[DEBUG]: Writing to /var/lib/cloud/data/previous-datasource - wb: [644] 53 bytes
2019-09-23 13:43:23,893 - util.py[DEBUG]: Reading from /var/lib/cloud/data/instance-id (quiet=False)
2019-09-23 13:43:23,893 - util.py[DEBUG]: Read 34 bytes from /var/lib/cloud/data/instance-id
2019-09-23 13:43:23,893 - stages.py[DEBUG]: previous iid found to be management-cluster-controlplane-0
2019-09-23 13:43:23,893 - util.py[DEBUG]: Writing to /var/lib/cloud/data/instance-id - wb: [644] 34 bytes
2019-09-23 13:43:23,893 - util.py[DEBUG]: Writing to /run/cloud-init/.instance-id - wb: [644] 34 bytes
2019-09-23 13:43:23,894 - util.py[DEBUG]: Writing to /var/lib/cloud/data/previous-instance-id - wb: [644] 34 bytes
2019-09-23 13:43:23,894 - util.py[DEBUG]: Writing to /var/lib/cloud/instance/obj.pkl - wb: [400] 21487 bytes
2019-09-23 13:43:23,895 - main.py[DEBUG]: [net] init will now be targeting instance id: management-cluster-controlplane-0. new=False
2019-09-23 13:43:23,895 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg (quiet=False)
2019-09-23 13:43:23,895 - util.py[DEBUG]: Read 3180 bytes from /etc/cloud/cloud.cfg
2019-09-23 13:43:23,895 - util.py[DEBUG]: Attempting to load yaml from string of length 3180 with allowed root types (,)
2019-09-23 13:43:23,905 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/99-DataSourceVMwareGuestInfo.cfg (quiet=False)
2019-09-23 13:43:23,905 - util.py[DEBUG]: Read 549 bytes from /etc/cloud/cloud.cfg.d/99-DataSourceVMwareGuestInfo.cfg
2019-09-23 13:43:23,905 - util.py[DEBUG]: Attempting to load yaml from string of length 549 with allowed root types (,)
2019-09-23 13:43:23,906 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/05_logging.cfg (quiet=False)
2019-09-23 13:43:23,906 - util.py[DEBUG]: Read 2057 bytes from /etc/cloud/cloud.cfg.d/05_logging.cfg
2019-09-23 13:43:23,906 - util.py[DEBUG]: Attempting to load yaml from string of length 2057 with allowed root types (,)
2019-09-23 13:43:23,910 - util.py[DEBUG]: Reading from /run/cloud-init/cloud.cfg (quiet=False)
2019-09-23 13:43:23,910 - util.py[DEBUG]: Read 43 bytes from /run/cloud-init/cloud.cfg
2019-09-23 13:43:23,910 - util.py[DEBUG]: Attempting to load yaml from string of length 43 with allowed root types (,)
2019-09-23 13:43:23,910 - util.py[DEBUG]: Attempting to load yaml from string of length 0 with allowed root types (,)
2019-09-23 13:43:23,910 - util.py[DEBUG]: loaded blob returned None, returning default.
2019-09-23 13:43:23,911 - DataSourceVMwareGuestInfo.py[DEBUG]: using metadata network config
2019-09-23 13:43:23,911 - DataSourceVMwareGuestInfo.py[DEBUG]: using metadata network config
2019-09-23 13:43:23,912 - stages.py[DEBUG]: applying net config names for {'version': 2, 'ethernets': {'id0': {'match': {'macaddress': '00:50:56:97:70:bb'}, 'wakeonlan': True, 'dhcp4': False, 'dhcp6': False, 'addresses': ['10.92.159.240/22'], 'gateway4': '10.92.156.1', 'nameservers': {'addresses': ['10.145.139.41', '10.145.139.42']}}}}
2019-09-23 13:43:23,912 - stages.py[DEBUG]: Using distro class
2019-09-23 13:43:23,912 - __init__.py[DEBUG]: no interfaces to rename
2019-09-23 13:43:23,912 - __init__.py[DEBUG]: Datasource DataSourceVMwareGuestInfo not updated for events: System boot
2019-09-23 13:43:23,912 - stages.py[DEBUG]: No network config applied. Neither a new instance nor datasource network update on 'System boot' event
2019-09-23 13:43:23,912 - handlers.py[DEBUG]: start: init-network/setup-datasource: setting up datasource
2019-09-23 13:43:23,916 - DataSourceVMwareGuestInfo.py[INFO]: got host-info: {'network': {'interfaces': {'by-mac': OrderedDict([('00:50:56:97:70:bb', {'ipv4': [{'addr': '10.92.159.240', 'netmask': '255.255.252.0', 'broadcast': '10.92.159.255'}], 'ipv6': [{'addr': 'fe80::250:56ff:fe97:70bb%ens192', 'netmask': 'ffff:ffff:ffff:ffff::/64'}]})]), 'by-ipv4': OrderedDict([('10.92.159.240', {'netmask': '255.255.252.0', 'broadcast': '10.92.159.255', 'mac': '00:50:56:97:70:bb'})]), 'by-ipv6': OrderedDict([('fe80::250:56ff:fe97:70bb%ens192', {'netmask': 'ffff:ffff:ffff:ffff::/64', 'mac': '00:50:56:97:70:bb'})])}}, 'hostname': 'management-cluster-controlplane-0', 'local-hostname': 'management-cluster-controlplane-0', 'local-ipv4': '10.92.159.240', 'local-ipv6': 'fe80::250:56ff:fe97:70bb%ens192'}
2019-09-23 13:43:23,917 - handlers.py[DEBUG]: finish: init-network/setup-datasource: SUCCESS: setting up datasource
2019-09-23 13:43:23,917 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/user-data.txt - wb: [600] 15955 bytes
2019-09-23 13:43:23,920 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/user-data.txt.i - wb: [600] 16254 bytes
2019-09-23 13:43:23,921 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/vendor-data.txt - wb: [600] 0 bytes
2019-09-23 13:43:23,922 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/vendor-data.txt.i - wb: [600] 308 bytes
2019-09-23 13:43:23,922 - util.py[DEBUG]: Reading from /var/lib/cloud/data/set-hostname (quiet=False)
2019-09-23 13:43:23,923 - util.py[DEBUG]: Read 99 bytes from /var/lib/cloud/data/set-hostname
2019-09-23 13:43:23,923 - cc_set_hostname.py[DEBUG]: No hostname changes. Skipping set-hostname
2019-09-23 13:43:23,923 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/consume_data - wb: [644] 24 bytes
2019-09-23 13:43:23,923 - helpers.py[DEBUG]: Running consume_data using lock ()
2019-09-23 13:43:23,923 - handlers.py[DEBUG]: start: init-network/consume-user-data: reading and applying user-data
2019-09-23 13:43:23,924 - stages.py[DEBUG]: Added default handler for {'text/cloud-config', 'text/cloud-config-jsonp'} from CloudConfigPartHandler: [['text/cloud-config', 'text/cloud-config-jsonp']]
2019-09-23 13:43:23,924 - stages.py[DEBUG]: Added default handler for {'text/x-shellscript'} from ShellScriptPartHandler: [['text/x-shellscript']]
2019-09-23 13:43:23,924 - stages.py[DEBUG]: Added default handler for {'text/cloud-boothook'} from BootHookPartHandler: [['text/cloud-boothook']]
2019-09-23 13:43:23,924 - stages.py[DEBUG]: Added default handler for {'text/upstart-job'} from UpstartJobPartHandler: [['text/upstart-job']]
2019-09-23 13:43:23,924 - stages.py[DEBUG]: Added default handler for {'text/jinja2'} from JinjaTemplatePartHandler: [['text/jinja2']]
2019-09-23 13:43:23,924 - __init__.py[DEBUG]: Calling handler CloudConfigPartHandler: [['text/cloud-config', 'text/cloud-config-jsonp']] (__begin__, None, 3) with frequency once-per-instance
2019-09-23 13:43:23,924 - __init__.py[DEBUG]: Calling handler ShellScriptPartHandler: [['text/x-shellscript']] (__begin__, None, 2) with frequency once-per-instance
2019-09-23 13:43:23,924 - __init__.py[DEBUG]: Calling handler BootHookPartHandler: [['text/cloud-boothook']] (__begin__, None, 2) with frequency once-per-instance
2019-09-23 13:43:23,924 - __init__.py[DEBUG]: Calling handler UpstartJobPartHandler: [['text/upstart-job']] (__begin__, None, 2) with frequency once-per-instance
2019-09-23 13:43:23,924 - __init__.py[DEBUG]: Calling handler JinjaTemplatePartHandler: [['text/jinja2']] (__begin__, None, 3) with frequency once-per-instance
2019-09-23 13:43:23,924 - __init__.py[DEBUG]: {'MIME-Version': '1.0', 'Content-Type': 'text/jinja2', 'Content-Disposition': 'attachment; filename="part-001"'}
2019-09-23 13:43:23,925 - __init__.py[DEBUG]: Calling handler JinjaTemplatePartHandler: [['text/jinja2']] (text/jinja2, part-001, 3) with frequency once-per-instance
2019-09-23 13:43:23,925 - util.py[DEBUG]: Reading from /run/cloud-init/instance-data.json (quiet=False)
2019-09-23 13:43:23,925 - util.py[DEBUG]: Read 18011 bytes from /run/cloud-init/instance-data.json
2019-09-23 13:43:23,933 - util.py[DEBUG]: Attempting to load yaml from string of length 15960 with allowed root types (,)
2019-09-23 13:43:23,952 - cloud_config.py[DEBUG]: Merging by applying [('dict', ['replace']), ('list', []), ('str', [])]
2019-09-23 13:43:23,952 - __init__.py[DEBUG]: Calling handler CloudConfigPartHandler: [['text/cloud-config', 'text/cloud-config-jsonp']] (__end__, None, 3) with frequency once-per-instance
2019-09-23 13:43:23,971 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/cloud-config.txt - wb: [600] 16347 bytes
2019-09-23 13:43:23,971 - __init__.py[DEBUG]: Calling handler ShellScriptPartHandler: [['text/x-shellscript']] (__end__, None, 2) with frequency once-per-instance
2019-09-23 13:43:23,971 - __init__.py[DEBUG]: Calling handler BootHookPartHandler: [['text/cloud-boothook']] (__end__, None, 2) with frequency once-per-instance
2019-09-23 13:43:23,971 - __init__.py[DEBUG]: Calling handler UpstartJobPartHandler: [['text/upstart-job']] (__end__, None, 2) with frequency once-per-instance
2019-09-23 13:43:23,971 - __init__.py[DEBUG]: Calling handler JinjaTemplatePartHandler: [['text/jinja2']] (__end__, None, 3) with frequency once-per-instance
2019-09-23 13:43:23,971 - handlers.py[DEBUG]: finish: init-network/consume-user-data: SUCCESS: reading and applying user-data
2019-09-23 13:43:23,971 - handlers.py[DEBUG]: start: init-network/consume-vendor-data: reading and applying vendor-data
2019-09-23 13:43:23,972 - stages.py[DEBUG]: no vendordata from datasource
2019-09-23 13:43:23,972 - handlers.py[DEBUG]: finish: init-network/consume-vendor-data: SUCCESS: reading and applying vendor-data
2019-09-23 13:43:23,972 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg (quiet=False)
2019-09-23 13:43:23,972 - util.py[DEBUG]: Read 3180 bytes from /etc/cloud/cloud.cfg
2019-09-23 13:43:23,972 - util.py[DEBUG]: Attempting to load yaml from string of length 3180 with allowed root types (,)
2019-09-23 13:43:23,982 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/99-DataSourceVMwareGuestInfo.cfg (quiet=False)
2019-09-23 13:43:23,982 - util.py[DEBUG]: Read 549 bytes from /etc/cloud/cloud.cfg.d/99-DataSourceVMwareGuestInfo.cfg
2019-09-23 13:43:23,982 - util.py[DEBUG]: Attempting to load yaml from string of length 549 with allowed root types (,)
2019-09-23 13:43:23,983 - util.py[DEBUG]: Reading from /etc/cloud/cloud.cfg.d/05_logging.cfg (quiet=False)
2019-09-23 13:43:23,983 - util.py[DEBUG]: Read 2057 bytes from /etc/cloud/cloud.cfg.d/05_logging.cfg
2019-09-23 13:43:23,983 - util.py[DEBUG]: Attempting to load yaml from string of length 2057 with allowed root types (,)
2019-09-23 13:43:23,986 - util.py[DEBUG]: Reading from /run/cloud-init/cloud.cfg (quiet=False)
2019-09-23 13:43:23,987 - util.py[DEBUG]: Read 43 bytes from /run/cloud-init/cloud.cfg
2019-09-23 13:43:23,987 - util.py[DEBUG]: Attempting to load yaml from string of length 43 with allowed root types (,)
2019-09-23 13:43:23,987 - util.py[DEBUG]: Attempting to load yaml from string of length 0 with allowed root types (,)
2019-09-23 13:43:23,987 - util.py[DEBUG]: loaded blob returned None, returning default.
2019-09-23 13:43:23,987 - util.py[DEBUG]: Reading from /var/lib/cloud/instance/cloud-config.txt (quiet=False)
2019-09-23 13:43:23,988 - util.py[DEBUG]: Read 16347 bytes from /var/lib/cloud/instance/cloud-config.txt
2019-09-23 13:43:23,988 - util.py[DEBUG]: Attempting to load yaml from string of length 16347 with allowed root types (,)
2019-09-23 13:43:24,009 - util.py[DEBUG]: Reading from /var/lib/cloud/instance/cloud-config.txt (quiet=False)
2019-09-23 13:43:24,009 - util.py[DEBUG]: Read 16347 bytes from /var/lib/cloud/instance/cloud-config.txt
2019-09-23 13:43:24,009 - util.py[DEBUG]: Attempting to load yaml from string of length 16347 with allowed root types (,)
2019-09-23 13:43:24,031 - handlers.py[DEBUG]: start: init-network/activate-datasource: activating datasource
2019-09-23 13:43:24,032 - util.py[DEBUG]: Writing to /var/lib/cloud/instance/obj.pkl - wb: [400] 37479 bytes
2019-09-23 13:43:24,033 - handlers.py[DEBUG]: finish: init-network/activate-datasource: SUCCESS: activating datasource
2019-09-23 13:43:24,033 - main.py[DEBUG]: no di_report found in config.
2019-09-23 13:43:24,048 - stages.py[DEBUG]: Using distro class
2019-09-23 13:43:24,048 - stages.py[DEBUG]: Running module migrator () with frequency always
2019-09-23 13:43:24,048 - handlers.py[DEBUG]: start: init-network/config-migrator: running config-migrator with frequency always
2019-09-23 13:43:24,048 - helpers.py[DEBUG]: Running config-migrator using lock ()
2019-09-23 13:43:24,049 - cc_migrator.py[DEBUG]: Migrated 0 semaphore files to there canonicalized names
2019-09-23 13:43:24,049 - handlers.py[DEBUG]: finish: init-network/config-migrator: SUCCESS: config-migrator ran successfully
2019-09-23 13:43:24,049 - stages.py[DEBUG]: Running module seed_random () with frequency once-per-instance
2019-09-23 13:43:24,049 - handlers.py[DEBUG]: start: init-network/config-seed_random: running config-seed_random with frequency once-per-instance
2019-09-23 13:43:24,049 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_seed_random - wb: [644] 23 bytes
2019-09-23 13:43:24,050 - helpers.py[DEBUG]: Running config-seed_random using lock ()
2019-09-23 13:43:24,050 - cc_seed_random.py[DEBUG]: no command provided
2019-09-23 13:43:24,050 - handlers.py[DEBUG]: finish: init-network/config-seed_random: SUCCESS: config-seed_random ran successfully
2019-09-23 13:43:24,050 - stages.py[DEBUG]: Running module bootcmd () with frequency always
2019-09-23 13:43:24,050 - handlers.py[DEBUG]: start: init-network/config-bootcmd: running config-bootcmd with frequency always
2019-09-23 13:43:24,050 - helpers.py[DEBUG]: Running config-bootcmd using lock ()
2019-09-23 13:43:24,050 - cc_bootcmd.py[DEBUG]: Skipping module named bootcmd, no 'bootcmd' key in configuration
2019-09-23 13:43:24,050 - handlers.py[DEBUG]: finish: init-network/config-bootcmd: SUCCESS: config-bootcmd ran successfully
2019-09-23 13:43:24,050 - stages.py[DEBUG]: Running module write-files () with frequency once-per-instance
2019-09-23 13:43:24,051 - handlers.py[DEBUG]: start: init-network/config-write-files: running config-write-files with frequency once-per-instance
2019-09-23 13:43:24,051 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_write_files - wb: [644] 24 bytes
2019-09-23 13:43:24,051 - helpers.py[DEBUG]: Running config-write-files using lock ()
2019-09-23 13:43:24,052 - util.py[DEBUG]: Writing to /etc/kubernetes/pki/ca.crt - wb: [640] 1029 bytes
2019-09-23 13:43:24,053 - util.py[DEBUG]: Changing the ownership of /etc/kubernetes/pki/ca.crt to 0:0
2019-09-23 13:43:24,053 - util.py[DEBUG]: Writing to /etc/kubernetes/pki/ca.key - wb: [600] 1679 bytes
2019-09-23 13:43:24,053 - util.py[DEBUG]: Changing the ownership of /etc/kubernetes/pki/ca.key to 0:0
2019-09-23 13:43:24,053 - util.py[DEBUG]: Writing to /etc/kubernetes/pki/etcd/ca.crt - wb: [640] 1029 bytes
2019-09-23 13:43:24,054 - util.py[DEBUG]: Changing the ownership of /etc/kubernetes/pki/etcd/ca.crt to 0:0
2019-09-23 13:43:24,054 - util.py[DEBUG]: Writing to /etc/kubernetes/pki/etcd/ca.key - wb: [600] 1679 bytes
2019-09-23 13:43:24,054 - util.py[DEBUG]: Changing the ownership of /etc/kubernetes/pki/etcd/ca.key to 0:0
2019-09-23 13:43:24,054 - util.py[DEBUG]: Writing to /etc/kubernetes/pki/front-proxy-ca.crt - wb: [640] 1029 bytes
2019-09-23 13:43:24,055 - util.py[DEBUG]: Changing the ownership of /etc/kubernetes/pki/front-proxy-ca.crt to 0:0
2019-09-23 13:43:24,055 - util.py[DEBUG]: Writing to /etc/kubernetes/pki/front-proxy-ca.key - wb: [600] 1679 bytes
2019-09-23 13:43:24,055 - util.py[DEBUG]: Changing the ownership of /etc/kubernetes/pki/front-proxy-ca.key to 0:0
2019-09-23 13:43:24,055 - util.py[DEBUG]: Writing to /etc/kubernetes/pki/sa.pub - wb: [640] 451 bytes
2019-09-23 13:43:24,056 - util.py[DEBUG]: Changing the ownership of /etc/kubernetes/pki/sa.pub to 0:0
2019-09-23 13:43:24,056 - util.py[DEBUG]: Writing to /etc/kubernetes/pki/sa.key - wb: [600] 1675 bytes
2019-09-23 13:43:24,057 - util.py[DEBUG]: Changing the ownership of /etc/kubernetes/pki/sa.key to 0:0
2019-09-23 13:43:24,057 - util.py[DEBUG]: Writing to /etc/kubernetes/vsphere.conf - wb: [600] 412 bytes
2019-09-23 13:43:24,057 - util.py[DEBUG]: Changing the ownership of /etc/kubernetes/vsphere.conf to 0:0
2019-09-23 13:43:24,057 - util.py[DEBUG]: Writing to /tmp/kubeadm.yaml - wb: [640] 1126 bytes
2019-09-23 13:43:24,057 - util.py[DEBUG]: Changing the ownership of /tmp/kubeadm.yaml to 0:0
2019-09-23 13:43:24,058 - handlers.py[DEBUG]: finish: init-network/config-write-files: SUCCESS: config-write-files ran successfully
2019-09-23 13:43:24,058 - stages.py[DEBUG]: Running module growpart () with frequency always
2019-09-23 13:43:24,058 - handlers.py[DEBUG]: start: init-network/config-growpart: running config-growpart with frequency always
2019-09-23 13:43:24,058 - helpers.py[DEBUG]: Running config-growpart using lock ()
2019-09-23 13:43:24,058 - cc_growpart.py[DEBUG]: No 'growpart' entry in cfg. Using default: {'mode': 'auto', 'devices': ['/'], 'ignore_growroot_disabled': False}
2019-09-23 13:43:24,058 - util.py[DEBUG]: Running command ['growpart', '--help'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:24,066 - util.py[DEBUG]: Reading from /proc/636/mountinfo (quiet=False)
2019-09-23 13:43:24,067 - util.py[DEBUG]: Read 2966 bytes from /proc/636/mountinfo
2019-09-23 13:43:24,067 - util.py[DEBUG]: Running command ['systemd-detect-virt', '--quiet', '--container'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:24,070 - util.py[DEBUG]: Running command ['running-in-container'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:24,071 - util.py[DEBUG]: Running command ['lxc-is-container'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:24,073 - util.py[DEBUG]: Reading from /proc/1/environ (quiet=False)
2019-09-23 13:43:24,073 - util.py[DEBUG]: Read 176 bytes from /proc/1/environ
2019-09-23 13:43:24,073 - util.py[DEBUG]: Reading from /proc/self/status (quiet=False)
2019-09-23 13:43:24,073 - util.py[DEBUG]: Read 1291 bytes from /proc/self/status
2019-09-23 13:43:24,074 - util.py[DEBUG]: Reading from /etc/os-release (quiet=False)
2019-09-23 13:43:24,075 - util.py[DEBUG]: Read 386 bytes from /etc/os-release
2019-09-23 13:43:24,075 - util.py[DEBUG]: Reading from /sys/class/block/sda1/partition (quiet=False)
2019-09-23 13:43:24,075 - util.py[DEBUG]: Read 2 bytes from /sys/class/block/sda1/partition
2019-09-23 13:43:24,075 - util.py[DEBUG]: Reading from /sys/devices/pci0000:00/0000:00:10.0/host2/target2:0:0/2:0:0:0/block/sda/dev (quiet=False)
2019-09-23 13:43:24,076 - util.py[DEBUG]: Read 4 bytes from /sys/devices/pci0000:00/0000:00:10.0/host2/target2:0:0/2:0:0:0/block/sda/dev
2019-09-23 13:43:24,076 - util.py[DEBUG]: Running command ['growpart', '--dry-run', '/dev/sda', '1'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:24,176 - util.py[DEBUG]: resize_devices took 0.110 seconds
2019-09-23 13:43:24,176 - cc_growpart.py[DEBUG]: '/' NOCHANGE: no change necessary (/dev/sda, 1)
2019-09-23 13:43:24,177 - handlers.py[DEBUG]: finish: init-network/config-growpart: SUCCESS: config-growpart ran successfully
2019-09-23 13:43:24,177 - stages.py[DEBUG]: Running module resizefs () with frequency always
2019-09-23 13:43:24,177 - handlers.py[DEBUG]: start: init-network/config-resizefs: running config-resizefs with frequency always
2019-09-23 13:43:24,177 - helpers.py[DEBUG]: Running config-resizefs using lock ()
2019-09-23 13:43:24,212 - util.py[DEBUG]: Reading from /proc/636/mountinfo (quiet=False)
2019-09-23 13:43:24,212 - util.py[DEBUG]: Read 2966 bytes from /proc/636/mountinfo
2019-09-23 13:43:24,212 - cc_resizefs.py[DEBUG]: resize_info: dev=/dev/sda1 mnt_point=/ path=/
2019-09-23 13:43:24,212 - util.py[DEBUG]: Running command ['systemd-detect-virt', '--quiet', '--container'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:24,215 - util.py[DEBUG]: Running command ['running-in-container'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:24,217 - util.py[DEBUG]: Running command ['lxc-is-container'] with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:24,218 - util.py[DEBUG]: Reading from /proc/1/environ (quiet=False)
2019-09-23 13:43:24,218 - util.py[DEBUG]: Read 176 bytes from /proc/1/environ
2019-09-23 13:43:24,218 - util.py[DEBUG]: Reading from /proc/self/status (quiet=False)
2019-09-23 13:43:24,218 - util.py[DEBUG]: Read 1291 bytes from /proc/self/status
2019-09-23 13:43:24,219 - cc_resizefs.py[DEBUG]: Resizing / (ext4) using resize2fs /dev/sda1
2019-09-23 13:43:24,219 - util.py[DEBUG]: Running command ('resize2fs', '/dev/sda1') with allowed return codes [0] (shell=False, capture=True)
2019-09-23 13:43:24,782 - util.py[DEBUG]: Resizing took 0.563 seconds
2019-09-23 13:43:24,782 - cc_resizefs.py[DEBUG]: Resized root filesystem (type=ext4, val=True)
2019-09-23 13:43:24,782 - handlers.py[DEBUG]: finish: init-network/config-resizefs: SUCCESS: config-resizefs ran successfully
2019-09-23 13:43:24,782 - stages.py[DEBUG]: Running module disk_setup () with frequency once-per-instance
2019-09-23 13:43:24,783 - handlers.py[DEBUG]: start: init-network/config-disk_setup: running config-disk_setup with frequency once-per-instance
2019-09-23 13:43:24,783 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_disk_setup - wb: [644] 24 bytes
2019-09-23 13:43:24,784 - helpers.py[DEBUG]: Running config-disk_setup using lock ()
2019-09-23 13:43:24,784 - handlers.py[DEBUG]: finish: init-network/config-disk_setup: SUCCESS: config-disk_setup ran successfully
2019-09-23 13:43:24,784 - stages.py[DEBUG]: Running module mounts () with frequency once-per-instance
2019-09-23 13:43:24,784 - handlers.py[DEBUG]: start: init-network/config-mounts: running config-mounts with frequency once-per-instance
2019-09-23 13:43:24,784 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_mounts - wb: [644] 24 bytes
2019-09-23 13:43:24,785 - helpers.py[DEBUG]: Running config-mounts using lock ()
2019-09-23 13:43:24,785 - cc_mounts.py[DEBUG]: mounts configuration is []
2019-09-23 13:43:24,785 - util.py[DEBUG]: Reading from /etc/fstab (quiet=False)
2019-09-23 13:43:24,785 - util.py[DEBUG]: Read 551 bytes from /etc/fstab
2019-09-23 13:43:24,785 - cc_mounts.py[DEBUG]: Attempting to determine the real name of ephemeral0
2019-09-23 13:43:24,785 - cc_mounts.py[DEBUG]: changed default device ephemeral0 => None
2019-09-23 13:43:24,785 - cc_mounts.py[DEBUG]: Ignoring nonexistent default named mount ephemeral0
2019-09-23 13:43:24,785 - cc_mounts.py[DEBUG]: Attempting to determine the real name of swap
2019-09-23 13:43:24,785 - cc_mounts.py[DEBUG]: changed default device swap => None
2019-09-23 13:43:24,785 - cc_mounts.py[DEBUG]: Ignoring nonexistent default named mount swap
2019-09-23 13:43:24,785 - cc_mounts.py[DEBUG]: no need to setup swap
2019-09-23 13:43:24,786 - cc_mounts.py[DEBUG]: No modifications to fstab needed
2019-09-23 13:43:24,786 - handlers.py[DEBUG]: finish: init-network/config-mounts: SUCCESS: config-mounts ran successfully
2019-09-23 13:43:24,786 - stages.py[DEBUG]: Running module set_hostname () with frequency once-per-instance
2019-09-23 13:43:24,786 - handlers.py[DEBUG]: start: init-network/config-set_hostname: running config-set_hostname with frequency once-per-instance
2019-09-23 13:43:24,786 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_set_hostname - wb: [644] 24 bytes
2019-09-23 13:43:24,786 - helpers.py[DEBUG]: Running config-set_hostname using lock ()
2019-09-23 13:43:24,786 - util.py[DEBUG]: Reading from /var/lib/cloud/data/set-hostname (quiet=False)
2019-09-23 13:43:24,787 - util.py[DEBUG]: Read 99 bytes from /var/lib/cloud/data/set-hostname
2019-09-23 13:43:24,787 - cc_set_hostname.py[DEBUG]: No hostname changes. Skipping set-hostname
2019-09-23 13:43:24,787 - handlers.py[DEBUG]: finish: init-network/config-set_hostname: SUCCESS: config-set_hostname ran successfully
2019-09-23 13:43:24,787 - stages.py[DEBUG]: Running module update_hostname () with frequency always
2019-09-23 13:43:24,787 - handlers.py[DEBUG]: start: init-network/config-update_hostname: running config-update_hostname with frequency always
2019-09-23 13:43:24,787 - helpers.py[DEBUG]: Running config-update_hostname using lock ()
2019-09-23 13:43:24,787 - cc_update_hostname.py[DEBUG]: Updating hostname to management-cluster-controlplane-0 (management-cluster-controlplane-0)
2019-09-23 13:43:24,787 - util.py[DEBUG]: Reading from /etc/hostname (quiet=False)
2019-09-23 13:43:24,787 - util.py[DEBUG]: Read 34 bytes from /etc/hostname
2019-09-23 13:43:24,787 - __init__.py[DEBUG]: Attempting to update hostname to management-cluster-controlplane-0 in 1 files
2019-09-23 13:43:24,787 - util.py[DEBUG]: Reading from /var/lib/cloud/data/previous-hostname (quiet=False)
2019-09-23 13:43:24,788 - util.py[DEBUG]: Writing to /var/lib/cloud/data/previous-hostname - wb: [644] 34 bytes
2019-09-23 13:43:24,788 - handlers.py[DEBUG]: finish: init-network/config-update_hostname: SUCCESS: config-update_hostname ran successfully
2019-09-23 13:43:24,788 - stages.py[DEBUG]: Running module update_etc_hosts () with frequency always
2019-09-23 13:43:24,788 - handlers.py[DEBUG]: start: init-network/config-update_etc_hosts: running config-update_etc_hosts with frequency always
2019-09-23 13:43:24,788 - helpers.py[DEBUG]: Running config-update_etc_hosts using lock ()
2019-09-23 13:43:24,788 - cc_update_etc_hosts.py[DEBUG]: Configuration option 'manage_etc_hosts' is not set, not managing /etc/hosts in module update_etc_hosts
2019-09-23 13:43:24,788 - handlers.py[DEBUG]: finish: init-network/config-update_etc_hosts: SUCCESS: config-update_etc_hosts ran successfully
2019-09-23 13:43:24,788 -
stages.py[DEBUG]: Running module ca-certs () with frequency once-per-instance 2019-09-23 13:43:24,789 - handlers.py[DEBUG]: start: init-network/config-ca-certs: running config-ca-certs with frequency once-per-instance 2019-09-23 13:43:24,789 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_ca_certs - wb: [644] 24 bytes 2019-09-23 13:43:24,789 - helpers.py[DEBUG]: Running config-ca-certs using lock () 2019-09-23 13:43:24,789 - cc_ca_certs.py[DEBUG]: Skipping module named ca-certs, no 'ca-certs' key in configuration 2019-09-23 13:43:24,789 - handlers.py[DEBUG]: finish: init-network/config-ca-certs: SUCCESS: config-ca-certs ran successfully 2019-09-23 13:43:24,789 - stages.py[DEBUG]: Running module rsyslog () with frequency once-per-instance 2019-09-23 13:43:24,790 - handlers.py[DEBUG]: start: init-network/config-rsyslog: running config-rsyslog with frequency once-per-instance 2019-09-23 13:43:24,790 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_rsyslog - wb: [644] 23 bytes 2019-09-23 13:43:24,790 - helpers.py[DEBUG]: Running config-rsyslog using lock () 2019-09-23 13:43:24,790 - cc_rsyslog.py[DEBUG]: Skipping module named rsyslog, no 'rsyslog' key in configuration 2019-09-23 13:43:24,790 - handlers.py[DEBUG]: finish: init-network/config-rsyslog: SUCCESS: config-rsyslog ran successfully 2019-09-23 13:43:24,790 - stages.py[DEBUG]: Running module users-groups () with frequency once-per-instance 2019-09-23 13:43:24,791 - handlers.py[DEBUG]: start: init-network/config-users-groups: running config-users-groups with frequency once-per-instance 2019-09-23 13:43:24,791 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_users_groups - wb: [644] 23 bytes 2019-09-23 13:43:24,791 - helpers.py[DEBUG]: Running config-users-groups using lock () 2019-09-23 13:43:24,792 - util.py[DEBUG]: Reading from /etc/os-release (quiet=True) 
2019-09-23 13:43:24,792 - util.py[DEBUG]: Read 386 bytes from /etc/os-release 2019-09-23 13:43:24,792 - util.py[DEBUG]: Reading from /etc/system-image/channel.ini (quiet=True) 2019-09-23 13:43:24,792 - util.py[DEBUG]: Read 0 bytes from /etc/system-image/channel.ini 2019-09-23 13:43:24,792 - __init__.py[DEBUG]: Adding user capv 2019-09-23 13:43:24,792 - util.py[DEBUG]: Running hidden command to protect sensitive input/output logstring: ['useradd', 'capv', '-m'] 2019-09-23 13:43:25,114 - util.py[DEBUG]: Running command ['passwd', '-l', 'capv'] with allowed return codes [0] (shell=False, capture=True) 2019-09-23 13:43:25,161 - util.py[DEBUG]: Reading from /etc/sudoers (quiet=False) 2019-09-23 13:43:25,167 - util.py[DEBUG]: Read 755 bytes from /etc/sudoers 2019-09-23 13:43:25,168 - util.py[DEBUG]: Writing to /etc/sudoers.d/90-cloud-init-users - wb: [440] 147 bytes 2019-09-23 13:43:25,168 - util.py[DEBUG]: Changing the ownership of /home/capv/.ssh to 1001:1001 2019-09-23 13:43:25,169 - util.py[DEBUG]: Reading from /etc/ssh/sshd_config (quiet=False) 2019-09-23 13:43:25,173 - util.py[DEBUG]: Read 3264 bytes from /etc/ssh/sshd_config 2019-09-23 13:43:25,173 - util.py[DEBUG]: Writing to /home/capv/.ssh/authorized_keys - wb: [600] 463 bytes 2019-09-23 13:43:25,174 - util.py[DEBUG]: Changing the ownership of /home/capv/.ssh/authorized_keys to 1001:1001 2019-09-23 13:43:25,174 - util.py[DEBUG]: Reading from /etc/os-release (quiet=True) 2019-09-23 13:43:25,174 - util.py[DEBUG]: Read 386 bytes from /etc/os-release 2019-09-23 13:43:25,174 - util.py[DEBUG]: Reading from /etc/system-image/channel.ini (quiet=True) 2019-09-23 13:43:25,175 - util.py[DEBUG]: Read 0 bytes from /etc/system-image/channel.ini 2019-09-23 13:43:25,175 - __init__.py[DEBUG]: Adding user maxr 2019-09-23 13:43:25,175 - util.py[DEBUG]: Running hidden command to protect sensitive input/output logstring: ['useradd', 'maxr', '-m'] 2019-09-23 13:43:25,357 - util.py[DEBUG]: Running command ['passwd', '-l', 'maxr'] 
with allowed return codes [0] (shell=False, capture=True) 2019-09-23 13:43:25,393 - util.py[DEBUG]: Reading from /etc/sudoers (quiet=False) 2019-09-23 13:43:25,393 - util.py[DEBUG]: Read 755 bytes from /etc/sudoers 2019-09-23 13:43:25,394 - util.py[DEBUG]: Writing to /etc/sudoers.d/90-cloud-init-users - ab: [None] 51 bytes 2019-09-23 13:43:25,394 - util.py[DEBUG]: Changing the ownership of /home/maxr/.ssh to 1002:1002 2019-09-23 13:43:25,395 - util.py[DEBUG]: Reading from /etc/ssh/sshd_config (quiet=False) 2019-09-23 13:43:25,395 - util.py[DEBUG]: Read 3264 bytes from /etc/ssh/sshd_config 2019-09-23 13:43:25,395 - util.py[DEBUG]: Writing to /home/maxr/.ssh/authorized_keys - wb: [600] 392 bytes 2019-09-23 13:43:25,396 - util.py[DEBUG]: Changing the ownership of /home/maxr/.ssh/authorized_keys to 1002:1002 2019-09-23 13:43:25,396 - handlers.py[DEBUG]: finish: init-network/config-users-groups: SUCCESS: config-users-groups ran successfully 2019-09-23 13:43:25,396 - stages.py[DEBUG]: Running module ssh () with frequency once-per-instance 2019-09-23 13:43:25,396 - handlers.py[DEBUG]: start: init-network/config-ssh: running config-ssh with frequency once-per-instance 2019-09-23 13:43:25,396 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_ssh - wb: [644] 23 bytes 2019-09-23 13:43:25,397 - helpers.py[DEBUG]: Running config-ssh using lock () 2019-09-23 13:43:25,397 - util.py[DEBUG]: Running command ['ssh-keygen', '-t', 'rsa', '-N', '', '-f', '/etc/ssh/ssh_host_rsa_key'] with allowed return codes [0] (shell=False, capture=True) 2019-09-23 13:43:25,521 - util.py[DEBUG]: Running command ['ssh-keygen', '-t', 'dsa', '-N', '', '-f', '/etc/ssh/ssh_host_dsa_key'] with allowed return codes [0] (shell=False, capture=True) 2019-09-23 13:43:25,676 - util.py[DEBUG]: Running command ['ssh-keygen', '-t', 'ecdsa', '-N', '', '-f', '/etc/ssh/ssh_host_ecdsa_key'] with allowed return codes [0] (shell=False, capture=True) 2019-09-23 13:43:25,680 
- util.py[DEBUG]: Running command ['ssh-keygen', '-t', 'ed25519', '-N', '', '-f', '/etc/ssh/ssh_host_ed25519_key'] with allowed return codes [0] (shell=False, capture=True) 2019-09-23 13:43:25,685 - util.py[DEBUG]: Changing the ownership of /root/.ssh to 0:0 2019-09-23 13:43:25,686 - util.py[DEBUG]: Reading from /etc/ssh/sshd_config (quiet=False) 2019-09-23 13:43:25,686 - util.py[DEBUG]: Read 3264 bytes from /etc/ssh/sshd_config 2019-09-23 13:43:25,686 - util.py[DEBUG]: Writing to /root/.ssh/authorized_keys - wb: [600] 0 bytes 2019-09-23 13:43:25,687 - util.py[DEBUG]: Changing the ownership of /root/.ssh/authorized_keys to 0:0 2019-09-23 13:43:25,687 - handlers.py[DEBUG]: finish: init-network/config-ssh: SUCCESS: config-ssh ran successfully 2019-09-23 13:43:25,687 - main.py[DEBUG]: Ran 15 modules with 0 failures 2019-09-23 13:43:25,687 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False) 2019-09-23 13:43:25,688 - util.py[DEBUG]: Read 12 bytes from /proc/uptime 2019-09-23 13:43:25,688 - util.py[DEBUG]: cloud-init mode 'init' took 1.889 seconds (1.89) 2019-09-23 13:43:25,688 - handlers.py[DEBUG]: finish: init-network: SUCCESS: searching for network datasources 2019-09-23 13:43:26,646 - util.py[DEBUG]: Cloud-init v. 18.3-52-gc5f78957-1~bddeb~18.04.1 running 'modules:config' at Mon, 23 Sep 2019 13:43:26 +0000. Up 16.54 seconds. 
2019-09-23 13:43:26,970 - stages.py[DEBUG]: Using distro class 2019-09-23 13:43:26,970 - stages.py[DEBUG]: Running module emit_upstart () with frequency always 2019-09-23 13:43:26,971 - handlers.py[DEBUG]: start: modules-config/config-emit_upstart: running config-emit_upstart with frequency always 2019-09-23 13:43:26,971 - helpers.py[DEBUG]: Running config-emit_upstart using lock () 2019-09-23 13:43:26,971 - cc_emit_upstart.py[DEBUG]: no /sbin/initctl located 2019-09-23 13:43:26,971 - cc_emit_upstart.py[DEBUG]: not upstart system, 'emit_upstart' disabled 2019-09-23 13:43:26,971 - handlers.py[DEBUG]: finish: modules-config/config-emit_upstart: SUCCESS: config-emit_upstart ran successfully 2019-09-23 13:43:26,971 - stages.py[DEBUG]: Running module snap () with frequency once-per-instance 2019-09-23 13:43:26,971 - handlers.py[DEBUG]: start: modules-config/config-snap: running config-snap with frequency once-per-instance 2019-09-23 13:43:26,972 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_snap - wb: [644] 23 bytes 2019-09-23 13:43:26,972 - helpers.py[DEBUG]: Running config-snap using lock () 2019-09-23 13:43:26,972 - cc_snap.py[DEBUG]: Skipping module named snap, no 'snap' key in configuration 2019-09-23 13:43:26,972 - handlers.py[DEBUG]: finish: modules-config/config-snap: SUCCESS: config-snap ran successfully 2019-09-23 13:43:26,972 - stages.py[DEBUG]: Running module snap_config () with frequency once-per-instance 2019-09-23 13:43:26,972 - handlers.py[DEBUG]: start: modules-config/config-snap_config: running config-snap_config with frequency once-per-instance 2019-09-23 13:43:26,972 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_snap_config - wb: [644] 23 bytes 2019-09-23 13:43:26,973 - helpers.py[DEBUG]: Running config-snap_config using lock () 2019-09-23 13:43:26,973 - cc_snap_config.py[DEBUG]: No snappy config provided, skipping 2019-09-23 13:43:26,973 - 
handlers.py[DEBUG]: finish: modules-config/config-snap_config: SUCCESS: config-snap_config ran successfully 2019-09-23 13:43:26,973 - stages.py[DEBUG]: Running module ssh-import-id () with frequency once-per-instance 2019-09-23 13:43:26,973 - handlers.py[DEBUG]: start: modules-config/config-ssh-import-id: running config-ssh-import-id with frequency once-per-instance 2019-09-23 13:43:26,973 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_ssh_import_id - wb: [644] 24 bytes 2019-09-23 13:43:26,974 - helpers.py[DEBUG]: Running config-ssh-import-id using lock () 2019-09-23 13:43:26,974 - cc_ssh_import_id.py[DEBUG]: User capv is not configured for ssh_import_id 2019-09-23 13:43:26,974 - cc_ssh_import_id.py[DEBUG]: User maxr is not configured for ssh_import_id 2019-09-23 13:43:26,974 - handlers.py[DEBUG]: finish: modules-config/config-ssh-import-id: SUCCESS: config-ssh-import-id ran successfully 2019-09-23 13:43:26,974 - stages.py[DEBUG]: Running module locale () with frequency once-per-instance 2019-09-23 13:43:26,974 - handlers.py[DEBUG]: start: modules-config/config-locale: running config-locale with frequency once-per-instance 2019-09-23 13:43:26,974 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_locale - wb: [644] 23 bytes 2019-09-23 13:43:26,975 - helpers.py[DEBUG]: Running config-locale using lock () 2019-09-23 13:43:26,975 - util.py[DEBUG]: Reading from /etc/default/locale (quiet=False) 2019-09-23 13:43:26,975 - util.py[DEBUG]: Read 36 bytes from /etc/default/locale 2019-09-23 13:43:26,975 - cc_locale.py[DEBUG]: Setting locale to en_US.utf8 2019-09-23 13:43:26,975 - debian.py[DEBUG]: System has 'LANG=en_US.utf8' requested 'en_US.utf8', skipping regeneration. 
2019-09-23 13:43:26,975 - handlers.py[DEBUG]: finish: modules-config/config-locale: SUCCESS: config-locale ran successfully 2019-09-23 13:43:26,975 - stages.py[DEBUG]: Running module set-passwords () with frequency once-per-instance 2019-09-23 13:43:26,976 - handlers.py[DEBUG]: start: modules-config/config-set-passwords: running config-set-passwords with frequency once-per-instance 2019-09-23 13:43:26,976 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_set_passwords - wb: [644] 24 bytes 2019-09-23 13:43:26,976 - helpers.py[DEBUG]: Running config-set-passwords using lock () 2019-09-23 13:43:26,976 - cc_set_passwords.py[DEBUG]: Leaving ssh config 'PasswordAuthentication' unchanged. ssh_pwauth=None 2019-09-23 13:43:26,976 - handlers.py[DEBUG]: finish: modules-config/config-set-passwords: SUCCESS: config-set-passwords ran successfully 2019-09-23 13:43:26,976 - stages.py[DEBUG]: Running module grub-dpkg () with frequency once-per-instance 2019-09-23 13:43:26,976 - handlers.py[DEBUG]: start: modules-config/config-grub-dpkg: running config-grub-dpkg with frequency once-per-instance 2019-09-23 13:43:26,977 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_grub_dpkg - wb: [644] 23 bytes 2019-09-23 13:43:26,977 - helpers.py[DEBUG]: Running config-grub-dpkg using lock () 2019-09-23 13:43:26,977 - cc_grub_dpkg.py[DEBUG]: Setting grub debconf-set-selections with '/dev/sda','false' 2019-09-23 13:43:26,977 - util.py[DEBUG]: Running command ['debconf-set-selections'] with allowed return codes [0] (shell=False, capture=True) 2019-09-23 13:43:29,600 - handlers.py[DEBUG]: finish: modules-config/config-grub-dpkg: SUCCESS: config-grub-dpkg ran successfully 2019-09-23 13:43:29,600 - stages.py[DEBUG]: Running module apt-pipelining () with frequency once-per-instance 2019-09-23 13:43:29,601 - handlers.py[DEBUG]: start: modules-config/config-apt-pipelining: running config-apt-pipelining with 
frequency once-per-instance 2019-09-23 13:43:29,601 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_apt_pipelining - wb: [644] 24 bytes 2019-09-23 13:43:29,601 - helpers.py[DEBUG]: Running config-apt-pipelining using lock () 2019-09-23 13:43:29,601 - util.py[DEBUG]: Writing to /etc/apt/apt.conf.d/90cloud-init-pipelining - wb: [644] 80 bytes 2019-09-23 13:43:29,602 - cc_apt_pipelining.py[DEBUG]: Wrote /etc/apt/apt.conf.d/90cloud-init-pipelining with apt pipeline depth setting 0 2019-09-23 13:43:29,602 - handlers.py[DEBUG]: finish: modules-config/config-apt-pipelining: SUCCESS: config-apt-pipelining ran successfully 2019-09-23 13:43:29,602 - stages.py[DEBUG]: Running module apt-configure () with frequency once-per-instance 2019-09-23 13:43:29,602 - handlers.py[DEBUG]: start: modules-config/config-apt-configure: running config-apt-configure with frequency once-per-instance 2019-09-23 13:43:29,602 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_apt_configure - wb: [644] 24 bytes 2019-09-23 13:43:29,602 - helpers.py[DEBUG]: Running config-apt-configure using lock () 2019-09-23 13:43:29,603 - cc_apt_configure.py[DEBUG]: debconf_selections was not set in config 2019-09-23 13:43:29,603 - util.py[DEBUG]: Reading from /etc/os-release (quiet=True) 2019-09-23 13:43:29,603 - util.py[DEBUG]: Read 386 bytes from /etc/os-release 2019-09-23 13:43:29,603 - util.py[DEBUG]: Reading from /etc/system-image/channel.ini (quiet=True) 2019-09-23 13:43:29,603 - util.py[DEBUG]: Read 0 bytes from /etc/system-image/channel.ini 2019-09-23 13:43:29,603 - cc_apt_configure.py[DEBUG]: handling apt config: {} 2019-09-23 13:43:29,604 - util.py[DEBUG]: Running command ['lsb_release', '--all'] with allowed return codes [0] (shell=False, capture=True) 2019-09-23 13:43:29,651 - util.py[DEBUG]: Running command ['dpkg', '--print-architecture'] with allowed return codes [0] (shell=False, capture=True) 
2019-09-23 13:43:29,654 - cc_apt_configure.py[DEBUG]: got primary mirror: None 2019-09-23 13:43:29,655 - cc_apt_configure.py[DEBUG]: got security mirror: None 2019-09-23 13:43:29,655 - util.py[DEBUG]: Running command ['dpkg', '--print-architecture'] with allowed return codes [0] (shell=False, capture=True) 2019-09-23 13:43:29,658 - __init__.py[DEBUG]: filtered distro mirror info: {'primary': 'http://archive.ubuntu.com/ubuntu', 'security': 'http://security.ubuntu.com/ubuntu'} 2019-09-23 13:43:29,658 - cc_apt_configure.py[DEBUG]: Apt Mirror info: {'primary': 'http://archive.ubuntu.com/ubuntu', 'security': 'http://security.ubuntu.com/ubuntu', 'PRIMARY': 'http://archive.ubuntu.com/ubuntu', 'SECURITY': 'http://security.ubuntu.com/ubuntu', 'MIRROR': 'http://archive.ubuntu.com/ubuntu'} 2019-09-23 13:43:29,658 - cc_apt_configure.py[INFO]: No custom template provided, fall back to builtin 2019-09-23 13:43:29,708 - util.py[DEBUG]: Reading from /etc/cloud/templates/sources.list.ubuntu.tmpl (quiet=False) 2019-09-23 13:43:29,718 - util.py[DEBUG]: Read 2841 bytes from /etc/cloud/templates/sources.list.ubuntu.tmpl 2019-09-23 13:43:29,726 - util.py[DEBUG]: Writing to /etc/apt/sources.list - wb: [644] 3125 bytes 2019-09-23 13:43:29,727 - util.py[DEBUG]: Running command ['dpkg', '--print-architecture'] with allowed return codes [0] (shell=False, capture=True) 2019-09-23 13:43:29,729 - handlers.py[DEBUG]: finish: modules-config/config-apt-configure: SUCCESS: config-apt-configure ran successfully 2019-09-23 13:43:29,729 - stages.py[DEBUG]: Running module ubuntu-advantage () with frequency once-per-instance 2019-09-23 13:43:29,730 - handlers.py[DEBUG]: start: modules-config/config-ubuntu-advantage: running config-ubuntu-advantage with frequency once-per-instance 2019-09-23 13:43:29,730 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_ubuntu_advantage - wb: [644] 24 bytes 2019-09-23 13:43:29,730 - helpers.py[DEBUG]: Running 
config-ubuntu-advantage using lock () 2019-09-23 13:43:29,730 - cc_ubuntu_advantage.py[DEBUG]: Skipping module named ubuntu-advantage, no 'ubuntu-advantage' key in configuration 2019-09-23 13:43:29,731 - handlers.py[DEBUG]: finish: modules-config/config-ubuntu-advantage: SUCCESS: config-ubuntu-advantage ran successfully 2019-09-23 13:43:29,731 - stages.py[DEBUG]: Running module ntp () with frequency once-per-instance 2019-09-23 13:43:29,731 - handlers.py[DEBUG]: start: modules-config/config-ntp: running config-ntp with frequency once-per-instance 2019-09-23 13:43:29,731 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_ntp - wb: [644] 23 bytes 2019-09-23 13:43:29,731 - helpers.py[DEBUG]: Running config-ntp using lock () 2019-09-23 13:43:29,731 - cc_ntp.py[DEBUG]: Skipping module named ntp, not present or disabled by cfg 2019-09-23 13:43:29,731 - handlers.py[DEBUG]: finish: modules-config/config-ntp: SUCCESS: config-ntp ran successfully 2019-09-23 13:43:29,731 - stages.py[DEBUG]: Running module timezone () with frequency once-per-instance 2019-09-23 13:43:29,732 - handlers.py[DEBUG]: start: modules-config/config-timezone: running config-timezone with frequency once-per-instance 2019-09-23 13:43:29,732 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_timezone - wb: [644] 24 bytes 2019-09-23 13:43:29,732 - helpers.py[DEBUG]: Running config-timezone using lock () 2019-09-23 13:43:29,732 - cc_timezone.py[DEBUG]: Skipping module named timezone, no 'timezone' specified 2019-09-23 13:43:29,732 - handlers.py[DEBUG]: finish: modules-config/config-timezone: SUCCESS: config-timezone ran successfully 2019-09-23 13:43:29,732 - stages.py[DEBUG]: Running module disable-ec2-metadata () with frequency always 2019-09-23 13:43:29,733 - handlers.py[DEBUG]: start: modules-config/config-disable-ec2-metadata: running config-disable-ec2-metadata with frequency always 2019-09-23 13:43:29,733 - 
helpers.py[DEBUG]: Running config-disable-ec2-metadata using lock () 2019-09-23 13:43:29,733 - cc_disable_ec2_metadata.py[DEBUG]: Skipping module named disable-ec2-metadata, disabling the ec2 route not enabled 2019-09-23 13:43:29,733 - handlers.py[DEBUG]: finish: modules-config/config-disable-ec2-metadata: SUCCESS: config-disable-ec2-metadata ran successfully 2019-09-23 13:43:29,733 - stages.py[DEBUG]: Running module runcmd () with frequency once-per-instance 2019-09-23 13:43:29,733 - handlers.py[DEBUG]: start: modules-config/config-runcmd: running config-runcmd with frequency once-per-instance 2019-09-23 13:43:29,733 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_runcmd - wb: [644] 24 bytes 2019-09-23 13:43:29,734 - helpers.py[DEBUG]: Running config-runcmd using lock () 2019-09-23 13:43:29,738 - util.py[DEBUG]: Shellified 8 commands. 2019-09-23 13:43:29,739 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/scripts/runcmd - wb: [700] 541 bytes 2019-09-23 13:43:29,739 - handlers.py[DEBUG]: finish: modules-config/config-runcmd: SUCCESS: config-runcmd ran successfully 2019-09-23 13:43:29,739 - stages.py[DEBUG]: Running module byobu () with frequency once-per-instance 2019-09-23 13:43:29,739 - handlers.py[DEBUG]: start: modules-config/config-byobu: running config-byobu with frequency once-per-instance 2019-09-23 13:43:29,739 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_byobu - wb: [644] 24 bytes 2019-09-23 13:43:29,740 - helpers.py[DEBUG]: Running config-byobu using lock () 2019-09-23 13:43:29,740 - cc_byobu.py[DEBUG]: Skipping module named byobu, no 'byobu' values found 2019-09-23 13:43:29,740 - handlers.py[DEBUG]: finish: modules-config/config-byobu: SUCCESS: config-byobu ran successfully 2019-09-23 13:43:29,740 - main.py[DEBUG]: Ran 15 modules with 0 failures 2019-09-23 13:43:29,741 - util.py[DEBUG]: Reading from 
/proc/uptime (quiet=False) 2019-09-23 13:43:29,741 - util.py[DEBUG]: Read 12 bytes from /proc/uptime 2019-09-23 13:43:29,741 - util.py[DEBUG]: cloud-init mode 'modules' took 3.206 seconds (3.21) 2019-09-23 13:43:29,741 - handlers.py[DEBUG]: finish: modules-config: SUCCESS: running modules for config 2019-09-23 13:43:30,157 - util.py[DEBUG]: Cloud-init v. 18.3-52-gc5f78957-1~bddeb~18.04.1 running 'modules:final' at Mon, 23 Sep 2019 13:43:30 +0000. Up 20.06 seconds. 2019-09-23 13:43:30,176 - stages.py[DEBUG]: Using distro class 2019-09-23 13:43:30,177 - stages.py[DEBUG]: Running module snappy () with frequency once-per-instance 2019-09-23 13:43:30,177 - handlers.py[DEBUG]: start: modules-final/config-snappy: running config-snappy with frequency once-per-instance 2019-09-23 13:43:30,177 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_snappy - wb: [644] 25 bytes 2019-09-23 13:43:30,177 - helpers.py[DEBUG]: Running config-snappy using lock () 2019-09-23 13:43:30,178 - util.py[DEBUG]: Reading from /etc/os-release (quiet=True) 2019-09-23 13:43:30,178 - util.py[DEBUG]: Read 386 bytes from /etc/os-release 2019-09-23 13:43:30,178 - util.py[DEBUG]: Reading from /etc/system-image/channel.ini (quiet=True) 2019-09-23 13:43:30,178 - util.py[DEBUG]: Read 0 bytes from /etc/system-image/channel.ini 2019-09-23 13:43:30,178 - cc_snappy.py[DEBUG]: snappy: 'auto' mode, and system not snappy 2019-09-23 13:43:30,178 - handlers.py[DEBUG]: finish: modules-final/config-snappy: SUCCESS: config-snappy ran successfully 2019-09-23 13:43:30,178 - stages.py[DEBUG]: Running module package-update-upgrade-install () with frequency once-per-instance 2019-09-23 13:43:30,179 - handlers.py[DEBUG]: start: modules-final/config-package-update-upgrade-install: running config-package-update-upgrade-install with frequency once-per-instance 2019-09-23 13:43:30,179 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_package_update_upgrade_install - wb: [644] 25 bytes 2019-09-23 13:43:30,179 - helpers.py[DEBUG]: Running config-package-update-upgrade-install using lock () 2019-09-23 13:43:30,179 - handlers.py[DEBUG]: finish: modules-final/config-package-update-upgrade-install: SUCCESS: config-package-update-upgrade-install ran successfully 2019-09-23 13:43:30,179 - stages.py[DEBUG]: Running module fan () with frequency once-per-instance 2019-09-23 13:43:30,180 - handlers.py[DEBUG]: start: modules-final/config-fan: running config-fan with frequency once-per-instance 2019-09-23 13:43:30,180 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_fan - wb: [644] 24 bytes 2019-09-23 13:43:30,180 - helpers.py[DEBUG]: Running config-fan using lock () 2019-09-23 13:43:30,180 - cc_fan.py[DEBUG]: fan: no 'fan' config entry. disabling 2019-09-23 13:43:30,180 - handlers.py[DEBUG]: finish: modules-final/config-fan: SUCCESS: config-fan ran successfully 2019-09-23 13:43:30,180 - stages.py[DEBUG]: Running module landscape () with frequency once-per-instance 2019-09-23 13:43:30,180 - handlers.py[DEBUG]: start: modules-final/config-landscape: running config-landscape with frequency once-per-instance 2019-09-23 13:43:30,181 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_landscape - wb: [644] 25 bytes 2019-09-23 13:43:30,181 - helpers.py[DEBUG]: Running config-landscape using lock () 2019-09-23 13:43:30,181 - handlers.py[DEBUG]: finish: modules-final/config-landscape: SUCCESS: config-landscape ran successfully 2019-09-23 13:43:30,181 - stages.py[DEBUG]: Running module lxd () with frequency once-per-instance 2019-09-23 13:43:30,181 - handlers.py[DEBUG]: start: modules-final/config-lxd: running config-lxd with frequency once-per-instance 2019-09-23 13:43:30,181 - util.py[DEBUG]: Writing to 
/var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_lxd - wb: [644] 24 bytes 2019-09-23 13:43:30,182 - helpers.py[DEBUG]: Running config-lxd using lock () 2019-09-23 13:43:30,182 - cc_lxd.py[DEBUG]: Skipping module named lxd, not present or disabled by cfg 2019-09-23 13:43:30,182 - handlers.py[DEBUG]: finish: modules-final/config-lxd: SUCCESS: config-lxd ran successfully 2019-09-23 13:43:30,182 - stages.py[DEBUG]: Running module puppet () with frequency once-per-instance 2019-09-23 13:43:30,182 - handlers.py[DEBUG]: start: modules-final/config-puppet: running config-puppet with frequency once-per-instance 2019-09-23 13:43:30,182 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_puppet - wb: [644] 24 bytes 2019-09-23 13:43:30,183 - helpers.py[DEBUG]: Running config-puppet using lock () 2019-09-23 13:43:30,183 - cc_puppet.py[DEBUG]: Skipping module named puppet, no 'puppet' configuration found 2019-09-23 13:43:30,183 - handlers.py[DEBUG]: finish: modules-final/config-puppet: SUCCESS: config-puppet ran successfully 2019-09-23 13:43:30,183 - stages.py[DEBUG]: Running module chef () with frequency once-per-instance 2019-09-23 13:43:30,183 - handlers.py[DEBUG]: start: modules-final/config-chef: running config-chef with frequency once-per-instance 2019-09-23 13:43:30,183 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_chef - wb: [644] 25 bytes 2019-09-23 13:43:30,184 - helpers.py[DEBUG]: Running config-chef using lock () 2019-09-23 13:43:30,184 - cc_chef.py[DEBUG]: Skipping module named chef, no 'chef' key in configuration 2019-09-23 13:43:30,184 - handlers.py[DEBUG]: finish: modules-final/config-chef: SUCCESS: config-chef ran successfully 2019-09-23 13:43:30,184 - stages.py[DEBUG]: Running module mcollective () with frequency once-per-instance 2019-09-23 13:43:30,184 - handlers.py[DEBUG]: start: modules-final/config-mcollective: running config-mcollective 
with frequency once-per-instance 2019-09-23 13:43:30,184 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_mcollective - wb: [644] 25 bytes 2019-09-23 13:43:30,185 - helpers.py[DEBUG]: Running config-mcollective using lock () 2019-09-23 13:43:30,185 - cc_mcollective.py[DEBUG]: Skipping module named mcollective, no 'mcollective' key in configuration 2019-09-23 13:43:30,185 - handlers.py[DEBUG]: finish: modules-final/config-mcollective: SUCCESS: config-mcollective ran successfully 2019-09-23 13:43:30,185 - stages.py[DEBUG]: Running module salt-minion () with frequency once-per-instance 2019-09-23 13:43:30,185 - handlers.py[DEBUG]: start: modules-final/config-salt-minion: running config-salt-minion with frequency once-per-instance 2019-09-23 13:43:30,185 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_salt_minion - wb: [644] 24 bytes 2019-09-23 13:43:30,185 - helpers.py[DEBUG]: Running config-salt-minion using lock () 2019-09-23 13:43:30,185 - cc_salt_minion.py[DEBUG]: Skipping module named salt-minion, no 'salt_minion' key in configuration 2019-09-23 13:43:30,186 - handlers.py[DEBUG]: finish: modules-final/config-salt-minion: SUCCESS: config-salt-minion ran successfully 2019-09-23 13:43:30,186 - stages.py[DEBUG]: Running module rightscale_userdata () with frequency once-per-instance 2019-09-23 13:43:30,186 - handlers.py[DEBUG]: start: modules-final/config-rightscale_userdata: running config-rightscale_userdata with frequency once-per-instance 2019-09-23 13:43:30,186 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_rightscale_userdata - wb: [644] 25 bytes 2019-09-23 13:43:30,186 - helpers.py[DEBUG]: Running config-rightscale_userdata using lock () 2019-09-23 13:43:30,186 - cc_rightscale_userdata.py[DEBUG]: Failed to get raw userdata in module rightscale_userdata 2019-09-23 13:43:30,186 - handlers.py[DEBUG]: finish: 
modules-final/config-rightscale_userdata: SUCCESS: config-rightscale_userdata ran successfully 2019-09-23 13:43:30,187 - stages.py[DEBUG]: Running module scripts-vendor () with frequency once-per-instance 2019-09-23 13:43:30,187 - handlers.py[DEBUG]: start: modules-final/config-scripts-vendor: running config-scripts-vendor with frequency once-per-instance 2019-09-23 13:43:30,187 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_scripts_vendor - wb: [644] 25 bytes 2019-09-23 13:43:30,187 - helpers.py[DEBUG]: Running config-scripts-vendor using lock () 2019-09-23 13:43:30,187 - handlers.py[DEBUG]: finish: modules-final/config-scripts-vendor: SUCCESS: config-scripts-vendor ran successfully 2019-09-23 13:43:30,187 - stages.py[DEBUG]: Running module scripts-per-once () with frequency once 2019-09-23 13:43:30,188 - handlers.py[DEBUG]: start: modules-final/config-scripts-per-once: running config-scripts-per-once with frequency once 2019-09-23 13:43:30,188 - util.py[DEBUG]: Writing to /var/lib/cloud/sem/config_scripts_per_once.once - wb: [644] 25 bytes 2019-09-23 13:43:30,188 - helpers.py[DEBUG]: Running config-scripts-per-once using lock () 2019-09-23 13:43:30,188 - handlers.py[DEBUG]: finish: modules-final/config-scripts-per-once: SUCCESS: config-scripts-per-once ran successfully 2019-09-23 13:43:30,188 - stages.py[DEBUG]: Running module scripts-per-boot () with frequency always 2019-09-23 13:43:30,189 - handlers.py[DEBUG]: start: modules-final/config-scripts-per-boot: running config-scripts-per-boot with frequency always 2019-09-23 13:43:30,189 - helpers.py[DEBUG]: Running config-scripts-per-boot using lock () 2019-09-23 13:43:30,189 - handlers.py[DEBUG]: finish: modules-final/config-scripts-per-boot: SUCCESS: config-scripts-per-boot ran successfully 2019-09-23 13:43:30,189 - stages.py[DEBUG]: Running module scripts-per-instance () with frequency once-per-instance 2019-09-23 13:43:30,189 - handlers.py[DEBUG]: start: 
modules-final/config-scripts-per-instance: running config-scripts-per-instance with frequency once-per-instance 2019-09-23 13:43:30,189 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_scripts_per_instance - wb: [644] 25 bytes 2019-09-23 13:43:30,189 - helpers.py[DEBUG]: Running config-scripts-per-instance using lock () 2019-09-23 13:43:30,190 - handlers.py[DEBUG]: finish: modules-final/config-scripts-per-instance: SUCCESS: config-scripts-per-instance ran successfully 2019-09-23 13:43:30,190 - stages.py[DEBUG]: Running module scripts-user () with frequency once-per-instance 2019-09-23 13:43:30,190 - handlers.py[DEBUG]: start: modules-final/config-scripts-user: running config-scripts-user with frequency once-per-instance 2019-09-23 13:43:30,190 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_scripts_user - wb: [644] 25 bytes 2019-09-23 13:43:30,190 - helpers.py[DEBUG]: Running config-scripts-user using lock () 2019-09-23 13:43:30,190 - util.py[DEBUG]: Running command ['/var/lib/cloud/instance/scripts/runcmd'] with allowed return codes [0] (shell=False, capture=False) 2019-09-23 13:44:17,682 - handlers.py[DEBUG]: finish: modules-final/config-scripts-user: SUCCESS: config-scripts-user ran successfully 2019-09-23 13:44:17,683 - stages.py[DEBUG]: Running module kubeadm () with frequency once-per-instance 2019-09-23 13:44:17,683 - handlers.py[DEBUG]: start: modules-final/config-kubeadm: running config-kubeadm with frequency once-per-instance 2019-09-23 13:44:17,683 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_kubeadm - wb: [644] 24 bytes 2019-09-23 13:44:17,684 - helpers.py[DEBUG]: Running config-kubeadm using lock () 2019-09-23 13:44:17,684 - cc_kubeadm.py[DEBUG]: Skipping 'kubeadm' module, no config 2019-09-23 13:44:17,684 - handlers.py[DEBUG]: finish: modules-final/config-kubeadm: SUCCESS: config-kubeadm ran 
successfully 2019-09-23 13:44:17,684 - stages.py[DEBUG]: Running module ssh-authkey-fingerprints () with frequency once-per-instance 2019-09-23 13:44:17,684 - handlers.py[DEBUG]: start: modules-final/config-ssh-authkey-fingerprints: running config-ssh-authkey-fingerprints with frequency once-per-instance 2019-09-23 13:44:17,684 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_ssh_authkey_fingerprints - wb: [644] 25 bytes 2019-09-23 13:44:17,685 - helpers.py[DEBUG]: Running config-ssh-authkey-fingerprints using lock () 2019-09-23 13:44:17,685 - util.py[DEBUG]: Reading from /etc/ssh/sshd_config (quiet=False) 2019-09-23 13:44:17,685 - util.py[DEBUG]: Read 3264 bytes from /etc/ssh/sshd_config 2019-09-23 13:44:17,686 - util.py[DEBUG]: Reading from /home/capv/.ssh/authorized_keys (quiet=False) 2019-09-23 13:44:17,686 - util.py[DEBUG]: Read 463 bytes from /home/capv/.ssh/authorized_keys 2019-09-23 13:44:17,687 - util.py[DEBUG]: Reading from /etc/ssh/sshd_config (quiet=False) 2019-09-23 13:44:17,688 - util.py[DEBUG]: Read 3264 bytes from /etc/ssh/sshd_config 2019-09-23 13:44:17,688 - util.py[DEBUG]: Reading from /home/maxr/.ssh/authorized_keys (quiet=False) 2019-09-23 13:44:17,688 - util.py[DEBUG]: Read 392 bytes from /home/maxr/.ssh/authorized_keys 2019-09-23 13:44:17,689 - handlers.py[DEBUG]: finish: modules-final/config-ssh-authkey-fingerprints: SUCCESS: config-ssh-authkey-fingerprints ran successfully 2019-09-23 13:44:17,689 - stages.py[DEBUG]: Running module keys-to-console () with frequency once-per-instance 2019-09-23 13:44:17,689 - handlers.py[DEBUG]: start: modules-final/config-keys-to-console: running config-keys-to-console with frequency once-per-instance 2019-09-23 13:44:17,689 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_keys_to_console - wb: [644] 23 bytes 2019-09-23 13:44:17,690 - helpers.py[DEBUG]: Running config-keys-to-console using lock () 2019-09-23 
13:44:17,690 - util.py[DEBUG]: Running command ['/usr/lib/cloud-init/write-ssh-key-fingerprints', '', 'ssh-dss'] with allowed return codes [0] (shell=False, capture=True) 2019-09-23 13:44:17,722 - handlers.py[DEBUG]: finish: modules-final/config-keys-to-console: SUCCESS: config-keys-to-console ran successfully 2019-09-23 13:44:17,723 - stages.py[DEBUG]: Running module phone-home () with frequency once-per-instance 2019-09-23 13:44:17,723 - handlers.py[DEBUG]: start: modules-final/config-phone-home: running config-phone-home with frequency once-per-instance 2019-09-23 13:44:17,723 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_phone_home - wb: [644] 22 bytes 2019-09-23 13:44:17,724 - helpers.py[DEBUG]: Running config-phone-home using lock () 2019-09-23 13:44:17,724 - cc_phone_home.py[DEBUG]: Skipping module named phone-home, no 'phone_home' configuration found 2019-09-23 13:44:17,724 - handlers.py[DEBUG]: finish: modules-final/config-phone-home: SUCCESS: config-phone-home ran successfully 2019-09-23 13:44:17,724 - stages.py[DEBUG]: Running module final-message () with frequency always 2019-09-23 13:44:17,724 - handlers.py[DEBUG]: start: modules-final/config-final-message: running config-final-message with frequency always 2019-09-23 13:44:17,724 - helpers.py[DEBUG]: Running config-final-message using lock () 2019-09-23 13:44:17,724 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False) 2019-09-23 13:44:17,724 - util.py[DEBUG]: Read 13 bytes from /proc/uptime 2019-09-23 13:44:17,728 - util.py[DEBUG]: Cloud-init v. 18.3-52-gc5f78957-1~bddeb~18.04.1 finished at Mon, 23 Sep 2019 13:44:17 +0000. Datasource DataSourceVMwareGuestInfo. 
Up 67.73 seconds 2019-09-23 13:44:17,729 - util.py[DEBUG]: Writing to /var/lib/cloud/instance/boot-finished - wb: [644] 79 bytes 2019-09-23 13:44:17,731 - handlers.py[DEBUG]: finish: modules-final/config-final-message: SUCCESS: config-final-message ran successfully 2019-09-23 13:44:17,731 - stages.py[DEBUG]: Running module power-state-change () with frequency once-per-instance 2019-09-23 13:44:17,732 - handlers.py[DEBUG]: start: modules-final/config-power-state-change: running config-power-state-change with frequency once-per-instance 2019-09-23 13:44:17,732 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/management-cluster-controlplane-0/sem/config_power_state_change - wb: [644] 25 bytes 2019-09-23 13:44:17,732 - helpers.py[DEBUG]: Running config-power-state-change using lock () 2019-09-23 13:44:17,732 - cc_power_state_change.py[DEBUG]: no power_state provided. doing nothing 2019-09-23 13:44:17,732 - handlers.py[DEBUG]: finish: modules-final/config-power-state-change: SUCCESS: config-power-state-change ran successfully 2019-09-23 13:44:17,732 - main.py[DEBUG]: Ran 21 modules with 0 failures 2019-09-23 13:44:17,741 - util.py[DEBUG]: Creating symbolic link from '/run/cloud-init/result.json' => '../../var/lib/cloud/data/result.json' 2019-09-23 13:44:17,741 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False) 2019-09-23 13:44:17,741 - util.py[DEBUG]: Read 13 bytes from /proc/uptime 2019-09-23 13:44:17,741 - util.py[DEBUG]: cloud-init mode 'modules' took 47.694 seconds (47.69) 2019-09-23 13:44:17,741 - handlers.py[DEBUG]: finish: modules-final: SUCCESS: running modules for final ``` ```text cat /var/log/cloud-init-output.log Cloud-init v. 18.3-52-gc5f78957-1~bddeb~18.04.1 running 'init-local' at Mon, 23 Sep 2019 13:43:17 +0000. Up 7.43 seconds. Cloud-init v. 18.3-52-gc5f78957-1~bddeb~18.04.1 running 'init' at Mon, 23 Sep 2019 13:43:23 +0000. Up 13.81 seconds. 
ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++ ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+ ci-info: | Device | Up | Address | Mask | Scope | Hw-Address | ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+ ci-info: | ens192 | True | 10.92.159.240 | 255.255.252.0 | global | 00:50:56:97:70:bb | ci-info: | ens192 | True | fe80::250:56ff:fe97:70bb/64 | . | link | 00:50:56:97:70:bb | ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . | ci-info: | lo | True | ::1/128 | . | host | . | ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+ ci-info: +++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++ ci-info: +-------+-------------+-------------+---------------+-----------+-------+ ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags | ci-info: +-------+-------------+-------------+---------------+-----------+-------+ ci-info: | 0 | 0.0.0.0 | 10.92.156.1 | 0.0.0.0 | ens192 | UG | ci-info: | 1 | 10.92.156.0 | 0.0.0.0 | 255.255.252.0 | ens192 | U | ci-info: +-------+-------------+-------------+---------------+-----------+-------+ ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++ ci-info: +-------+-------------+---------+-----------+-------+ ci-info: | Route | Destination | Gateway | Interface | Flags | ci-info: +-------+-------------+---------+-----------+-------+ ci-info: | 1 | fe80::/64 | :: | ens192 | U | ci-info: | 3 | local | :: | ens192 | U | ci-info: | 4 | ff00::/8 | :: | ens192 | U | ci-info: +-------+-------------+---------+-----------+-------+ Generating public/private rsa key pair. Your identification has been saved in /etc/ssh/ssh_host_rsa_key. Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub. 
The key fingerprint is: SHA256:0BnOzY6Hbb8penhgiLZEcN6XUS+7vpHDy29XIhDkVJs root@management-cluster-controlplane-0 The key's randomart image is: +---[RSA 2048]----+ | .o+.. | | . . +.B.. o | | + .. =o=.E | | o ..o=.o | | . . oS *. | | + . o+ +. . .| | o . . o* .. o | | . .o++.o. | | .+==+. | +----[SHA256]-----+ Generating public/private dsa key pair. Your identification has been saved in /etc/ssh/ssh_host_dsa_key. Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub. The key fingerprint is: SHA256:7MIBgz89HTuie1nRBDHfCc7eqMDPrE2FfXY3o9Xx3Rs root@management-cluster-controlplane-0 The key's randomart image is: +---[DSA 1024]----+ | +o. | | . =.o . | | . o .o+ o . | | . = o.*.o *| | o B S.= + .E*| | + X.+ o .o.=| | . ooB . . | | .o= | | ... . | +----[SHA256]-----+ Generating public/private ecdsa key pair. Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key. Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub. The key fingerprint is: SHA256:94J75yU9iIUEbzvqiqFkxIqEK2YLzvPUAxd/xlBXQi0 root@management-cluster-controlplane-0 The key's randomart image is: +---[ECDSA 256]---+ | ...+o. | | .o.E.. | | . . + . | |.. o oo o | |..o. . .S+= . | |o+ + o+ = o | |*oo..o o o + + | |B+o. o.. ....o . | | ++.. ..o. o. | +----[SHA256]-----+ Generating public/private ed25519 key pair. Your identification has been saved in /etc/ssh/ssh_host_ed25519_key. Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub. The key fingerprint is: SHA256:66kZqO9MiVE1a+uNIErxc3h9OWIXe66x86olcxVi89E root@management-cluster-controlplane-0 The key's randomart image is: +--[ED25519 256]--+ | o | | . o . | | . . o = o E | | o...... B o | | .o+.o.+S* + | |.. ++=.o+.= | |. . + +oo+ . | | + +=o+ | | .o+ oo+++. | +----[SHA256]-----+ Cloud-init v. 18.3-52-gc5f78957-1~bddeb~18.04.1 running 'modules:config' at Mon, 23 Sep 2019 13:43:26 +0000. Up 16.54 seconds. 
[init] Using Kubernetes version: v1.15.3 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Activating the kubelet service [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Using existing front-proxy-ca certificate authority [certs] Generating "front-proxy-client" certificate and key [certs] Using existing etcd/ca certificate authority [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [management-cluster-controlplane-0 localhost] and IPs [10.92.159.240 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [management-cluster-controlplane-0 localhost] and IPs [10.92.159.240 127.0.0.1 ::1] [certs] Generating "apiserver-etcd-client" certificate and key [certs] Using existing ca certificate authority [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [management-cluster-controlplane-0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [100.64.0.1 10.92.159.240] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" 
kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [controlplane] Adding extra host path mount "cloud-config" to "kube-apiserver" [controlplane] Adding extra host path mount "cloud-config" to "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-controller-manager" [controlplane] Adding extra host path mount "cloud-config" to "kube-apiserver" [controlplane] Adding extra host path mount "cloud-config" to "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [controlplane] Adding extra host path mount "cloud-config" to "kube-apiserver" [controlplane] Adding extra host path mount "cloud-config" to "kube-controller-manager" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [apiclient] All control plane components are healthy after 41.504119 seconds [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster [upload-certs] Skipping phase. 
Please see --upload-certs [mark-control-plane] Marking the node management-cluster-controlplane-0 as control-plane by adding the label "node-role.kubernetes.io/master=''" [mark-control-plane] Marking the node management-cluster-controlplane-0 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] [bootstrap-token] Using token: lo6953.lgpkis72ycs5dbst [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ Then you can join any number of worker nodes by running the following on each as root: kubeadm join 10.92.159.240:6443 --token lo6953.lgpkis72ycs5dbst \ --discovery-token-ca-cert-hash sha256:19802b639d036c58174549dd4d7f23b97293c9e7622fa97b25e20d53e1f9233f Cloud-init v. 18.3-52-gc5f78957-1~bddeb~18.04.1 running 'modules:final' at Mon, 23 Sep 2019 13:43:30 +0000. Up 20.06 seconds. Cloud-init v. 18.3-52-gc5f78957-1~bddeb~18.04.1 finished at Mon, 23 Sep 2019 13:44:17 +0000. 
Datasource DataSourceVMwareGuestInfo. Up 67.73 seconds ``` I've also extracted all logfiles and k8s config on the node and attached it [debugdata.zip](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/files/3642848/debugdata.zip)

Anything else you would like to add:

Environment:

andrewsykim commented 5 years ago

/assign

andrewsykim commented 5 years ago

@MaxRink so clusterctl assumes it can reach the provisioned clusters remotely via the apiserver port (6443). Maybe something in your setup is locking down access to that port?
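A quick way to check this is to probe the port directly from the machine running clusterctl/kind. The sketch below is illustrative only: the IP is the control-plane address from the logs above, and it uses bash's `/dev/tcp` redirection so it works even where `nc` is not installed. A plain TCP connect failure points at a firewall or proxy in the path rather than at the apiserver itself.

```shell
#!/usr/bin/env bash
APISERVER="10.92.159.240"   # control-plane IP from the log above; substitute your own
PORT=6443

# Attempt a raw TCP connect with a 3-second timeout. This only tests
# reachability of the port, not TLS or authentication.
if timeout 3 bash -c "exec 3<>/dev/tcp/${APISERVER}/${PORT}" 2>/dev/null; then
  RESULT="reachable"
else
  RESULT="unreachable"
fi
echo "tcp ${APISERVER}:${PORT}: ${RESULT}"
```

If the port is reachable from the host but not from inside the kind container, the problem is in the container's network or proxy configuration rather than the node.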

MaxRink commented 5 years ago

This is what I initially assumed, too. But if I shut down the kube-apiserver (which listens on port 6443), I can netcat between my bootstrapping server (where kind is running) and the created node, so port 6443 itself is not restricted. I'm running on an air-gapped infrastructure with an http_proxy as the only way out, so maybe I misconfigured something there.

Edit: this was my mistake; I had botched the no_proxy hosts passed to the kind container.
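For anyone hitting the same thing behind a corporate proxy: the no_proxy list has to cover the node network and cluster-internal addresses, or kind/clusterctl traffic to the workload cluster gets sent through the proxy. A minimal sketch, where the node CIDR is taken from the route table in the logs above and the service CIDR is an assumed example value:

```shell
#!/usr/bin/env bash
# Hosts and CIDRs that must bypass the proxy. These values are illustrative:
# 10.92.156.0/22 is the node network from the logs above; 100.64.0.0/13 is an
# assumed service CIDR -- substitute the CIDRs from your own cluster config.
NO_PROXY="localhost,127.0.0.1,10.92.156.0/22,100.64.0.0/13,.svc,.cluster.local"

# Export both spellings; tools are inconsistent about which one they read.
export no_proxy="$NO_PROXY"
export NO_PROXY="$NO_PROXY"
echo "no_proxy=$no_proxy"
```

These variables need to be set in the environment that launches kind, so they propagate into the bootstrap container.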

andrewsykim commented 5 years ago

Thanks @MaxRink!

/close

k8s-ci-robot commented 5 years ago

@andrewsykim: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/issues/584#issuecomment-537083358):

> Thanks @MaxRink!
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.