coreos / tectonic-forum

Tectonic Console issues #91

Closed: xshro closed this issue 7 years ago

xshro commented 7 years ago

Issue Report Template

Tectonic Version

tectonic-1.5.5-tectonic.2

Environment

bare-metal / VMware

Expected Behavior

Tectonic Console comes online and the installer gets past "Awaiting Tectonic Console".

Actual Behavior

tectonic-console won't start; all of the other components/containers are running.
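A quick way to confirm which pieces are up is to query the API server directly. This is only a sketch: it assumes kubectl is run from wherever the installer's generated kubeconfig lives, and that the console is deployed as tectonic-console in the tectonic-system namespace (the defaults for a Tectonic install).

  # list every pod on the cluster and its state (Pending / CrashLoopBackOff / Running)
  kubectl --kubeconfig=./kubeconfig get pods --all-namespaces -o wide
  # inspect the console deployment for scheduling, image-pull, or secret problems
  # (deployment name and namespace are assumed from a default install)
  kubectl --kubeconfig=./kubeconfig --namespace=tectonic-system describe deployment tectonic-console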

Reproduction Steps

  1. Run ssh core@bv-vm-00 'sudo systemctl start bootkube'; Tectonic Console won't come online.

  2. Debug with ssh core@bv-vm-00 'journalctl -u bootkube -fex' (additional checks sketched below).
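If the bootkube journal alone is not conclusive, the kubelet journal and the unit state are worth a look as well. A small sketch, assuming the kubelet runs as the kubelet.service systemd unit on the controller node:

  # follow the kubelet journal for errors pulling images or starting the console containers
  ssh core@bv-vm-00 'journalctl -u kubelet -fex'
  # confirm whether the bootkube unit is still active or has already exited
  ssh core@bv-vm-00 'systemctl status bootkube --no-pager'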

The bootkube logs look fine:

Mar 27 15:34:10 bv-vm-00 bootkube-start[3617]: pubkey: prefix: "quay.io/coreos/bootkube" Mar 27 15:34:10 bv-vm-00 bootkube-start[3617]: key: "https://quay.io/aci-signing-key" Mar 27 15:34:10 bv-vm-00 bootkube-start[3617]: gpg key fingerprint is: BFF3 13CD AA56 0B16 A898 7B8F 72AB F5F6 799D 33BC Mar 27 15:34:10 bv-vm-00 bootkube-start[3617]: Quay.io ACI Converter (ACI conversion signing key) <support@quay.io> Mar 27 15:34:10 bv-vm-00 bootkube-start[3617]: Trusting "https://quay.io/aci-signing-key" for prefix "quay.io/coreos/bootkube" without fingerprint review. Mar 27 15:34:10 bv-vm-00 bootkube-start[3617]: Added key for prefix "quay.io/coreos/bootkube" at "/etc/rkt/trustedkeys/prefix.d/quay.io/coreos/bootkube/bff313cdaa560b16a8987b8f72abf5f6799d33bc" Mar 27 15:34:10 bv-vm-00 bootkube-start[3617]: Downloading signature: 0 B/473 B Mar 27 15:34:10 bv-vm-00 bootkube-start[3617]: Downloading signature: 473 B/473 B Mar 27 15:34:10 bv-vm-00 bootkube-start[3617]: Downloading signature: 473 B/473 B Mar 27 15:34:11 bv-vm-00 bootkube-start[3617]: Downloading ACI: 0 B/32.8 MB Mar 27 15:34:11 bv-vm-00 bootkube-start[3617]: Downloading ACI: 16.3 KB/32.8 MB Mar 27 15:34:12 bv-vm-00 bootkube-start[3617]: Downloading ACI: 1.25 MB/32.8 MB Mar 27 15:34:14 bv-vm-00 bootkube-start[3617]: Downloading ACI: 3.62 MB/32.8 MB Mar 27 15:34:15 bv-vm-00 bootkube-start[3617]: Downloading ACI: 5.27 MB/32.8 MB Mar 27 15:34:16 bv-vm-00 bootkube-start[3617]: Downloading ACI: 7.29 MB/32.8 MB Mar 27 15:34:17 bv-vm-00 bootkube-start[3617]: Downloading ACI: 9.19 MB/32.8 MB Mar 27 15:34:18 bv-vm-00 bootkube-start[3617]: Downloading ACI: 10.9 MB/32.8 MB Mar 27 15:34:19 bv-vm-00 bootkube-start[3617]: Downloading ACI: 12.7 MB/32.8 MB Mar 27 15:34:20 bv-vm-00 bootkube-start[3617]: Downloading ACI: 14.6 MB/32.8 MB Mar 27 15:34:21 bv-vm-00 bootkube-start[3617]: Downloading ACI: 16.3 MB/32.8 MB Mar 27 15:34:22 bv-vm-00 bootkube-start[3617]: Downloading ACI: 18.1 MB/32.8 MB Mar 27 15:34:23 bv-vm-00 bootkube-start[3617]: Downloading ACI: 19.9 MB/32.8 MB Mar 27 15:34:24 bv-vm-00 bootkube-start[3617]: Downloading ACI: 21.7 MB/32.8 MB Mar 27 15:34:25 bv-vm-00 bootkube-start[3617]: Downloading ACI: 23.5 MB/32.8 MB Mar 27 15:34:26 bv-vm-00 bootkube-start[3617]: Downloading ACI: 25.4 MB/32.8 MB Mar 27 15:34:27 bv-vm-00 bootkube-start[3617]: Downloading ACI: 27.3 MB/32.8 MB Mar 27 15:34:28 bv-vm-00 bootkube-start[3617]: Downloading ACI: 30.5 MB/32.8 MB Mar 27 15:34:28 bv-vm-00 bootkube-start[3617]: Downloading ACI: 32.8 MB/32.8 MB Mar 27 15:34:29 bv-vm-00 bootkube-start[3617]: image: signature verified: Mar 27 15:34:29 bv-vm-00 bootkube-start[3617]: Quay.io ACI Converter (ACI conversion signing key) <support@quay.io> Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 340.980515] bootkube[5]: Running temporary bootstrap control plane... Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 340.981348] bootkube[5]: E0327 15:34:34.484214 5 server.go:78] unable to register configz: register config "componentconfig" twice Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 340.981583] bootkube[5]: Waiting for api-server... 
Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 340.982830] bootkube[5]: E0327 15:34:34.485717 5 leaderelection.go:228] error retrieving resource lock kube-system/kube-controller-manager: Get http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 127.0.0.1:8080: getsockopt: connection refused Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 340.983451] bootkube[5]: I0327 15:34:34.486368 5 config.go:527] Will report 172.17.0.62 as public IP address. Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 340.984522] bootkube[5]: E0327 15:34:34.487435 5 reflector.go:199] github.com/kubernetes-incubator/bootkube/pkg/bootkube/status.go:79: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app: dial tcp 127.0.0.1:8080: getsockopt: connection refused Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 340.984985] bootkube[5]: E0327 15:34:34.487900 5 reflector.go:199] github.com/kubernetes-incubator/bootkube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:457: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 340.985191] bootkube[5]: E0327 15:34:34.487990 5 leaderelection.go:228] error retrieving resource lock kube-system/kube-scheduler: Get http://127.0.0.1:8080/api/v1/namespaces/kube-system/endpoints/kube-scheduler: dial tcp 127.0.0.1:8080: getsockopt: connection refused Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 340.985407] bootkube[5]: E0327 15:34:34.488097 5 reflector.go:199] github.com/kubernetes-incubator/bootkube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:460: Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%21%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 340.985590] bootkube[5]: E0327 15:34:34.488440 5 reflector.go:199] github.com/kubernetes-incubator/bootkube/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:103: Failed to list *api.ServiceAccount: Get http://127.0.0.1:8080/api/v1/serviceaccounts?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 340.985773] bootkube[5]: E0327 15:34:34.488443 5 reflector.go:199] github.com/kubernetes-incubator/bootkube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:473: Failed to list *api.ReplicationController: Get http://127.0.0.1:8080/api/v1/replicationcontrollers?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 340.985946] bootkube[5]: E0327 15:34:34.488515 5 reflector.go:199] github.com/kubernetes-incubator/bootkube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481: Failed to list *extensions.ReplicaSet: Get http://127.0.0.1:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 340.986135] bootkube[5]: E0327 15:34:34.488548 5 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.PersistentVolumeClaim: Get http://127.0.0.1:8080/api/v1/persistentvolumeclaims?resourceVersion=0: dial tcp 
127.0.0.1:8080: getsockopt: connection refused Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 340.986354] bootkube[5]: E0327 15:34:34.488709 5 reflector.go:199] github.com/kubernetes-incubator/bootkube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:463: Failed to list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 340.986583] bootkube[5]: E0327 15:34:34.488782 5 reflector.go:199] github.com/kubernetes-incubator/bootkube/vendor/k8s.io/kubernetes/plugin/pkg/admission/serviceaccount/admission.go:119: Failed to list *api.Secret: Get http://127.0.0.1:8080/api/v1/secrets?fieldSelector=type%3Dkubernetes.io%2Fservice-account-token&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 340.986783] bootkube[5]: E0327 15:34:34.488848 5 reflector.go:199] github.com/kubernetes-incubator/bootkube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466: Failed to list *api.PersistentVolume: Get http://127.0.0.1:8080/api/v1/persistentvolumes?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 340.986956] bootkube[5]: E0327 15:34:34.489022 5 reflector.go:199] github.com/kubernetes-incubator/bootkube/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:470: Failed to list *api.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.034663] bootkube[5]: E0327 15:34:34.537550 5 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.Namespace: Get http://127.0.0.1:8080/api/v1/namespaces?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.034956] bootkube[5]: E0327 15:34:34.537550 5 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *rbac.ClusterRoleBinding: Get http://127.0.0.1:8080/apis/rbac.authorization.k8s.io/v1alpha1/clusterrolebindings?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.035167] bootkube[5]: E0327 15:34:34.537619 5 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *rbac.RoleBinding: Get http://127.0.0.1:8080/apis/rbac.authorization.k8s.io/v1alpha1/rolebindings?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.035348] bootkube[5]: E0327 15:34:34.537619 5 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *rbac.Role: Get http://127.0.0.1:8080/apis/rbac.authorization.k8s.io/v1alpha1/roles?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.035526] bootkube[5]: E0327 15:34:34.537877 5 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *rbac.ClusterRole: Get http://127.0.0.1:8080/apis/rbac.authorization.k8s.io/v1alpha1/clusterroles?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.069555] bootkube[5]: [restful] 2017/03/27 15:34:34 log.go:30: [restful/swagger] listing is available at https://172.17.0.62:443/swaggerapi/ Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.069954] 
bootkube[5]: [restful] 2017/03/27 15:34:34 log.go:30: [restful/swagger] https://172.17.0.62:443/swaggerui/ is mapped to folder /swagger-ui/ Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.131364] bootkube[5]: I0327 15:34:34.634193 5 serve.go:88] Serving securely on 0.0.0.0:443 Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.131761] bootkube[5]: I0327 15:34:34.634323 5 serve.go:102] Serving insecurely on 127.0.0.1:8080 Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.134849] bootkube[5]: I0327 15:34:34.637757 5 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/cluster-admin Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.136164] bootkube[5]: I0327 15:34:34.639081 5 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/system:discovery Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.137220] bootkube[5]: I0327 15:34:34.640144 5 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/system:basic-user Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.138684] bootkube[5]: I0327 15:34:34.641604 5 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/admin Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.140019] bootkube[5]: I0327 15:34:34.642938 5 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/edit Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.141404] bootkube[5]: I0327 15:34:34.644329 5 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/view Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.142726] bootkube[5]: I0327 15:34:34.645651 5 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/system:node Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.143867] bootkube[5]: I0327 15:34:34.646791 5 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/system:node-proxier Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.144991] bootkube[5]: I0327 15:34:34.647915 5 storage_rbac.go:131] Created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.147066] bootkube[5]: I0327 15:34:34.649991 5 storage_rbac.go:151] Created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.153408] bootkube[5]: I0327 15:34:34.656322 5 storage_rbac.go:151] Created clusterrolebinding.rbac.authorization.k8s.io/system:discovery Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.154559] bootkube[5]: I0327 15:34:34.657474 5 storage_rbac.go:151] Created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.155555] bootkube[5]: I0327 15:34:34.658467 5 storage_rbac.go:151] Created clusterrolebinding.rbac.authorization.k8s.io/system:node Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.156664] bootkube[5]: I0327 15:34:34.659576 5 storage_rbac.go:151] Created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier Mar 27 15:34:34 bv-vm-00 bootkube-start[3617]: [ 341.157722] bootkube[5]: I0327 15:34:34.660636 5 storage_rbac.go:151] Created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller Mar 27 15:34:35 bv-vm-00 bootkube-start[3617]: [ 342.044280] bootkube[5]: I0327 15:34:35.547183 5 trace.go:61] Trace "Create /api/v1/namespaces/default/services" (started 2017-03-27 15:34:34.639707118 +0000 UTC): Mar 27 15:34:35 bv-vm-00 bootkube-start[3617]: [ 342.044720] bootkube[5]: [9.459µs] [9.459µs] About to convert to expected version 
Mar 27 15:34:35 bv-vm-00 bootkube-start[3617]: [ 342.044988] bootkube[5]: [63.239µs] [53.78µs] Conversion done Mar 27 15:34:35 bv-vm-00 bootkube-start[3617]: [ 342.045259] bootkube[5]: [901.808337ms] [901.745098ms] About to store object in database Mar 27 15:34:35 bv-vm-00 bootkube-start[3617]: [ 342.045523] bootkube[5]: [904.39688ms] [2.588543ms] Object stored in database Mar 27 15:34:35 bv-vm-00 bootkube-start[3617]: [ 342.045779] bootkube[5]: [904.400299ms] [3.419µs] Self-link added Mar 27 15:34:35 bv-vm-00 bootkube-start[3617]: [ 342.046111] bootkube[5]: [904.458083ms] [57.784µs] END Mar 27 15:34:37 bv-vm-00 bootkube-start[3617]: [ 344.436754] bootkube[5]: I0327 15:34:37.939651 5 leaderelection.go:188] sucessfully acquired lease kube-system/kube-controller-manager Mar 27 15:34:37 bv-vm-00 bootkube-start[3617]: [ 344.437202] bootkube[5]: I0327 15:34:37.939701 5 event.go:217] Event(api.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"e31e9e4c-1302-11e7-ada4-000c29932415", APIVersion:"v1", ResourceVersion:"24", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' rkt-2fbe69d7-891c-45f5-b2a5-d3efa9b69894 became leader Mar 27 15:34:37 bv-vm-00 bootkube-start[3617]: [ 344.438538] bootkube[5]: I0327 15:34:37.941420 5 plugins.go:94] No cloud provider specified. Mar 27 15:34:37 bv-vm-00 bootkube-start[3617]: [ 344.438782] bootkube[5]: W0327 15:34:37.941460 5 controllermanager.go:289] Unsuccessful parsing of service CIDR : invalid CIDR address: Mar 27 15:34:37 bv-vm-00 bootkube-start[3617]: [ 344.438978] bootkube[5]: I0327 15:34:37.941568 5 nodecontroller.go:189] Sending events to api server. Mar 27 15:34:37 bv-vm-00 bootkube-start[3617]: [ 344.439176] bootkube[5]: I0327 15:34:37.941684 5 replication_controller.go:219] Starting RC Manager Mar 27 15:34:38 bv-vm-00 bootkube-start[3617]: [ 345.245371] bootkube[5]: I0327 15:34:38.748280 5 leaderelection.go:188] sucessfully acquired lease kube-system/kube-scheduler Mar 27 15:34:38 bv-vm-00 bootkube-start[3617]: [ 345.245711] bootkube[5]: I0327 15:34:38.748338 5 event.go:217] Event(api.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-scheduler", UID:"e39a0102-1302-11e7-ada4-000c29932415", APIVersion:"v1", ResourceVersion:"26", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' rkt-2fbe69d7-891c-45f5-b2a5-d3efa9b69894 became leader Mar 27 15:34:39 bv-vm-00 bootkube-start[3617]: [ 345.982438] bootkube[5]: Creating self-hosted assets... 
Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.060555] bootkube[5]: I0327 15:34:44.563452 5 log.go:19] secret "kube-apiserver" created Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.060855] bootkube[5]: created kube-apiserver secret Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.099128] bootkube[5]: I0327 15:34:44.602013 5 log.go:19] daemonset "kube-apiserver" created Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.099997] bootkube[5]: created kube-apiserver daemonset Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.110085] bootkube[5]: I0327 15:34:44.612987 5 log.go:19] poddisruptionbudget "kube-controller-manager" created Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.110469] bootkube[5]: created kube-controller-manager poddisruptionbudget Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.143607] bootkube[5]: I0327 15:34:44.646501 5 log.go:19] secret "kube-controller-manager" created Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.144074] bootkube[5]: created kube-controller-manager secret Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.183219] bootkube[5]: I0327 15:34:44.686127 5 log.go:19] deployment "kube-controller-manager" created Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.183531] bootkube[5]: created kube-controller-manager deployment Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.225941] bootkube[5]: I0327 15:34:44.728845 5 log.go:19] deployment "kube-dns" created Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.226309] bootkube[5]: created kube-dns deployment Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.279818] bootkube[5]: I0327 15:34:44.782733 5 log.go:19] service "kube-dns" created Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.280214] bootkube[5]: created kube-dns service Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.313626] bootkube[5]: I0327 15:34:44.816487 5 log.go:19] configmap "kube-flannel-cfg" created Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.314133] bootkube[5]: created kube-flannel-cfg configmap Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.336579] bootkube[5]: I0327 15:34:44.839486 5 log.go:19] daemonset "kube-flannel" created Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.337018] bootkube[5]: created kube-flannel daemonset Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.374653] bootkube[5]: I0327 15:34:44.877558 5 log.go:19] daemonset "kube-proxy" created Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.374998] bootkube[5]: created kube-proxy daemonset Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.380287] bootkube[5]: I0327 15:34:44.883190 5 log.go:19] poddisruptionbudget "kube-scheduler" created Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.380860] bootkube[5]: created kube-scheduler poddisruptionbudget Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.401843] bootkube[5]: I0327 15:34:44.904751 5 log.go:19] deployment "kube-scheduler" created Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.402224] bootkube[5]: created kube-scheduler deployment Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.417668] bootkube[5]: I0327 15:34:44.920590 5 log.go:19] clusterrolebinding "system:default-sa" created Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.417960] bootkube[5]: created system:default-sa clusterrolebinding Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.439120] bootkube[5]: I0327 15:34:44.942038 5 log.go:19] daemonset "checkpoint-installer" created Mar 27 15:34:44 bv-vm-00 bootkube-start[3617]: [ 351.439566] 
bootkube[5]: created checkpoint-installer daemonset Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.440471] bootkube[5]: I0327 15:34:47.943341 5 cidr_allocator.go:92] No Service CIDR provided. Skipping filtering out service addresses. Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.440925] bootkube[5]: I0327 15:34:47.943365 5 cidr_allocator.go:98] Node bv-vm-00 has no CIDR, ignoring Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.441262] bootkube[5]: I0327 15:34:47.943370 5 cidr_allocator.go:98] Node bv-vm-01 has no CIDR, ignoring Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.441600] bootkube[5]: E0327 15:34:47.943577 5 controllermanager.go:305] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail. Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.441895] bootkube[5]: I0327 15:34:47.943601 5 controllermanager.go:322] Will not configure cloud provider routes for allocate-node-cidrs: true, configure-cloud-routes: false. Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.442208] bootkube[5]: E0327 15:34:47.943901 5 util.go:45] Metric for replenishment_controller already registered Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.442596] bootkube[5]: E0327 15:34:47.943913 5 util.go:45] Metric for replenishment_controller already registered Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.442942] bootkube[5]: E0327 15:34:47.943919 5 util.go:45] Metric for replenishment_controller already registered Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.443298] bootkube[5]: E0327 15:34:47.943929 5 util.go:45] Metric for replenishment_controller already registered Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.443646] bootkube[5]: E0327 15:34:47.943936 5 util.go:45] Metric for replenishment_controller already registered Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.451034] bootkube[5]: I0327 15:34:47.953930 5 controllermanager.go:403] Starting extensions/v1beta1 apis Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.451515] bootkube[5]: I0327 15:34:47.953965 5 controllermanager.go:406] Starting daemon set controller Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.451987] bootkube[5]: I0327 15:34:47.954246 5 controllermanager.go:413] Starting job controller Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.452439] bootkube[5]: I0327 15:34:47.954321 5 daemoncontroller.go:192] Starting Daemon Sets controller manager Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.452897] bootkube[5]: I0327 15:34:47.954524 5 controllermanager.go:420] Starting deployment controller Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.453324] bootkube[5]: I0327 15:34:47.954742 5 controllermanager.go:427] Starting ReplicaSet controller Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.453732] bootkube[5]: I0327 15:34:47.954829 5 deployment_controller.go:132] Starting deployment controller Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.454168] bootkube[5]: I0327 15:34:47.955029 5 replica_set.go:162] Starting ReplicaSet controller Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.454585] bootkube[5]: I0327 15:34:47.954963 5 controllermanager.go:436] Attempting to start horizontal pod autoscaler controller, full resource map map[apps/v1beta1:&APIResourceList{GroupVersion:apps/v1beta1,APIResources:[{statefulsets true StatefulSet} {statefulsets/status true StatefulSet}],} authentication.k8s.io/v1beta1:&APIResourceList{GroupVersion:authentication.k8s.io/v1beta1,APIResources:[{tokenreviews false 
TokenReview}],} autoscaling/v1:&APIResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler}],} policy/v1beta1:&APIResourceList{GroupVersion:policy/v1beta1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],} storage.k8s.io/v1beta1:&APIResourceList{GroupVersion:storage.k8s.io/v1beta1,APIResources:[{storageclasses false StorageClass}],} v1:&APIResourceList{GroupVersion:v1,APIResources:[{bindings true Binding} {componentstatuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Namespace} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status false PersistentVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true ResourceQuota} {secrets true Secret} Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: {serviceaccounts true Servic Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.456051] bootkube[5]: eAccount} {services true Service} {services/proxy true Service} {services/status true Service}],} authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{localsubjectaccessreviews true LocalSubjectAccessReview} {selfsubjectaccessreviews false SelfSubjectAccessReview} {subjectaccessreviews false SubjectAccessReview}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status true Job}],} batch/v2alpha1:&APIResourceList{GroupVersion:batch/v2alpha1,APIResources:[{cronjobs true CronJob} {cronjobs/status true CronJob} {jobs true Job} {jobs/status true Job} {scheduledjobs true ScheduledJob} {scheduledjobs/status true ScheduledJob}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false CertificateSigningRequest}],} extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true ReplicationControllerDummy} {replicationcontrollers/scale true Scale} {thirdpartyresources false 
ThirdPartyResource}],} rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings tr Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: ue RoleBinding} {roles true Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.457371] bootkube[5]: Role}],}] Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.457741] bootkube[5]: I0327 15:34:47.955087 5 controllermanager.go:438] Starting autoscaling/v1 apis Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.458132] bootkube[5]: I0327 15:34:47.955100 5 controllermanager.go:440] Starting horizontal pod controller. Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.458564] bootkube[5]: I0327 15:34:47.955228 5 controllermanager.go:458] Attempting to start disruption controller, full resource map map[rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings true RoleBinding} {roles true Role}],} authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{localsubjectaccessreviews true LocalSubjectAccessReview} {selfsubjectaccessreviews false SelfSubjectAccessReview} {subjectaccessreviews false SubjectAccessReview}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status true Job}],} batch/v2alpha1:&APIResourceList{GroupVersion:batch/v2alpha1,APIResources:[{cronjobs true CronJob} {cronjobs/status true CronJob} {jobs true Job} {jobs/status true Job} {scheduledjobs true ScheduledJob} {scheduledjobs/status true ScheduledJob}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false CertificateSigningRequest}],} extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true ReplicationControllerDummy} {replicationcontrollers/scale tr Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: ue Scale} {thirdpartyresourc Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.460086] bootkube[5]: es false ThirdPartyResource}],} v1:&APIResourceList{GroupVersion:v1,APIResources:[{bindings true Binding} {componentstatuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Namespace} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status 
false PersistentVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true ResourceQuota} {secrets true Secret} {serviceaccounts true ServiceAccount} {services true Service} {services/proxy true Service} {services/status true Service}],} apps/v1beta1:&APIResourceList{GroupVersion:apps/v1beta1,APIResources:[{statefulsets true StatefulSet} {statefulsets/status true StatefulSet}],} authentication.k8s.io/v1beta1:&APIResourceList{GroupVersion:authentication.k8s.io/v1beta1,APIResources:[{tokenreviews false TokenReview}],} autoscaling/v1:&APIResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler}],} policy/v1beta1:&APIResourceList{GroupVersion:policy/v1beta1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],} storage.k8s.io/v1beta1:&APIResourceList{GroupVersion:storage.k8s.io/v1beta1,APIResources:[{storageclasses f Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: alse StorageClass}],}] Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.461509] bootkube[5]: I0327 15:34:47.955331 5 controllermanager.go:460] Starting policy/v1beta1 apis Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.462025] bootkube[5]: I0327 15:34:47.955340 5 horizontal.go:132] Starting HPA Controller Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.462495] bootkube[5]: I0327 15:34:47.955345 5 controllermanager.go:462] Starting disruption controller Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.463018] bootkube[5]: I0327 15:34:47.955696 5 disruption.go:317] Starting disruption controller Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.463506] bootkube[5]: I0327 15:34:47.955706 5 disruption.go:319] Sending events to api server. 
Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.464017] bootkube[5]: I0327 15:34:47.955626 5 controllermanager.go:470] Attempting to start statefulset, full resource map map[autoscaling/v1:&APIResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler}],} policy/v1beta1:&APIResourceList{GroupVersion:policy/v1beta1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],} storage.k8s.io/v1beta1:&APIResourceList{GroupVersion:storage.k8s.io/v1beta1,APIResources:[{storageclasses false StorageClass}],} v1:&APIResourceList{GroupVersion:v1,APIResources:[{bindings true Binding} {componentstatuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Namespace} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status false PersistentVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true ResourceQuota} {secrets true Secret} {serviceaccounts true ServiceAccount} {services true Service} {services/proxy true Service} {services/status true Service}],} apps/v1beta1:&APIResourceList{GroupVersion:apps/v1beta1,APIResources:[{statefulsets true StatefulSet} {statefulsets/status true StatefulSet}],} authentication.k8s.io/v1beta1:&APIResou Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: rceList{GroupVersion:authent Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.465436] bootkube[5]: ication.k8s.io/v1beta1,APIResources:[{tokenreviews false TokenReview}],} batch/v2alpha1:&APIResourceList{GroupVersion:batch/v2alpha1,APIResources:[{cronjobs true CronJob} {cronjobs/status true CronJob} {jobs true Job} {jobs/status true Job} {scheduledjobs true ScheduledJob} {scheduledjobs/status true ScheduledJob}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false CertificateSigningRequest}],} extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true 
ReplicationControllerDummy} {replicationcontrollers/scale true Scale} {thirdpartyresources false ThirdPartyResource}],} rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings true RoleBinding} {roles true Role}],} authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{localsubjectaccessreviews true LocalSubjectAccessReview} {selfsubjectaccessreviews false SelfSubjectAccessReview} {subjectaccessreviews false SubjectAccessReview}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status t Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: rue Job}],}] Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.466941] bootkube[5]: I0327 15:34:47.955728 5 controllermanager.go:472] Starting apps/v1beta1 apis Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.467451] bootkube[5]: I0327 15:34:47.955741 5 controllermanager.go:474] Starting StatefulSet controller Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.467882] bootkube[5]: I0327 15:34:47.956029 5 controllermanager.go:488] Starting batch/v2alpha1 apis Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.468309] bootkube[5]: I0327 15:34:47.956045 5 controllermanager.go:490] Starting cronjob controller Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.468696] bootkube[5]: I0327 15:34:47.956344 5 pet_set.go:146] Starting statefulset controller Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.469118] bootkube[5]: I0327 15:34:47.957807 5 controller.go:91] Starting CronJob Manager Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.473762] bootkube[5]: I0327 15:34:47.976560 5 controllermanager.go:544] Attempting to start certificates, full resource map map[v1:&APIResourceList{GroupVersion:v1,APIResources:[{bindings true Binding} {componentstatuses false ComponentStatus} {configmaps true ConfigMap} {endpoints true Endpoints} {events true Event} {limitranges true LimitRange} {namespaces false Namespace} {namespaces/finalize false Namespace} {namespaces/status false Namespace} {nodes false Node} {nodes/proxy false Node} {nodes/status false Node} {persistentvolumeclaims true PersistentVolumeClaim} {persistentvolumeclaims/status true PersistentVolumeClaim} {persistentvolumes false PersistentVolume} {persistentvolumes/status false PersistentVolume} {pods true Pod} {pods/attach true Pod} {pods/binding true Binding} {pods/eviction true Eviction} {pods/exec true Pod} {pods/log true Pod} {pods/portforward true Pod} {pods/proxy true Pod} {pods/status true Pod} {podtemplates true PodTemplate} {replicationcontrollers true ReplicationController} {replicationcontrollers/scale true Scale} {replicationcontrollers/status true ReplicationController} {resourcequotas true ResourceQuota} {resourcequotas/status true ResourceQuota} {secrets true Secret} {serviceaccounts true ServiceAccount} {services true Service} {services/proxy true Service} {services/status true Service}],} apps/v1beta1:&APIResourceList{GroupVersion:apps/v1beta1,APIResources:[{statefulsets true StatefulSet} {statefulsets/status true StatefulSet}],} authentication.k8s.io/v1beta1:&APIResourceList{GroupVersion:authentication.k8s.io/v1beta1,APIResources:[{tokenreviews false TokenReview}],} autoscaling/v1:&APIResourceList{GroupVersion:autoscaling/v1,APIResources:[{horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true 
HorizontalPodAutoscaler}],} policy/v1beta1:&APIResourceList{GroupVersion:policy/v1beta1,APIResources:[{poddisruptionbudgets true PodDisruptionBudget} {poddisruptionbudgets/status true PodDisruptionBudget}],} storage.k8s.io/v1beta1:&API Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: ResourceList{GroupVersion:st Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.475536] bootkube[5]: orage.k8s.io/v1beta1,APIResources:[{storageclasses false StorageClass}],} rbac.authorization.k8s.io/v1alpha1:&APIResourceList{GroupVersion:rbac.authorization.k8s.io/v1alpha1,APIResources:[{clusterrolebindings false ClusterRoleBinding} {clusterroles false ClusterRole} {rolebindings true RoleBinding} {roles true Role}],} authorization.k8s.io/v1beta1:&APIResourceList{GroupVersion:authorization.k8s.io/v1beta1,APIResources:[{localsubjectaccessreviews true LocalSubjectAccessReview} {selfsubjectaccessreviews false SelfSubjectAccessReview} {subjectaccessreviews false SubjectAccessReview}],} batch/v1:&APIResourceList{GroupVersion:batch/v1,APIResources:[{jobs true Job} {jobs/status true Job}],} batch/v2alpha1:&APIResourceList{GroupVersion:batch/v2alpha1,APIResources:[{cronjobs true CronJob} {cronjobs/status true CronJob} {jobs true Job} {jobs/status true Job} {scheduledjobs true ScheduledJob} {scheduledjobs/status true ScheduledJob}],} certificates.k8s.io/v1alpha1:&APIResourceList{GroupVersion:certificates.k8s.io/v1alpha1,APIResources:[{certificatesigningrequests false CertificateSigningRequest} {certificatesigningrequests/approval false CertificateSigningRequest} {certificatesigningrequests/status false CertificateSigningRequest}],} extensions/v1beta1:&APIResourceList{GroupVersion:extensions/v1beta1,APIResources:[{daemonsets true DaemonSet} {daemonsets/status true DaemonSet} {deployments true Deployment} {deployments/rollback true DeploymentRollback} {deployments/scale true Scale} {deployments/status true Deployment} {horizontalpodautoscalers true HorizontalPodAutoscaler} {horizontalpodautoscalers/status true HorizontalPodAutoscaler} {ingresses true Ingress} {ingresses/status true Ingress} {jobs true Job} {jobs/status true Job} {networkpolicies true NetworkPolicy} {replicasets true ReplicaSet} {replicasets/scale true Scale} {replicasets/status true ReplicaSet} {replicationcontrollers true ReplicationControllerDummy} {replicationcontrollers/scale true Scale} {thirdpartyresources false ThirdParty Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: Resource}],}] Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.476840] bootkube[5]: I0327 15:34:47.976682 5 attach_detach_controller.go:204] Starting Attach Detach Controller Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.477331] bootkube[5]: I0327 15:34:47.976697 5 controllermanager.go:546] Starting certificates.k8s.io/v1alpha1 apis Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.477811] bootkube[5]: I0327 15:34:47.976710 5 controllermanager.go:548] Starting certificate request controller Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.478294] bootkube[5]: E0327 15:34:47.978181 5 controllermanager.go:558] Failed to start certificate controller: open /etc/kubernetes/ca/ca.pem: no such file or directory Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.478754] bootkube[5]: E0327 15:34:47.978401 5 util.go:45] Metric for serviceaccount_controller already registered Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.479239] bootkube[5]: I0327 15:34:47.978538 5 serviceaccounts_controller.go:120] Starting ServiceAccount controller Mar 27 15:34:47 bv-vm-00 
bootkube-start[3617]: [ 354.482438] bootkube[5]: I0327 15:34:47.985346 5 garbagecollector.go:766] Garbage Collector: Initializing Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.483392] bootkube[5]: E0327 15:34:47.986296 5 actual_state_of_world.go:462] Failed to set statusUpdateNeeded to needed true because nodeName="bv-vm-00" does not exist Mar 27 15:34:47 bv-vm-00 bootkube-start[3617]: [ 354.483858] bootkube[5]: E0327 15:34:47.986320 5 actual_state_of_world.go:462] Failed to set statusUpdateNeeded to needed true because nodeName="bv-vm-01" does not exist Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.541882] bootkube[5]: I0327 15:34:48.044768 5 nodecontroller.go:429] Initializing eviction metric for zone: Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.542504] bootkube[5]: W0327 15:34:48.044810 5 nodecontroller.go:678] Missing timestamp for Node bv-vm-00. Assuming now as a timestamp. Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.542987] bootkube[5]: W0327 15:34:48.044851 5 nodecontroller.go:678] Missing timestamp for Node bv-vm-01. Assuming now as a timestamp. Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.543479] bootkube[5]: I0327 15:34:48.044881 5 nodecontroller.go:608] NodeController detected that zone is now in state Normal. Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.543932] bootkube[5]: I0327 15:34:48.044900 5 event.go:217] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"bv-vm-00", UID:"e3e5b4b0-1302-11e7-ada4-000c29932415", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node bv-vm-00 event: Registered Node bv-vm-00 in NodeController Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.544366] bootkube[5]: I0327 15:34:48.044923 5 event.go:217] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"bv-vm-01", UID:"e50e726f-1302-11e7-ada4-000c29932415", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node bv-vm-01 event: Registered Node bv-vm-01 in NodeController Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.555438] bootkube[5]: I0327 15:34:48.058323 5 event.go:217] Event(api.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kube-dns", UID:"e72a725f-1302-11e7-ada4-000c29932415", APIVersion:"extensions", ResourceVersion:"85", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kube-dns-4101612645 to 1 Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.556150] bootkube[5]: I0327 15:34:48.059045 5 event.go:217] Event(api.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kube-scheduler", UID:"e7455f99-1302-11e7-ada4-000c29932415", APIVersion:"extensions", ResourceVersion:"95", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kube-scheduler-3027616201 to 2 Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.563739] bootkube[5]: I0327 15:34:48.066637 5 event.go:217] Event(api.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kube-controller-manager", UID:"e723fc32-1302-11e7-ada4-000c29932415", APIVersion:"extensions", ResourceVersion:"84", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kube-controller-manager-1472766980 to 2 Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.662952] bootkube[5]: I0327 15:34:48.165828 5 event.go:217] Event(api.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-apiserver", UID:"e7172026-1302-11e7-ada4-000c29932415", APIVersion:"extensions", 
ResourceVersion:"79", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-apiserver-f8tfb Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.671393] bootkube[5]: I0327 15:34:48.174296 5 event.go:217] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-controller-manager-1472766980", UID:"e9268954-1302-11e7-ada4-000c29932415", APIVersion:"extensions", ResourceVersion:"146", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-controller-manager-1472766980-8nn9q Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.679666] bootkube[5]: I0327 15:34:48.182576 5 event.go:217] Event(api.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-flannel", UID:"e73b641e-1302-11e7-ada4-000c29932415", APIVersion:"extensions", ResourceVersion:"91", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-flannel-b797w Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.680013] bootkube[5]: I0327 15:34:48.182921 5 event.go:217] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-1472766980-8nn9q", UID:"e9381b8b-1302-11e7-ada4-000c29932415", APIVersion:"v1", ResourceVersion:"166", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kube-controller-manager-1472766980-8nn9q to bv-vm-00 Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.680322] bootkube[5]: I0327 15:34:48.182931 5 event.go:217] Event(api.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-flannel", UID:"e73b641e-1302-11e7-ada4-000c29932415", APIVersion:"extensions", ResourceVersion:"91", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-flannel-zs1mz Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.687875] bootkube[5]: I0327 15:34:48.190791 5 event.go:217] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-scheduler-3027616201", UID:"e9267f28-1302-11e7-ada4-000c29932415", APIVersion:"extensions", ResourceVersion:"145", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-scheduler-3027616201-h1mgl Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.688894] bootkube[5]: I0327 15:34:48.191807 5 event.go:217] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-controller-manager-1472766980", UID:"e9268954-1302-11e7-ada4-000c29932415", APIVersion:"extensions", ResourceVersion:"146", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-controller-manager-1472766980-wwl88 Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.697015] bootkube[5]: I0327 15:34:48.199864 5 event.go:217] Event(api.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"e7413b07-1302-11e7-ada4-000c29932415", APIVersion:"extensions", ResourceVersion:"93", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-dnrs7 Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.697324] bootkube[5]: I0327 15:34:48.200023 5 event.go:217] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-scheduler-3027616201-h1mgl", UID:"e939cbee-1302-11e7-ada4-000c29932415", APIVersion:"v1", ResourceVersion:"172", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kube-scheduler-3027616201-h1mgl to bv-vm-00 Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.697602] bootkube[5]: I0327 15:34:48.200443 5 event.go:217] Event(api.ObjectReference{Kind:"DaemonSet", 
Namespace:"kube-system", Name:"kube-proxy", UID:"e7413b07-1302-11e7-ada4-000c29932415", APIVersion:"extensions", ResourceVersion:"93", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-548kk Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.700107] bootkube[5]: I0327 15:34:48.200049 5 event.go:217] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-controller-manager-1472766980-wwl88", UID:"e93a166d-1302-11e7-ada4-000c29932415", APIVersion:"v1", ResourceVersion:"174", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kube-controller-manager-1472766980-wwl88 to bv-vm-00 Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.707674] bootkube[5]: I0327 15:34:48.210582 5 event.go:217] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-dns-4101612645", UID:"e926808a-1302-11e7-ada4-000c29932415", APIVersion:"extensions", ResourceVersion:"144", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-dns-4101612645-qzxn5 Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.721732] bootkube[5]: I0327 15:34:48.224618 5 event.go:217] Event(api.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"checkpoint-installer", UID:"e74b0e47-1302-11e7-ada4-000c29932415", APIVersion:"extensions", ResourceVersion:"97", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: checkpoint-installer-qdsmm Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.728369] bootkube[5]: I0327 15:34:48.231283 5 event.go:217] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-4101612645-qzxn5", UID:"e93cc712-1302-11e7-ada4-000c29932415", APIVersion:"v1", ResourceVersion:"190", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kube-dns-4101612645-qzxn5 to bv-vm-01 Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.735261] bootkube[5]: E0327 15:34:48.238172 5 daemoncontroller.go:225] kube-system/checkpoint-installer failed with : error storing status for daemon set &extensions.DaemonSet{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:api.ObjectMeta{Name:"checkpoint-installer", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/extensions/v1beta1/namespaces/kube-system/daemonsets/checkpoint-installer", UID:"e74b0e47-1302-11e7-ada4-000c29932415", ResourceVersion:"97", Generation:1, CreationTimestamp:unversioned.Time{Time:time.Time{sec:63626225684, nsec:0, loc:(*time.Location)(0x5c6a840)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"pod-checkpoint-installer"}, Annotations:map[string]string(nil), OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:extensions.DaemonSetSpec{Selector:(*unversioned.LabelSelector)(0xc4227a3a40), Template:api.PodTemplateSpec{ObjectMeta:api.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:unversioned.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*unversioned.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"pod-checkpoint-installer"}, Annotations:map[string]string(nil), OwnerReferences:[]api.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:api.PodSpec{Volumes:[]api.Volume{api.Volume{Name:"etc-k8s-manifests", VolumeSource:api.VolumeSource{HostPath:(*api.HostPathVolumeSource)(0xc422fe0340), 
EmptyDir:(*api.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*api.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*api.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*api.GitRepoVolumeSource)(nil), Secret:(*api.SecretVolumeSource)(nil), NFS:(*api.NFSVolumeSource)(nil), ISCSI:(*api.ISCSIVolumeSource)(nil), Glusterfs:(*api.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*api.Pers Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: istentVolumeClaimVolumeSourc Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.736594] bootkube[5]: e)(nil), RBD:(*api.RBDVolumeSource)(nil), Quobyte:(*api.QuobyteVolumeSource)(nil), FlexVolume:(*api.FlexVolumeSource)(nil), Cinder:(*api.CinderVolumeSource)(nil), CephFS:(*api.CephFSVolumeSource)(nil), Flocker:(*api.FlockerVolumeSource)(nil), DownwardAPI:(*api.DownwardAPIVolumeSource)(nil), FC:(*api.FCVolumeSource)(nil), AzureFile:(*api.AzureFileVolumeSource)(nil), ConfigMap:(*api.ConfigMapVolumeSource)(nil), VsphereVolume:(*api.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*api.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*api.PhotonPersistentDiskVolumeSource)(nil)}}}, InitContainers:[]api.Container(nil), Containers:[]api.Container{api.Container{Name:"checkpoint-installer", Image:"quay.io/coreos/pod-checkpointer:417b8f7552ccf3db192ba1e5472e524848f0eb5f", Command:[]string{"/checkpoint-installer.sh"}, Args:[]string(nil), WorkingDir:"", Ports:[]api.ContainerPort(nil), Env:[]api.EnvVar(nil), Resources:api.ResourceRequirements{Limits:api.ResourceList(nil), Requests:api.ResourceList(nil)}, VolumeMounts:[]api.VolumeMount{api.VolumeMount{Name:"etc-k8s-manifests", ReadOnly:false, MountPath:"/etc/kubernetes/manifests", SubPath:""}}, LivenessProbe:(*api.Probe)(nil), ReadinessProbe:(*api.Probe)(nil), Lifecycle:(*api.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", ImagePullPolicy:"IfNotPresent", SecurityContext:(*api.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc422fe0370), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"master":"true"}, ServiceAccountName:"", NodeName:"", SecurityContext:(*api.PodSecurityContext)(0xc422c3da80), ImagePullSecrets:[]api.LocalObjectReference(nil), Hostname:"", Subdomain:""}}}, Status:extensions.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0}}: Operation cannot be fulfilled on daemonsets.extensions "checkpoint-installer": the object has been modified; pleas Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: e apply your changes to the Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.737719] bootkube[5]: latest version and try again Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.739928] bootkube[5]: I0327 15:34:48.242838 5 event.go:217] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-scheduler-3027616201", UID:"e9267f28-1302-11e7-ada4-000c29932415", APIVersion:"extensions", ResourceVersion:"145", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-scheduler-3027616201-cvrzv Mar 27 15:34:48 bv-vm-00 bootkube-start[3617]: [ 354.748390] bootkube[5]: I0327 15:34:48.251308 5 event.go:217] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-scheduler-3027616201-cvrzv", UID:"e941efb7-1302-11e7-ada4-000c29932415", APIVersion:"v1", ResourceVersion:"211", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned 
kube-scheduler-3027616201-cvrzv to bv-vm-00 Mar 27 15:34:49 bv-vm-00 bootkube-start[3617]: [ 355.981397] bootkube[5]: Pod Status: pod-checkpointer DoesNotExist Mar 27 15:34:49 bv-vm-00 bootkube-start[3617]: [ 355.981698] bootkube[5]: Pod Status: kube-apiserver Pending Mar 27 15:34:49 bv-vm-00 bootkube-start[3617]: [ 355.981892] bootkube[5]: Pod Status: kube-scheduler Pending Mar 27 15:34:49 bv-vm-00 bootkube-start[3617]: [ 355.982103] bootkube[5]: Pod Status: kube-controller-manager Pending Mar 27 15:34:58 bv-vm-00 bootkube-start[3617]: [ 364.482631] bootkube[5]: I0327 15:34:57.985509 5 garbagecollector.go:780] Garbage Collector: All monitored resources synced. Proceeding to collect garbage Mar 27 15:37:04 bv-vm-00 bootkube-start[3617]: [ 490.981466] bootkube[5]: Pod Status: kube-apiserver Running Mar 27 15:37:04 bv-vm-00 bootkube-start[3617]: [ 491.039641] bootkube[5]: Pod Status: kube-scheduler Pending Mar 27 15:37:04 bv-vm-00 bootkube-start[3617]: [ 491.039882] bootkube[5]: Pod Status: kube-controller-manager Pending Mar 27 15:37:04 bv-vm-00 bootkube-start[3617]: [ 491.040093] bootkube[5]: Pod Status: pod-checkpointer DoesNotExist Mar 27 15:37:29 bv-vm-00 bootkube-start[3617]: [ 515.981450] bootkube[5]: Pod Status: pod-checkpointer Pending Mar 27 15:37:29 bv-vm-00 bootkube-start[3617]: [ 515.981827] bootkube[5]: Pod Status: kube-apiserver Running Mar 27 15:37:29 bv-vm-00 bootkube-start[3617]: [ 515.982036] bootkube[5]: Pod Status: kube-scheduler Pending Mar 27 15:37:29 bv-vm-00 bootkube-start[3617]: [ 515.982221] bootkube[5]: Pod Status: kube-controller-manager Pending Mar 27 15:37:39 bv-vm-00 bootkube-start[3617]: [ 525.981528] bootkube[5]: Pod Status: kube-scheduler Pending Mar 27 15:37:39 bv-vm-00 bootkube-start[3617]: [ 525.982167] bootkube[5]: Pod Status: kube-controller-manager Pending Mar 27 15:37:39 bv-vm-00 bootkube-start[3617]: [ 525.982448] bootkube[5]: Pod Status: pod-checkpointer Running Mar 27 15:37:39 bv-vm-00 bootkube-start[3617]: [ 525.982715] bootkube[5]: Pod Status: kube-apiserver Running Mar 27 15:37:44 bv-vm-00 bootkube-start[3617]: [ 530.981435] bootkube[5]: Pod Status: pod-checkpointer Running Mar 27 15:37:44 bv-vm-00 bootkube-start[3617]: [ 530.981791] bootkube[5]: Pod Status: kube-apiserver Running Mar 27 15:37:44 bv-vm-00 bootkube-start[3617]: [ 530.981965] bootkube[5]: Pod Status: kube-scheduler Running Mar 27 15:37:44 bv-vm-00 bootkube-start[3617]: [ 530.982169] bootkube[5]: Pod Status: kube-controller-manager Running Mar 27 15:37:44 bv-vm-00 bootkube-start[3617]: [ 530.982328] bootkube[5]: All self-hosted control plane components successfully started Mar 27 15:37:44 bv-vm-00 bootkube-start[3617]: Waiting for Kubernetes API... Mar 27 15:37:49 bv-vm-00 bootkube-start[3617]: Waiting for Kubernetes API... 
Mar 27 15:37:49 bv-vm-00 bootkube-start[3617]: { Mar 27 15:37:49 bv-vm-00 bootkube-start[3617]: "major": "1", Mar 27 15:37:49 bv-vm-00 bootkube-start[3617]: "minor": "5", Mar 27 15:37:49 bv-vm-00 bootkube-start[3617]: "gitVersion": "v1.5.5+coreos.0", Mar 27 15:37:49 bv-vm-00 bootkube-start[3617]: "gitCommit": "e2cd7718ba844e874a68aa13f8f5bb0728b415bb", Mar 27 15:37:49 bv-vm-00 bootkube-start[3617]: "gitTreeState": "clean", Mar 27 15:37:49 bv-vm-00 bootkube-start[3617]: "buildDate": "2017-03-22T01:59:35Z", Mar 27 15:37:49 bv-vm-00 bootkube-start[3617]: "goVersion": "go1.7.4", Mar 27 15:37:49 bv-vm-00 bootkube-start[3617]: "compiler": "gc", Mar 27 15:37:49 bv-vm-00 bootkube-start[3617]: "platform": "linux/amd64" Mar 27 15:37:49 bv-vm-00 bootkube-start[3617]: } Mar 27 15:37:49 bv-vm-00 bootkube-start[3617]: Waiting for Kubernetes components... Mar 27 15:37:59 bv-vm-00 bootkube-start[3617]: Creating Heapster Mar 27 15:37:59 bv-vm-00 bootkube-start[3617]: Creating Tectonic Namespace Mar 27 15:37:59 bv-vm-00 bootkube-start[3617]: Creating Initial Roles Mar 27 15:37:59 bv-vm-00 bootkube-start[3617]: Creating Tectonic ConfigMap Mar 27 15:37:59 bv-vm-00 bootkube-start[3617]: Creating Tectonic Secrets Mar 27 15:37:59 bv-vm-00 bootkube-start[3617]: Creating Tectonic Identity Mar 27 15:37:59 bv-vm-00 bootkube-start[3617]: Creating Tectonic Console Mar 27 15:37:59 bv-vm-00 bootkube-start[3617]: Creating Tectonic Monitoring Mar 27 15:38:00 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:03 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:06 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:09 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:12 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:15 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:18 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:21 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:24 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:27 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:30 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:33 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:36 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:39 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:42 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:45 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:48 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:51 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:51 bv-vm-00 bootkube-start[3617]: Creating Ingress Mar 27 15:38:51 bv-vm-00 bootkube-start[3617]: Creating Tectonic Stats Emitter Mar 27 15:38:51 bv-vm-00 bootkube-start[3617]: Creating Tectonic Updater Mar 27 15:38:51 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:51 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... 
Mar 27 15:38:54 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:38:57 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:39:00 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:39:00 bv-vm-00 bootkube-start[3617]: Waiting for third-party resource definitions... Mar 27 15:39:00 bv-vm-00 bootkube-start[3617]: Done
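
The bootkube log ends with `Done`, so the bootstrap control plane itself came up. Before digging into individual containers, a cluster-wide pod listing is a quick way to see which workloads are actually unhealthy (a minimal sketch, assuming kubectl on the controller node is already configured against this cluster):

```sh
# List every pod in every namespace, with node placement and restart counts.
# Anything stuck in CrashLoopBackOff or Pending stands out immediately.
kubectl get pods --all-namespaces -o wide

# Narrow the view to the Tectonic components.
kubectl get pods -n tectonic-system
```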

  1. core@bv-vm-00 ~ $ docker ps -a | grep console

113ed91b6684 quay.io/coreos/tectonic-console:v1.1.1 "/opt/bridge/bin/brid" About a minute ago Exited (2) About a minute ago k8s_tectonic-console.c236deed_tectonic-console-1465042875-r8r8g_tectonic-system_67053997-1303-11e7-8d69-000c29932415_d08b5989
532ae44e9f3b gcr.io/google_containers/pause-amd64:3.0 "/pause" 50 minutes ago Up 50 minutes k8s_POD.d8dbe16c_tectonic-console-1465042875-r8r8g_tectonic-system_67053997-1303-11e7-8d69-000c29932415_efe11242

  1. docker logs 113ed91b6684

2017/03/27 16:26:49 http: Provider config sync failed, retrying in 1s: Get https://bv-vm-01/identity/.well-known/openid-configuration: dial tcp: lookup bv-vm-01 on 10.3.0.10:53: server misbehaving
2017/03/27 16:26:51 http: Provider config sync still failing, retrying in 2s: Get https://bv-vm-01/identity/.well-known/openid-configuration: dial tcp: lookup bv-vm-01 on 10.3.0.10:53: server misbehaving
2017/03/27 16:26:53 http: Provider config sync still failing, retrying in 4s: Get https://bv-vm-01/identity/.well-known/openid-configuration: dial tcp: lookup bv-vm-01 on 10.3.0.10:53: server misbehaving
2017/03/27 16:26:57 http: Provider config sync still failing, retrying in 8s: Get https://bv-vm-01/identity/.well-known/openid-configuration: dial tcp: lookup bv-vm-01 on 10.3.0.10:53: server misbehaving
2017/03/27 16:27:05 http: Provider config sync still failing, retrying in 16s: Get https://bv-vm-01/identity/.well-known/openid-configuration: dial tcp: lookup bv-vm-01 on 10.3.0.10:53: server misbehaving
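
The bridge process exits because it cannot resolve `bv-vm-01` through the cluster DNS service at 10.3.0.10. One way to confirm whether kube-dns can resolve the identity host is to run a throwaway pod and query it directly (a rough sketch, assuming the busybox image can be pulled and that `kubectl run --rm` is available as in Kubernetes 1.5):

```sh
# Start a temporary busybox pod and try to resolve the identity host
# through the cluster DNS server (10.3.0.10 is the kube-dns service IP here).
kubectl run dns-check --rm -it --restart=Never --image=busybox -- \
  nslookup bv-vm-01 10.3.0.10

# For comparison, a cluster-internal name should always resolve.
kubectl run dns-check-2 --rm -it --restart=Never --image=busybox -- \
  nslookup kubernetes.default.svc.cluster.local 10.3.0.10
```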

  1. kubectl describe pod tectonic-console-1465042875-hlzcd tectonic-console-1465042875-r8r8g -n tectonic-system

`Name: tectonic-console-1465042875-hlzcd Namespace: tectonic-system Node: bv-vm-01/172.17.0.63 Start Time: Mon, 27 Mar 2017 15:38:19 +0000 Labels: app=tectonic-console component=ui pod-template-hash=1465042875 Status: Running IP: 10.2.1.6 Controllers: ReplicaSet/tectonic-console-1465042875 Containers: tectonic-console: Container ID: docker://b6c34451e8d4d0472d0f11540c276868e5e489588f976a25037554623472522f Image: quay.io/coreos/tectonic-console:v1.1.1 Image ID: docker-pullable://quay.io/coreos/tectonic-console@sha256:7fd476e9a28f7b8f267b8fea9636125e3dfcc82a8a93ce586b1d63e903d77ccb Port: 80/TCP Command: /opt/bridge/bin/bridge Limits: cpu: 100m memory: 50Mi Requests: cpu: 100m memory: 50Mi State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 2 Started: Mon, 27 Mar 2017 16:29:39 +0000 Finished: Mon, 27 Mar 2017 16:30:09 +0000 Ready: False Restart Count: 21 Liveness: http-get http://:80/health delay=30s timeout=1s period=10s #success=1 #failure=3 Volume Mounts: /etc/ssl/certs from ssl-certs-host (ro) /etc/tectonic-ca-cert-secret from tectonic-ca-cert-secret (ro) /etc/tectonic-identity-grpc-client-secret from tectonic-identity-grpc-client-secret (ro) /etc/tectonic/licenses from tectonic-license (ro) /usr/share/ca-certificates from ca-certs-host (ro) /var/run/secrets/kubernetes.io/serviceaccount from default-token-z8c4s (ro) Environment Variables: BRIDGE_K8S_MODE: in-cluster BRIDGE_K8S_AUTH: oidc BRIDGE_K8S_PUBLIC_ENDPOINT: https://bv-vm-00:443 BRIDGE_LISTEN: http://0.0.0.0:80 BRIDGE_BASE_ADDRESS: https://bv-vm-01 BRIDGE_BASE_PATH: / BRIDGE_PUBLIC_DIR: /opt/bridge/static BRIDGE_USER_AUTH: oidc BRIDGE_USER_AUTH_OIDC_ISSUER_URL: https://bv-vm-01/identity BRIDGE_USER_AUTH_OIDC_CLIENT_ID: tectonic-console BRIDGE_USER_AUTH_OIDC_CLIENT_SECRET: U64m3nH-AxcuN8WzhE6DHg BRIDGE_KUBECTL_CLIENT_ID: tectonic-kubectl BRIDGE_KUBECTL_CLIENT_SECRET: 7ldkH5_7aWqZ6DmE8mp75Q BRIDGE_TECTONIC_VERSION: 1.5.5-tectonic.2 BRIDGE_CA_FILE: /etc/tectonic-ca-cert-secret/ca-cert BRIDGE_LICENSE_FILE: /etc/tectonic/licenses/license Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: tectonic-ca-cert-secret: Type: Secret (a volume populated by a Secret) SecretName: tectonic-ca-cert-secret ssl-certs-host: Type: HostPath (bare host directory volume) Path: /etc/ssl/certs ca-certs-host: Type: HostPath (bare host directory volume) Path: /usr/share/ca-certificates tectonic-license: Type: Secret (a volume populated by a Secret) SecretName: tectonic-license tectonic-identity-grpc-client-secret: Type: Secret (a volume populated by a Secret) SecretName: tectonic-identity-grpc-client-secret default-token-z8c4s: Type: Secret (a volume populated by a Secret) SecretName: default-token-z8c4s QoS Class: Guaranteed Tolerations: Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message


52m 52m 1 {default-scheduler } Normal Scheduled Successfully assigned tectonic-console-1465042875-hlzcd to bv-vm-01 52m 52m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Pulling pulling image "quay.io/coreos/tectonic-console:v1.1.1" 51m 51m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Pulled Successfully pulled image "quay.io/coreos/tectonic-console:v1.1.1" 51m 51m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Created Created container with docker id 50ce7cfacf2b; Security:[seccomp=unconfined] 51m 51m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Started Started container with docker id 50ce7cfacf2b 50m 50m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Killing Killing container with docker id 50ce7cfacf2b: pod "tectonic-console-1465042875-hlzcd_tectonic-system(670541e7-1303-11e7-8d69-000c29932415)" container "tectonic-console" is unhealthy, it will be killed and re-created. 50m 50m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Created Created container with docker id 1969e60432b3; Security:[seccomp=unconfined] 50m 50m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Started Started container with docker id 1969e60432b3 50m 50m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Created Created container with docker id 2eb5ea6bd436; Security:[seccomp=unconfined] 50m 50m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Started Started container with docker id 2eb5ea6bd436 50m 50m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Killing Killing container with docker id 1969e60432b3: pod "tectonic-console-1465042875-hlzcd_tectonic-system(670541e7-1303-11e7-8d69-000c29932415)" container "tectonic-console" is unhealthy, it will be killed and re-created. 49m 49m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Created Created container with docker id ec09411fe566; Security:[seccomp=unconfined] 49m 49m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Started Started container with docker id ec09411fe566 49m 49m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Killing Killing container with docker id 2eb5ea6bd436: pod "tectonic-console-1465042875-hlzcd_tectonic-system(670541e7-1303-11e7-8d69-000c29932415)" container "tectonic-console" is unhealthy, it will be killed and re-created. 49m 49m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Created Created container with docker id 02282941c7e0; Security:[seccomp=unconfined] 49m 49m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Started Started container with docker id 02282941c7e0 49m 49m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Killing Killing container with docker id ec09411fe566: pod "tectonic-console-1465042875-hlzcd_tectonic-system(670541e7-1303-11e7-8d69-000c29932415)" container "tectonic-console" is unhealthy, it will be killed and re-created. 48m 48m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Killing Killing container with docker id 02282941c7e0: pod "tectonic-console-1465042875-hlzcd_tectonic-system(670541e7-1303-11e7-8d69-000c29932415)" container "tectonic-console" is unhealthy, it will be killed and re-created. 
48m 48m 4 {kubelet bv-vm-01} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "tectonic-console" with CrashLoopBackOff: "Back-off 40s restarting failed container=tectonic-console pod=tectonic-console-1465042875-hlzcd_tectonic-system(670541e7-1303-11e7-8d69-000c29932415)"

48m 48m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Created Created container with docker id 5c5c5ed2bc65; Security:[seccomp=unconfined] 48m 48m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Started Started container with docker id 5c5c5ed2bc65 47m 47m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Killing Killing container with docker id 5c5c5ed2bc65: pod "tectonic-console-1465042875-hlzcd_tectonic-system(670541e7-1303-11e7-8d69-000c29932415)" container "tectonic-console" is unhealthy, it will be killed and re-created. 47m 47m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Created Created container with docker id c80052ffa840; Security:[seccomp=unconfined] 47m 47m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Started Started container with docker id c80052ffa840 47m 47m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Killing Killing container with docker id c80052ffa840: pod "tectonic-console-1465042875-hlzcd_tectonic-system(670541e7-1303-11e7-8d69-000c29932415)" container "tectonic-console" is unhealthy, it will be killed and re-created. 47m 44m 13 {kubelet bv-vm-01} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "tectonic-console" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=tectonic-console pod=tectonic-console-1465042875-hlzcd_tectonic-system(670541e7-1303-11e7-8d69-000c29932415)"

44m 44m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Started Started container with docker id ba9692a0d509 44m 44m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Created Created container with docker id ba9692a0d509; Security:[seccomp=unconfined] 43m 43m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Killing Killing container with docker id ba9692a0d509: pod "tectonic-console-1465042875-hlzcd_tectonic-system(670541e7-1303-11e7-8d69-000c29932415)" container "tectonic-console" is unhealthy, it will be killed and re-created. 38m 38m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Created Created container with docker id fc44e865a78c; Security:[seccomp=unconfined] 38m 38m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Started Started container with docker id fc44e865a78c 38m 38m 1 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Killing Killing container with docker id fc44e865a78c: pod "tectonic-console-1465042875-hlzcd_tectonic-system(670541e7-1303-11e7-8d69-000c29932415)" container "tectonic-console" is unhealthy, it will be killed and re-created. 50m 1m 21 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Pulled Container image "quay.io/coreos/tectonic-console:v1.1.1" already present on machine 38m 1m 13 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Created (events with common reason combined) 38m 1m 13 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Started (events with common reason combined) 51m 44s 24 {kubelet bv-vm-01} spec.containers{tectonic-console} Warning Unhealthy Liveness probe failed: Get http://10.2.1.6:80/health: dial tcp 10.2.1.6:80: getsockopt: connection refused 37m 44s 13 {kubelet bv-vm-01} spec.containers{tectonic-console} Normal Killing (events with common reason combined) 43m 7s 172 {kubelet bv-vm-01} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "tectonic-console" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=tectonic-console pod=tectonic-console-1465042875-hlzcd_tectonic-system(670541e7-1303-11e7-8d69-000c29932415)"

48m 7s 189 {kubelet bv-vm-01} spec.containers{tectonic-console} Warning BackOff Back-off restarting failed docker container

Name: tectonic-console-1465042875-r8r8g Namespace: tectonic-system Node: bv-vm-00/172.17.0.62 Start Time: Mon, 27 Mar 2017 15:38:19 +0000 Labels: app=tectonic-console component=ui pod-template-hash=1465042875 Status: Running IP: 10.2.0.6 Controllers: ReplicaSet/tectonic-console-1465042875 Containers: tectonic-console: Container ID: docker://113ed91b6684bf508fa07e532ebfe16cc8dc3b46137b2c24bd733e78203cb147 Image: quay.io/coreos/tectonic-console:v1.1.1 Image ID: docker-pullable://quay.io/coreos/tectonic-console@sha256:7fd476e9a28f7b8f267b8fea9636125e3dfcc82a8a93ce586b1d63e903d77ccb Port: 80/TCP Command: /opt/bridge/bin/bridge Limits: cpu: 100m memory: 50Mi Requests: cpu: 100m memory: 50Mi State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 2 Started: Mon, 27 Mar 2017 16:26:49 +0000 Finished: Mon, 27 Mar 2017 16:27:19 +0000 Ready: False Restart Count: 3 Liveness: http-get http://:80/health delay=30s timeout=1s period=10s #success=1 #failure=3 Volume Mounts: /etc/ssl/certs from ssl-certs-host (ro) /etc/tectonic-ca-cert-secret from tectonic-ca-cert-secret (ro) /etc/tectonic-identity-grpc-client-secret from tectonic-identity-grpc-client-secret (ro) /etc/tectonic/licenses from tectonic-license (ro) /usr/share/ca-certificates from ca-certs-host (ro) /var/run/secrets/kubernetes.io/serviceaccount from default-token-z8c4s (ro) Environment Variables: BRIDGE_K8S_MODE: in-cluster BRIDGE_K8S_AUTH: oidc BRIDGE_K8S_PUBLIC_ENDPOINT: https://bv-vm-00:443 BRIDGE_LISTEN: http://0.0.0.0:80 BRIDGE_BASE_ADDRESS: https://bv-vm-01 BRIDGE_BASE_PATH: / BRIDGE_PUBLIC_DIR: /opt/bridge/static BRIDGE_USER_AUTH: oidc BRIDGE_USER_AUTH_OIDC_ISSUER_URL: https://bv-vm-01/identity BRIDGE_USER_AUTH_OIDC_CLIENT_ID: tectonic-console BRIDGE_USER_AUTH_OIDC_CLIENT_SECRET: U64m3nH-AxcuN8WzhE6DHg BRIDGE_KUBECTL_CLIENT_ID: tectonic-kubectl BRIDGE_KUBECTL_CLIENT_SECRET: 7ldkH5_7aWqZ6DmE8mp75Q BRIDGE_TECTONIC_VERSION: 1.5.5-tectonic.2 BRIDGE_CA_FILE: /etc/tectonic-ca-cert-secret/ca-cert BRIDGE_LICENSE_FILE: /etc/tectonic/licenses/license Conditions: Type Status Initialized True Ready False PodScheduled True Volumes: tectonic-ca-cert-secret: Type: Secret (a volume populated by a Secret) SecretName: tectonic-ca-cert-secret ssl-certs-host: Type: HostPath (bare host directory volume) Path: /etc/ssl/certs ca-certs-host: Type: HostPath (bare host directory volume) Path: /usr/share/ca-certificates tectonic-license: Type: Secret (a volume populated by a Secret) SecretName: tectonic-license tectonic-identity-grpc-client-secret: Type: Secret (a volume populated by a Secret) SecretName: tectonic-identity-grpc-client-secret default-token-z8c4s: Type: Secret (a volume populated by a Secret) SecretName: default-token-z8c4s QoS Class: Guaranteed Tolerations: Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message


52m 52m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Pulling pulling image "quay.io/coreos/tectonic-console:v1.1.1" 52m 52m 1 {default-scheduler } Normal Scheduled Successfully assigned tectonic-console-1465042875-r8r8g to bv-vm-00 52m 52m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Pulled Successfully pulled image "quay.io/coreos/tectonic-console:v1.1.1" 52m 52m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Created Created container with docker id 26053d361bd1; Security:[seccomp=unconfined] 52m 52m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Started Started container with docker id 26053d361bd1 51m 51m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Killing Killing container with docker id 26053d361bd1: pod "tectonic-console-1465042875-r8r8g_tectonic-system(67053997-1303-11e7-8d69-000c29932415)" container "tectonic-console" is unhealthy, it will be killed and re-created. 51m 51m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Created Created container with docker id d6fd37eb49d2; Security:[seccomp=unconfined] 51m 51m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Started Started container with docker id d6fd37eb49d2 50m 50m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Created Created container with docker id 26a980fe4ec8; Security:[seccomp=unconfined] 50m 50m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Started Started container with docker id 26a980fe4ec8 50m 50m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Killing Killing container with docker id d6fd37eb49d2: pod "tectonic-console-1465042875-r8r8g_tectonic-system(67053997-1303-11e7-8d69-000c29932415)" container "tectonic-console" is unhealthy, it will be killed and re-created. 50m 50m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Created Created container with docker id e2b45b0d626d; Security:[seccomp=unconfined] 50m 50m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Started Started container with docker id e2b45b0d626d 50m 50m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Killing Killing container with docker id 26a980fe4ec8: pod "tectonic-console-1465042875-r8r8g_tectonic-system(67053997-1303-11e7-8d69-000c29932415)" container "tectonic-console" is unhealthy, it will be killed and re-created. 49m 49m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Created Created container with docker id 159fcff2fa7c; Security:[seccomp=unconfined] 49m 49m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Started Started container with docker id 159fcff2fa7c 49m 49m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Killing Killing container with docker id e2b45b0d626d: pod "tectonic-console-1465042875-r8r8g_tectonic-system(67053997-1303-11e7-8d69-000c29932415)" container "tectonic-console" is unhealthy, it will be killed and re-created. 49m 49m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Killing Killing container with docker id 159fcff2fa7c: pod "tectonic-console-1465042875-r8r8g_tectonic-system(67053997-1303-11e7-8d69-000c29932415)" container "tectonic-console" is unhealthy, it will be killed and re-created. 
49m 48m 4 {kubelet bv-vm-00} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "tectonic-console" with CrashLoopBackOff: "Back-off 40s restarting failed container=tectonic-console pod=tectonic-console-1465042875-r8r8g_tectonic-system(67053997-1303-11e7-8d69-000c29932415)"

48m 48m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Created Created container with docker id cace7cdd2e31; Security:[seccomp=unconfined] 48m 48m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Started Started container with docker id cace7cdd2e31 48m 48m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Killing Killing container with docker id cace7cdd2e31: pod "tectonic-console-1465042875-r8r8g_tectonic-system(67053997-1303-11e7-8d69-000c29932415)" container "tectonic-console" is unhealthy, it will be killed and re-created. 48m 48m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Created Created container with docker id dd05115c7e3b; Security:[seccomp=unconfined] 48m 48m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Started Started container with docker id dd05115c7e3b 47m 47m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Killing Killing container with docker id dd05115c7e3b: pod "tectonic-console-1465042875-r8r8g_tectonic-system(67053997-1303-11e7-8d69-000c29932415)" container "tectonic-console" is unhealthy, it will be killed and re-created. 47m 45m 14 {kubelet bv-vm-00} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "tectonic-console" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=tectonic-console pod=tectonic-console-1465042875-r8r8g_tectonic-system(67053997-1303-11e7-8d69-000c29932415)"

44m 44m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Started Started container with docker id d7cd523f2408 44m 44m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Created Created container with docker id d7cd523f2408; Security:[seccomp=unconfined] 44m 44m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Killing Killing container with docker id d7cd523f2408: pod "tectonic-console-1465042875-r8r8g_tectonic-system(67053997-1303-11e7-8d69-000c29932415)" container "tectonic-console" is unhealthy, it will be killed and re-created. 39m 39m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Created Created container with docker id 32e974866ac8; Security:[seccomp=unconfined] 39m 39m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Started Started container with docker id 32e974866ac8 38m 38m 1 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Killing Killing container with docker id 32e974866ac8: pod "tectonic-console-1465042875-r8r8g_tectonic-system(67053997-1303-11e7-8d69-000c29932415)" container "tectonic-console" is unhealthy, it will be killed and re-created. 51m 4m 22 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Pulled Container image "quay.io/coreos/tectonic-console:v1.1.1" already present on machine 38m 4m 14 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Created (events with common reason combined) 38m 4m 14 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Started (events with common reason combined) 51m 3m 24 {kubelet bv-vm-00} spec.containers{tectonic-console} Warning Unhealthy Liveness probe failed: Get http://10.2.0.6:80/health: dial tcp 10.2.0.6:80: getsockopt: connection refused 37m 3m 13 {kubelet bv-vm-00} spec.containers{tectonic-console} Normal Killing (events with common reason combined) 44m 10s 172 {kubelet bv-vm-00} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "tectonic-console" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=tectonic-console pod=tectonic-console-1465042875-r8r8g_tectonic-system(67053997-1303-11e7-8d69-000c29932415)"

49m 10s 190 {kubelet bv-vm-00} spec.containers{tectonic-console} Warning BackOff Back-off restarting failed docker container `
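
Both replicas show the same pattern: the container starts, the liveness probe on :80/health never succeeds, and the kubelet kills and recreates it after roughly 30 seconds. Instead of going through docker on each node, the logs of the most recently crashed instance can be pulled through the API (a minimal sketch; the pod names are the ones from the describe output above):

```sh
# Logs of the previous (crashed) container instance for each console pod.
kubectl logs tectonic-console-1465042875-hlzcd -n tectonic-system --previous
kubectl logs tectonic-console-1465042875-r8r8g -n tectonic-system --previous
```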

Other Information

Infrastructure is on ESXi :

- 1 x VM tectonic - coreos-tectonic (tectonic + matchbox + PXE)
- 1 x VM controller - bv-vm-00
- 1 x VM worker - bv-vm-01

tectonic config

{ "dirty": [ "nodetable:Worker:0:name", "matchboxCA", "nodetable:Controller:0:name", "clusterName", "rootDomain", "tectonicLicense", "platformType", "matchboxClientKey", "matchboxHTTP", "nodetable:Controller:0:mac", "pullSecret", "matchboxClientCert", "initial-key", "nodetable:Worker:0:mac", "matchboxRPC", "tectonicOperator" ], "clusterConfig": { "awsSessionToken": "", "dryRun": false, "awsVpcCIDR": "10.0.0.0/16", "awsWorkerSubnetIds": {}, "controllerDomain": "bv-vm-00", "awsDomain": "", "awsControllerSubnetIds": {}, "matchboxCA": "xxxxxxxxxxx\n", "aws_workers-numberOfInstances": 1, "error_async": {}, "clusterName": "dev-env", "entitlements": { "license": { "schemaVersion": "v2", "version": "7", "accountID": "ACC-4BEDC698-C27E-4D61-83F6-476AD558", "accountSecret": "8a8ad535-44ce-4a43-89ad-4a80e0362349", "creationDate": "2017-02-25T22:49:11.08423Z", "expirationDate": "2018-02-25T22:49:11.08423Z", "subscriptions": { "SUB-8B0D1491-155C-4661-9827-520DDB00": { "trialOnly": false, "planID": "PRP-FA0EE7BC-FE29-4A6C-9E9C-7BB9404D", "entitlements": { "cost": 1, "software.tectonic-2016-12": 1, "software.tectonic-2016-12.free-node-count": 10 }, "serviceEnd": "2018-01-24T19:28:32Z", "productID": "PRO-8ED4A215-2F6A-4D17-8FBC-C6B0AD50", "serviceStart": "2017-01-24T19:28:32Z", "inTrial": false, "publicProductName": "CoreOS Tectonic", "planName": "tectonic-2016-12-free-v1", "duration": 1, "durationPeriod": "years", "productName": "tectonic-2016-12", "publicPlanName": "CoreOS Tectonic Free" } } }, "nodeCount": 10, "vCPUsCount": null, "bypass": false }, "aws_controllers-numberOfInstances": 1, "aws_ssh": "", "tectonicLicense": "xxxxxxxxxxxxxxxxxx", "externalETCDEnabled": false, "externalETCDClient": "", "aws_controllers-instanceType": "t2.medium", "awsVpcId": "", "osToUse": "1298.6.0", "error": { "awsSessionToken": "This field is required. You must provide a value.", "externalETCDClient": "Invalid format. You must use <host>:<port> format.", "awsSecretAccessKey": "This field is required. You must provide a value.", "awsRegion": "This field is required. You must provide a value.", "awsAccessKeyId": "This field is required. You must provide a value." 
}, "updater": { "server": "", "channel": "", "appID": "" }, "channelToUse": "stable", "aws_etcds-storageType": "gp2", "aws_workers-instanceType": "t2.medium", "serviceCIDR": "10.3.0.0/24", "aws_workers-storageIOPS": 1000, "clusterSubdomain": "", "adminPassword": "xxxxxxx", "awsHostedZoneId": "", "updater_enabled": true, "aws_etcds-storageSizeInGiB": 30, "aws_controllers-storageIOPS": 1000, "adminEmail": "xxxxxxxx", "platformType": "bare-metal", "awsSecretAccessKey": "", "inFly": { "awsSessionToken": false, "aws_etcds": false, "aws_workers-numberOfInstances": false, "aws_controllers-numberOfInstances": false, "tectonicLicense": false, "externalETCDEnabled": false, "externalETCDClient": false, "aws_controllers-instanceType": false, "AWSCreds": false, "aws_etcds-storageType": false, "aws_workers-instanceType": false, "aws_workers-storageIOPS": false, "aws_etcds-storageSizeInGiB": false, "aws_controllers-storageIOPS": false, "awsSecretAccessKey": false, "awsRegion": false, "aws_etcds-numberOfInstances": false, "aws_workers": false, "SelectRegionForm": false, "sts_enabled": false, "EtcdForm": false, "pullSecret": false, "aws_controllers": false, "aws_controllers-storageSizeInGiB": false, "aws_etcds-storageIOPS": false, "awsAccessKeyId": false, "aws_workers-storageSizeInGiB": false, "DefineNodesForm": false, "aws_etcds-instanceType": false, "aws_workers-storageType": false, "aws_controllers-storageType": false, "licensing": false }, "matchboxClientKey": "xxxxxxxxxxx\n", "matchboxHTTP": "192.168.xxx.31:8080", "aws_kms": "", "awsCreateVpc": true, "awsRegion": "", "masters": [ { "mac": "00:0C:29:93:24:15", "name": "bv-vm-00" } ], "ignore": { "aws_controllers-storageIOPS": true, "aws_workers-storageIOPS": true, "aws_etcds": false, "aws_etcds-storageIOPS": true, "awsSessionToken": true, "externalETCDClient": true }, "aws_etcds-numberOfInstances": 1, "sts_enabled": false, "workers": [ { "mac": "00:0c:29:84:15:c6", "name": "bv-vm-01" } ], "pullSecret": "{\n \"auths\": {\n \"quay.io\": {\n \"auth\": \"xxxxxxxxxxxxxxxxxx\",\n \"email\": \"\"\n }\n }\n}", "matchboxClientCert": "xxxxxxxxxxxxx\n", "awsTags": [ {} ], "aws_controllers-storageSizeInGiB": 30, "sshAuthorizedKeys": [ { "id": "initial-key", "key": "ssh-rsa xxxxxxxxxxxxxxxxxxxxxx" } ], "aws_etcds-storageIOPS": 1000, "caType": "self-signed", "awsAccessKeyId": "", "aws_workers-storageSizeInGiB": 30, "caCertificate": "", "podCIDR": "10.2.0.0/16", "caPrivateKey": "", "workersCount": 1, "aws_etcds-instanceType": "t2.medium", "aws_workers-storageType": "gp2", "awsWorkerSubnets": {}, "bootCfgInfly": false, "internalCluster": false, "aws_controllers-storageType": "gp2", "awsControllerSubnets": {}, "mastersCount": 1, "tectonicDomain": "bv-vm-01", "matchboxRPC": "192.168.xxx.31:8081" }, "sequence": 0 }

mfburnett commented 7 years ago

Hey @sslevil, thanks for filing this issue. Please note that we do not officially support VMware at this time, so we cannot guarantee your experience. However, we are planning to add alpha support soon - you can track our progress here: https://github.com/coreos/tectonic-installer/blob/master/ROADMAP.md.

Meanwhile, I've pulled in @ivancherepov to help you with this issue, and he'll get back to you in a few days.

kbrwn commented 7 years ago

@sslevil This issue might be related to the kube-dns add-on, since the console logs show DNS resolution errors:

2017/03/27 16:26:49 http: Provider config sync failed, retrying in 1s: Get https://bv-vm-01/identity/.well-known/openid-configuration: dial tcp: lookup bv-vm-01 on 10.3.0.10:53: server misbehaving
2017/03/27 16:26:51 http: Provider config sync still failing, retrying in 2s: Get https://bv-vm-01/identity/.well-known/openid-configuration: dial tcp: lookup bv-vm-01 on 10.3.0.10:53: server misbehaving

But there are other errors that suggest a more general cluster networking issue.

Check the status of the pods in the kube-system namespace.
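
For example, something along these lines would show whether kube-dns itself is healthy and what it logs while the console is failing to resolve bv-vm-01 (a rough sketch; the pod name is the one from the bootkube log above, and the container names are the usual kube-dns ones, which may differ in your deployment):

```sh
# Overall health of the system pods, kube-dns in particular.
kubectl get pods -n kube-system -o wide

# Find the kube-dns pod, then inspect its containers for resolution errors.
kubectl get pods -n kube-system | grep kube-dns
kubectl logs -n kube-system kube-dns-4101612645-qzxn5 -c kubedns
kubectl logs -n kube-system kube-dns-4101612645-qzxn5 -c dnsmasq
```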

robszumski commented 7 years ago

We just shipped a new Tectonic release with Kubernetes 1.6. Try it out and re-open this issue if needed.