oomichi / try-kubernetes


1 e2e test failure of "[sig-node] Mount propagation [It] should propagate mounts to the host" #44

Closed — oomichi closed this issue 5 years ago

oomichi commented 6 years ago

No fix found; suspending the investigation.

~ Failure [36.791 seconds]
[k8s.io] [sig-node] Mount propagation
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:679
  should propagate mounts to the host [It]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:80

  failed to execute command in pod master, container cntr: unable to upgrade connection: container not found ("cntr")
  Expected error:
      <*errors.errorString | 0xc421af3350>: {
          s: "unable to upgrade connection: container not found (\"cntr\")",
      }
      unable to upgrade connection: container not found ("cntr")
  not to have occurred

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:104
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSAug 20 22:37:23.707: INFO: Running AfterSuite actions on all node
Aug 20 22:37:23.708: INFO: Running AfterSuite actions on node 1

Summarizing 1 Failure:

[Fail] [k8s.io] [sig-node] Mount propagation [It] should propagate mounts to the host
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:104

Ran 1 of 999 Specs in 36.880 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 998 Skipped --- FAIL: TestE2E (36.91s)
FAIL
oomichi commented 6 years ago

Failing test code:

100 // ExecCommandInContainer executes a command in the specified container.
101 func (f *Framework) ExecCommandInContainer(podName, containerName string, cmd ...string) string {
102         stdout, stderr, err := f.ExecCommandInContainerWithFullOutput(podName, containerName, cmd...)
103         Logf("Exec stderr: %q", stderr)
104         Expect(err).NotTo(HaveOccurred(),
105                 "failed to execute command in pod %v, container %v: %v",
106                 podName, containerName, err)
107         return stdout
108 }

Compared with the log:

  failed to execute command in pod master, container cntr: unable to upgrade connection: container not found ("cntr")
  Expected error:
      <*errors.errorString | 0xc421af3350>: {
          s: "unable to upgrade connection: container not found (\"cntr\")",
      }
      unable to upgrade connection: container not found ("cntr")
  not to have occurred
oomichi commented 6 years ago

The test expected a container named cntr to exist inside the pod named master, but the container did not exist, so the test failed.

oomichi commented 6 years ago

Log

Will run 1 of 999 specs

Aug 20 22:36:46.828: INFO: >>> kubeConfig: /home/ubuntu/admin.conf
Aug 20 22:36:46.830: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 20 22:36:46.863: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 20 22:36:46.886: INFO: 11 / 11 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 20 22:36:46.887: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Aug 20 22:36:46.890: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Aug 20 22:36:46.890: INFO: Dumping network health container logs from all nodes...
Aug 20 22:36:46.894: INFO: e2e test version: v1.11.1-2+9cefc5e2ae224a
Aug 20 22:36:46.895: INFO: kube-apiserver version: v1.11.1
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Mount propagation
  should propagate mounts to the host
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:80
[BeforeEach] [k8s.io] [sig-node] Mount propagation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug 20 22:36:46.908: INFO: >>> kubeConfig: /home/ubuntu/admin.conf
STEP: Building a namespace api object
Aug 20 22:36:46.985: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should propagate mounts to the host
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:80
Aug 20 22:37:09.174: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:e2e-tests-mount-propagation-8cr6z PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 20 22:37:09.174: INFO: >>> kubeConfig: /home/ubuntu/admin.conf
Aug 20 22:37:09.224: INFO: Exec stderr: ""
Aug 20 22:37:09.225: INFO: Getting external IP address for k8s-master
Aug 20 22:37:09.226: INFO: SSH "sudo rm -rf \"/var/lib/kubelet/e2e-tests-mount-propagation-8cr6z\"" on k8s-master(192.168.1.108:22)
Aug 20 22:37:09.226: INFO: ssh @192.168.1.108:22: command:   sudo rm -rf "/var/lib/kubelet/e2e-tests-mount-propagation-8cr6z"
Aug 20 22:37:09.226: INFO: ssh @192.168.1.108:22: stdout:    ""
Aug 20 22:37:09.227: INFO: ssh @192.168.1.108:22: stderr:    ""
Aug 20 22:37:09.227: INFO: ssh @192.168.1.108:22: exit code: 0
[AfterEach] [k8s.io] [sig-node] Mount propagation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-mount-propagation-8cr6z".
STEP: Found 17 events.
Aug 20 22:37:09.234: INFO: At 2018-08-20 22:36:49 +0000 UTC - event for master: {kubelet k8s-master} Pulling: pulling image "busybox"
Aug 20 22:37:09.234: INFO: At 2018-08-20 22:36:50 +0000 UTC - event for master: {kubelet k8s-master} Created: Created container
Aug 20 22:37:09.234: INFO: At 2018-08-20 22:36:50 +0000 UTC - event for master: {kubelet k8s-master} Failed: Error: failed to start container "cntr": Error response from daemon: linux mounts: Path /var/lib/kubelet/e2e-tests-mount-propagation-8cr6z is mounted on / but it is not a shared mount.
Aug 20 22:37:09.235: INFO: At 2018-08-20 22:36:50 +0000 UTC - event for master: {kubelet k8s-master} Pulled: Successfully pulled image "busybox"
Aug 20 22:37:09.235: INFO: At 2018-08-20 22:36:55 +0000 UTC - event for slave: {kubelet k8s-master} Pulling: pulling image "busybox"
Aug 20 22:37:09.235: INFO: At 2018-08-20 22:36:56 +0000 UTC - event for slave: {kubelet k8s-master} Pulled: Successfully pulled image "busybox"
Aug 20 22:37:09.236: INFO: At 2018-08-20 22:36:56 +0000 UTC - event for slave: {kubelet k8s-master} Created: Created container
Aug 20 22:37:09.236: INFO: At 2018-08-20 22:36:56 +0000 UTC - event for slave: {kubelet k8s-master} Started: Started container
Aug 20 22:37:09.236: INFO: At 2018-08-20 22:37:01 +0000 UTC - event for private: {kubelet k8s-master} Pulling: pulling image "busybox"
Aug 20 22:37:09.236: INFO: At 2018-08-20 22:37:03 +0000 UTC - event for master: {kubelet k8s-master} BackOff: Back-off restarting failed container
Aug 20 22:37:09.237: INFO: At 2018-08-20 22:37:03 +0000 UTC - event for private: {kubelet k8s-master} Pulled: Successfully pulled image "busybox"
Aug 20 22:37:09.237: INFO: At 2018-08-20 22:37:03 +0000 UTC - event for private: {kubelet k8s-master} Created: Created container
Aug 20 22:37:09.237: INFO: At 2018-08-20 22:37:03 +0000 UTC - event for private: {kubelet k8s-master} Started: Started container
Aug 20 22:37:09.238: INFO: At 2018-08-20 22:37:07 +0000 UTC - event for default: {kubelet k8s-master} Pulling: pulling image "busybox"
Aug 20 22:37:09.238: INFO: At 2018-08-20 22:37:08 +0000 UTC - event for default: {kubelet k8s-master} Started: Started container
Aug 20 22:37:09.238: INFO: At 2018-08-20 22:37:08 +0000 UTC - event for default: {kubelet k8s-master} Created: Created container
Aug 20 22:37:09.238: INFO: At 2018-08-20 22:37:08 +0000 UTC - event for default: {kubelet k8s-master} Pulled: Successfully pulled image "busybox"
Aug 20 22:37:09.248: INFO: POD                                             NODE        PHASE    GRACE  CONDITIONS
Aug 20 22:37:09.248: INFO: standalone-cinder-provisioner-7d6594d789-9mtb9  k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:04 +0000 UTC  }]
Aug 20 22:37:09.248: INFO: default                                         k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-20 22:37:05 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-20 22:37:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-20 22:37:05 +0000 UTC  }]
Aug 20 22:37:09.248: INFO: master                                          k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-20 22:36:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-08-20 22:36:46 +0000 UTC ContainersNotReady containers with unready status: [cntr]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [cntr]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-20 22:36:46 +0000 UTC  }]
Aug 20 22:37:09.248: INFO: private                                         k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-20 22:36:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-20 22:37:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-20 22:36:59 +0000 UTC  }]
Aug 20 22:37:09.248: INFO: slave                                           k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-20 22:36:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-20 22:36:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-20 22:36:53 +0000 UTC  }]
Aug 20 22:37:09.248: INFO: coredns-78fcdf6894-xx76v                        k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 09:12:19 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 09:12:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 09:12:19 +0000 UTC  }]
Aug 20 22:37:09.248: INFO: coredns-78fcdf6894-zmpph                        k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 06:49:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 06:49:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 06:49:24 +0000 UTC  }]
Aug 20 22:37:09.248: INFO: etcd-k8s-master                                 k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  }]
Aug 20 22:37:09.248: INFO: kube-apiserver-k8s-master                       k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 01:50:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  }]
Aug 20 22:37:09.248: INFO: kube-controller-manager-k8s-master              k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 22:05:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 22:05:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 22:05:28 +0000 UTC  }]
Aug 20 22:37:09.248: INFO: kube-flannel-ds-7df6r                           k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:12:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-17 17:12:31 +0000 UTC  }]
Aug 20 22:37:09.249: INFO: kube-flannel-ds-tllws                           k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 09:12:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 09:12:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-17 09:12:53 +0000 UTC  }]
Aug 20 22:37:09.249: INFO: kube-proxy-hxp7z                                k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:17:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:51 +0000 UTC  }]
Aug 20 22:37:09.249: INFO: kube-proxy-zwrl4                                k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:37 +0000 UTC  }]
Aug 20 22:37:09.249: INFO: kube-scheduler-k8s-master                       k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 01:50:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  }]
Aug 20 22:37:09.249: INFO: metrics-server-86bd9d7667-twb2r                 k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:45:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:45:39 +0000 UTC  }]
Aug 20 22:37:09.249: INFO:
Aug 20 22:37:09.253: INFO:
Logging node info for node k8s-master
Aug 20 22:37:09.257: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-master,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-master,UID:94f19db7-89e3-11e8-b234-fa163e420595,ResourceVersion:3860703,Generation:0,CreationTimestamp:2018-07-17 17:05:18 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: k8s-master,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"06:0e:73:28:c3:b1"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 192.168.1.108,kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{41567956992 0} {<nil>} 40593708Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4143394816 0} {<nil>} 4046284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37411161231 0} {<nil>} 37411161231 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4038537216 0} {<nil>} 3943884Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-08-20 22:37:02 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-08-20 22:37:02 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientMemory 
kubelet has sufficient memory available} {DiskPressure False 2018-08-20 22:37:02 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-08-20 22:37:02 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-08-20 22:37:02 +0000 UTC 2018-07-31 23:04:27 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 192.168.1.108} {Hostname k8s-master}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1db2c06c39a54cd3a93a4e0a44823fd6,SystemUUID:1DB2C06C-39A5-4CD3-A93A-4E0A44823FD6,BootID:d2b66fba-cf4e-4205-b596-3ffb4e579c16,KernelVersion:4.4.0-130-generic,OSImage:Ubuntu 16.04.5 LTS,ContainerRuntimeVersion:docker://1.11.2,KubeletVersion:v1.11.1,KubeProxyVersion:v1.11.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[golang:1.10] 793901893} {[gcr.io/google-samples/gb-frontend-amd64:v5] 373099368} {[k8s.gcr.io/etcd-amd64:3.2.18] 218904307} {[k8s.gcr.io/kube-apiserver-amd64:v1.11.1] 186675825} {[k8s.gcr.io/kube-apiserver-amd64:v1.11.0] 186617744} {[k8s.gcr.io/kube-controller-manager-amd64:v1.11.1] 155252555} {[k8s.gcr.io/kube-controller-manager-amd64:v1.11.0] 155203118} {[k8s.gcr.io/nginx-slim:0.8] 110487599} {[nginx:latest] 108975101} {[k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google-samples/gb-redisslave-amd64:v2] 98945667} {[k8s.gcr.io/kube-proxy-amd64:v1.11.1] 97776424} {[k8s.gcr.io/kube-proxy-amd64:v1.11.0] 97772373} {[k8s.gcr.io/echoserver:1.10] 95361986} {[k8s.gcr.io/nginx-slim-amd64:0.21] 95339966} {[k8s.gcr.io/kube-scheduler-amd64:v1.11.1] 56781436} {[k8s.gcr.io/kube-scheduler-amd64:v1.11.0] 56757023} {[quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/kubernetes-e2e-test-images/resource-consumer-amd64:1.3 gcr.io/kubernetes-e2e-test-images/resource-consumer:1.3] 49707607} {[quay.io/k8scsi/csi-attacher:v0.2.0] 
45644524} {[k8s.gcr.io/coredns:1.1.3] 45587362} {[quay.io/k8scsi/csi-provisioner:v0.2.1] 45078229} {[gcr.io/google_containers/metrics-server-amd64:v0.2.1] 42541759} {[quay.io/k8scsi/driver-registrar:v0.2.0] 42385441} {[k8scloudprovider/cinder-provisioner:latest] 29292916} {[quay.io/k8scsi/hostpathplugin:v0.2.0] 17287699} {[gcr.io/kubernetes-e2e-test-images/net-amd64:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6713741} {[gcr.io/kubernetes-e2e-test-images/redis-amd64:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/resource-consumer/controller-amd64:1.0] 5902947} {[gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.0] 5470001} {[gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten-amd64:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0] 1563521} {[busybox:latest] 1162769} {[k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Aug 20 22:37:09.257: INFO:
Logging kubelet events for node k8s-master
Aug 20 22:37:09.261: INFO:
Logging pods the kubelet thinks is on node k8s-master
Aug 20 22:37:09.274: INFO: metrics-server-86bd9d7667-twb2r started at 2018-08-03 08:45:39 +0000 UTC (0+1 container statuses recorded)
Aug 20 22:37:09.274: INFO:      Container metrics-server ready: true, restart count 1
Aug 20 22:37:09.274: INFO: private started at 2018-08-20 22:36:59 +0000 UTC (0+1 container statuses recorded)
Aug 20 22:37:09.274: INFO:      Container cntr ready: true, restart count 0
Aug 20 22:37:09.274: INFO: kube-controller-manager-k8s-master started at <nil> (0+0 container statuses recorded)
Aug 20 22:37:09.274: INFO: coredns-78fcdf6894-zmpph started at 2018-08-17 06:49:24 +0000 UTC (0+1 container statuses recorded)
Aug 20 22:37:09.274: INFO:      Container coredns ready: true, restart count 0
Aug 20 22:37:09.274: INFO: kube-scheduler-k8s-master started at <nil> (0+0 container statuses recorded)
Aug 20 22:37:09.274: INFO: kube-proxy-zwrl4 started at 2018-07-31 23:08:37 +0000 UTC (0+1 container statuses recorded)
Aug 20 22:37:09.274: INFO:      Container kube-proxy ready: true, restart count 6
Aug 20 22:37:09.274: INFO: kube-flannel-ds-7df6r started at 2018-07-17 17:12:31 +0000 UTC (1+1 container statuses recorded)
Aug 20 22:37:09.274: INFO:      Init container install-cni ready: true, restart count 6
Aug 20 22:37:09.274: INFO:      Container kube-flannel ready: true, restart count 6
Aug 20 22:37:09.274: INFO: standalone-cinder-provisioner-7d6594d789-9mtb9 started at 2018-08-11 12:39:04 +0000 UTC (0+1 container statuses recorded)
Aug 20 22:37:09.274: INFO:      Container standalone-cinder-provisioner ready: true, restart count 0
Aug 20 22:37:09.274: INFO: kube-apiserver-k8s-master started at <nil> (0+0 container statuses recorded)
Aug 20 22:37:09.275: INFO: slave started at 2018-08-20 22:36:53 +0000 UTC (0+1 container statuses recorded)
Aug 20 22:37:09.275: INFO:      Container cntr ready: true, restart count 0
Aug 20 22:37:09.275: INFO: etcd-k8s-master started at <nil> (0+0 container statuses recorded)
Aug 20 22:37:09.275: INFO: master started at 2018-08-20 22:36:46 +0000 UTC (0+1 container statuses recorded)
Aug 20 22:37:09.275: INFO:      Container cntr ready: false, restart count 1
Aug 20 22:37:09.275: INFO: default started at 2018-08-20 22:37:05 +0000 UTC (0+1 container statuses recorded)
Aug 20 22:37:09.275: INFO:      Container cntr ready: true, restart count 0
Aug 20 22:37:09.275: INFO: coredns-78fcdf6894-xx76v started at 2018-08-17 09:12:19 +0000 UTC (0+1 container statuses recorded)
Aug 20 22:37:09.275: INFO:      Container coredns ready: true, restart count 0
Aug 20 22:37:09.333: INFO:
Latency metrics for node k8s-master
Aug 20 22:37:09.333: INFO:
Logging node info for node k8s-node01
Aug 20 22:37:09.340: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-node01,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-node01,UID:980d8d67-9515-11e8-a804-fa163e420595,ResourceVersion:3860719,Generation:0,CreationTimestamp:2018-07-31 23:01:01 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/zone: ,kubernetes.io/hostname: k8s-node01,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"22:57:d2:53:57:f8"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 192.168.1.109,kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{41567956992 0} {<nil>} 40593708Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4143394816 0} {<nil>} 4046284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37411161231 0} {<nil>} 37411161231 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4038537216 0} {<nil>} 3943884Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-08-20 22:37:07 +0000 UTC 2018-08-10 00:17:13 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-08-20 22:37:07 +0000 UTC 2018-08-10 00:17:13 +0000 UTC 
KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-08-20 22:37:07 +0000 UTC 2018-08-10 00:17:13 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-08-20 22:37:07 +0000 UTC 2018-07-31 23:01:01 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-08-20 22:37:07 +0000 UTC 2018-08-10 00:17:23 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 192.168.1.109} {Hostname k8s-node01}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:817a385b9de241668e47cd87cda24f47,SystemUUID:817A385B-9DE2-4166-8E47-CD87CDA24F47,BootID:89bf8417-1b59-4778-b31e-dcda7893ef77,KernelVersion:4.4.0-130-generic,OSImage:Ubuntu 16.04.4 LTS,ContainerRuntimeVersion:docker://1.11.2,KubeletVersion:v1.11.1,KubeProxyVersion:v1.11.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[humblec/glusterdynamic-provisioner:v1.0] 373281573} {[gcr.io/google-samples/gb-frontend-amd64:v5] 373099368} {[quay.io/kubernetes_incubator/nfs-provisioner:v1.0.9] 332415371} {[gcr.io/kubernetes-e2e-test-images/volume-nfs:0.8] 247157334} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils-amd64:1.0] 195659796} {[k8s.gcr.io/resource_consumer:beta] 132805424} {[k8s.gcr.io/nginx-slim:0.8] 110487599} {[nginx:latest] 108975101} {[k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google-samples/gb-redisslave-amd64:v2] 98945667} {[k8s.gcr.io/kube-proxy-amd64:v1.11.1] 97776424} {[k8s.gcr.io/kube-proxy-amd64:v1.11.0] 97772373} {[k8s.gcr.io/echoserver:1.10] 95361986} {[k8s.gcr.io/nginx-slim-amd64:0.21] 95339966} {[quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/kubernetes-e2e-test-images/resource-consumer-amd64:1.3 gcr.io/kubernetes-e2e-test-images/resource-consumer:1.3] 49707607} {[quay.io/k8scsi/csi-attacher:v0.2.0] 45644524} {[k8s.gcr.io/coredns:1.1.3] 45587362} 
{[quay.io/k8scsi/csi-provisioner:v0.2.1] 45078229} {[gcr.io/google_containers/metrics-server-amd64:v0.2.1] 42541759} {[quay.io/k8scsi/driver-registrar:v0.2.0] 42385441} {[k8scloudprovider/cinder-provisioner:latest] 28582964} {[gcr.io/kubernetes-e2e-test-images/nettest-amd64:1.0] 27413498} {[quay.io/k8scsi/hostpathplugin:v0.2.0] 17287699} {[gcr.io/kubernetes-e2e-test-images/net-amd64:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/dnsutils-amd64:1.0] 9030162} {[gcr.io/kubernetes-e2e-test-images/hostexec-amd64:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6713741} {[gcr.io/kubernetes-e2e-test-images/redis-amd64:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/resource-consumer/controller-amd64:1.0] 5902947} {[gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.0] 5470001} {[gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten-amd64:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver-amd64:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter-amd64:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness-amd64:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/fakegitserver-amd64:1.0] 4608683} {[k8s.gcr.io/k8s-dns-dnsmasq-amd64:1.14.5] 4324973} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester-amd64:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/port-forward-tester-amd64:1.0] 1992230} {[gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user-amd64:1.0] 1450451} {[busybox:latest] 1162769} {[k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Aug 20 22:37:09.340: INFO:
Logging kubelet events for node k8s-node01
Aug 20 22:37:09.345: INFO:
Logging pods the kubelet thinks is on node k8s-node01
Aug 20 22:37:09.360: INFO: kube-proxy-hxp7z started at 2018-07-31 23:08:51 +0000 UTC (0+1 container statuses recorded)
Aug 20 22:37:09.360: INFO:      Container kube-proxy ready: true, restart count 2
Aug 20 22:37:09.360: INFO: kube-flannel-ds-tllws started at 2018-08-17 09:12:53 +0000 UTC (1+1 container statuses recorded)
Aug 20 22:37:09.360: INFO:      Init container install-cni ready: true, restart count 0
Aug 20 22:37:09.360: INFO:      Container kube-flannel ready: true, restart count 0
Aug 20 22:37:09.460: INFO:
Latency metrics for node k8s-node01
STEP: Dumping a list of prepulled images on each node...
Aug 20 22:37:09.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-mount-propagation-8cr6z" for this suite.
Aug 20 22:37:23.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 20 22:37:23.597: INFO: namespace: e2e-tests-mount-propagation-8cr6z, resource: bindings, ignored listing per whitelist
Aug 20 22:37:23.698: INFO: namespace e2e-tests-mount-propagation-8cr6z deletion completed in 14.212504551s

~ Failure [36.791 seconds]
[k8s.io] [sig-node] Mount propagation
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:679
  should propagate mounts to the host [It]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:80

  failed to execute command in pod master, container cntr: unable to upgrade connection: container not found ("cntr")
  Expected error:
      <*errors.errorString | 0xc421af3350>: {
          s: "unable to upgrade connection: container not found (\"cntr\")",
      }
      unable to upgrade connection: container not found ("cntr")
  not to have occurred

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:104
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSAug 20 22:37:23.707: INFO: Running AfterSuite actions on all node
Aug 20 22:37:23.708: INFO: Running AfterSuite actions on node 1
oomichi commented 6 years ago

An error was reported from linux mounts: "Path /var/lib/kubelet/e2e-tests-mount-propagation-8cr6z is mounted on / but it is not a shared mount."

Aug 20 22:37:09.234: INFO: At 2018-08-20 22:36:50 +0000 UTC - event for master: {kubelet k8s-master} Failed: Error: failed to start container "cntr": Error response from daemon: linux mounts: Path /var/lib/kubelet/e2e-tests-mount-propagation-8cr6z is mounted on / but it is not a shared mount.
oomichi commented 6 years ago

The error appears to have been triggered by the `test -d` in the loop below:

podNames := []string{master.Name, slave.Name, private.Name, defaultPropagation.Name}
for _, podName := range podNames {
        for _, dirName := range podNames {
                cmd := fmt.Sprintf("test -d /mnt/test/%s", dirName)
                f.ExecShellInPod(podName, cmd)
        }
}
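The loop above cross-checks that the directory created by each pod is visible under every other pod's mount. A minimal local sketch of that check logic (plain local directories stand in for the pod mounts; the pod names are from the test, the temp directory is a stand-in):

```shell
# Cross-check sketch: every pod's directory must be visible from every pod.
base=$(mktemp -d)
pods="master slave private default"
for pod in $pods; do
  mkdir -p "$base/$pod"     # each pod creates its own directory under the mount
done
for pod in $pods; do        # the test then checks all dirs from all pods
  for dir in $pods; do
    test -d "$base/$dir" || echo "missing: $dir (seen from $pod)"
  done
done
echo "all checks passed"
rm -rf "$base"
```

In the real test the inner `test -d` runs inside each pod via `ExecShellInPod`, so a failure indicates that mount propagation did not make the directory visible there.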

This calls:

func (f *Framework) ExecShellInPod(podName string, cmd string) string {
        return f.ExecCommandInPod(podName, "/bin/sh", "-c", cmd)
}

which in turn calls:

func (f *Framework) ExecCommandInPod(podName string, cmd ...string) string {
        pod, err := f.PodClient().Get(podName, metav1.GetOptions{})
        Expect(err).NotTo(HaveOccurred(), "failed to get pod")
        Expect(pod.Spec.Containers).NotTo(BeEmpty())
        return f.ExecCommandInContainer(podName, pod.Spec.Containers[0].Name, cmd...)
}

which then calls the function below with the name of the first container in pod.Spec.Containers (here, cntr), but that container was not found:

// ExecCommandInContainer executes a command in the specified container.
func (f *Framework) ExecCommandInContainer(podName, containerName string, cmd ...string) string {
        stdout, stderr, err := f.ExecCommandInContainerWithFullOutput(podName, containerName, cmd...)
        Logf("Exec stderr: %q", stderr)
        Expect(err).NotTo(HaveOccurred(),
                "failed to execute command in pod %v, container %v: %v",
                podName, containerName, err)
        return stdout
}
oomichi commented 6 years ago

Pod state at the time of failure. It shows that the container cntr never became Ready.

Aug 21 02:57:19.672: INFO: ExecCommandInPod:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{
  Name:master,GenerateName:,Namespace:e2e-tests-mount-propagation-6hpqr,SelfLink:/api/v1/namespaces/e2e-tests-mount-propagation-6hpqr/pods/master,UID:de0a2d37-a4ed-11e8-a146-fa163e420595,
  ResourceVersion:3881043,Generation:0,CreationTimestamp:2018-08-21 02:56:57 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{host {HostPathVolumeSource{Path:/var/lib/kubelet/e2e-tests-mount-propagation-6hpqr,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {default-token-v8v7d {nil nil nil nil nil &SecretVolumeSource{SecretName:default-token-v8v7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],
  Containers:[{cntr busybox [sh -c mkdir /mnt/test/master; sleep 3600] []  [] [] [] {map[] map[]} [{host false /mnt/test  0xc42264cc20} {default-token-v8v7d true /var/run/secrets/kubernetes.io/serviceaccount  <nil>}] [] nil nil nil /dev/termination-log File Always SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-master,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc42264aee0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc42264af00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-21 02:56:57 +0000 UTC  }
  {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-08-21 02:56:57 +0000 UTC ContainersNotReady containers with unready status: [cntr]}
  {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [cntr]}
  {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-21 02:56:57 +0000 UTC  }],
  Message:,Reason:,HostIP:192.168.1.108,PodIP:10.244.0.194,StartTime:2018-08-21 02:56:57 +0000 UTC,
  ContainerStatuses:[{cntr {ContainerStateWaiting{
    Reason:RunContainerError,Message:failed to start container "fca808f2ee54af6f10fd4698092a0a3af54de1ebacf3f6529568884e9d641124": Error response from daemon: linux mounts: Path /var/lib/kubelet/e2e-tests-mount-propagation-6hpqr is mounted on / but it is not a shared mount.,} nil nil}
  {nil nil &ContainerStateTerminated{ExitCode:128,Signal:0,Reason:ContainerCannotRun,Message:linux mounts: Path /var/lib/kubelet/e2e-tests-mount-propagation-6hpqr is mounted on / but it is not a shared mount.,StartedAt:2018-08-21 02:57:02 +0000 UTC,FinishedAt:2018-08-21 02:57:02 +0000 UTC,ContainerID:docker://fca808f2ee54af6f10fd4698092a0a3af54de1ebacf3f6529568884e9d641124,}} false 1 busybox:latest docker://sha256:e1ddd7948a1c31709a23cc5b7dfe96e55fc364f90e1cebcde0773a1b5a30dcda docker://fca808f2ee54af6f10fd4698092a0a3af54de1ebacf3f6529568884e9d641124}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

So the root cause is likely the following:

Reason:RunContainerError,Message:failed to start container "fca808f2ee54af6f10fd4698092a0a3af54de1ebacf3f6529568884e9d641124":
Error response from daemon: linux mounts: Path /var/lib/kubelet/e2e-tests-mount-propagation-6hpqr is
mounted on / but it is not a shared mount.
oomichi commented 6 years ago

Reviewing the content of k/kubernetes/issues/61058:

When upgrading from v1.9.2 to v1.10.0-beta2, containers using hostPath started to fail like this:

        message: 'linux mounts: Path /opt/kubelet/dev is mounted on / but it is not
          a shared or slave mount.'

This message seems to come from the docker daemon. However, the docker daemon was not upgraded, so a behavior change in the kubelet is presumably the cause.

Explanation of the issue:

MountPropagation feature is now beta. As a result, all volume mounts in containers are now "rslave" on
Linux by default. To make this default work in all Linux environments- you should have entire mount tree
marked as shareable via mount --make-rshared / . All Linux distributions that use systemd already have
root directory mounted as rshared and hence they need not do anything. In Linux environments without
systemd we also recommend restarting docker daemon after marking root directory as rshared
oomichi commented 6 years ago

Based on the above and https://access.redhat.com/documentation/ja-jp/red_hat_enterprise_linux/7/html/storage_administration_guide/sect-using_the_mount_command-mounting#sect-Using_the_mount_Command-Mounting-Bind , it looks like running the following on the execution nodes (k8s-master, k8s-node01) should fix it:

# mount --make-rshared /

However, the mount information does not appear to change, and the test still fails:

$ sudo mount | grep " / "
/dev/vda1 on / type ext4 (rw,relatime,data=ordered)
$ sudo mount --make-rshared /
$ sudo mount | grep " / "
/dev/vda1 on / type ext4 (rw,relatime,data=ordered)
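Note that `mount(8)` output does not include propagation flags at all, so the `mount | grep " / "` check above cannot tell whether `--make-rshared` took effect. The flags are visible in the optional fields of `/proc/self/mountinfo` (or via `findmnt -o TARGET,PROPAGATION /`, if util-linux's findmnt is available). A small check sketch, with field positions taken from the proc(5) mountinfo format:

```shell
# Print the propagation state of /. In /proc/self/mountinfo the 5th field
# is the mount point and the 7th field is the first optional field, which
# reads "shared:N" when the mount belongs to a shared peer group,
# "master:N" when it is a slave, or "-" when there are no optional fields.
flags=$(awk '$5 == "/" { print $7; exit }' /proc/self/mountinfo)
case "$flags" in
  shared:*) echo "/ is shared" ;;
  master:*) echo "/ is a slave mount" ;;
  *)        echo "/ is private (flags: $flags)" ;;
esac
```

If this still reports "/ is private" after `mount --make-rshared /`, the change did not stick; if it reports shared but the test still fails, the docker daemon may need a restart to pick up the new propagation.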