oomichi / try-kubernetes


Subpath [Volume type: hostPath] [It] should fail if subpath with backstepping is outside the volume #51

Closed oomichi closed 5 years ago

oomichi commented 5 years ago
[sig-storage] Subpath [Volume type: hostPath]
  should fail if subpath with backstepping is outside the volume [Slow]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:228
[BeforeEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Sep 21 22:48:27.500: INFO: >>> kubeConfig: /home/ubuntu/admin.conf
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [Volume type: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:149
STEP: Initializing hostPath volume
[It] should fail if subpath with backstepping is outside the volume [Slow]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:228
STEP: Creating pod pod-subpath-test-hostpath-lc69
STEP: Checking for subpath error event
Sep 21 22:49:27.606: INFO: Deleting pod "pod-subpath-test-hostpath-lc69" in namespace "e2e-tests-subpath-6j4pf"
Sep 21 22:49:27.646: INFO: Wait up to 5m0s for pod "pod-subpath-test-hostpath-lc69" to be fully deleted
[AfterEach] [Volume type: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:161
STEP: Deleting pod
Sep 21 22:49:27.655: INFO: Deleting pod "pod-subpath-test-hostpath-lc69" in namespace "e2e-tests-subpath-6j4pf"
STEP: Cleaning up volume
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-subpath-6j4pf".
STEP: Found 0 events.
Sep 21 22:49:27.688: INFO: POD                                             NODE        PHASE    GRACE  CONDITIONS
Sep 21 22:49:27.688: INFO: standalone-cinder-provisioner-7d6594d789-9mtb9  k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:04 +0000 UTC  }]
Sep 21 22:49:27.688: INFO: coredns-78fcdf6894-ndw7b                        k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-20 11:02:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-20 11:02:37 +0000 UTC  }]
Sep 21 22:49:27.688: INFO: coredns-78fcdf6894-vbx59                        k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-20 11:02:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-20 11:02:37 +0000 UTC  }]
Sep 21 22:49:27.688: INFO: etcd-k8s-master                                 k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  }]
Sep 21 22:49:27.688: INFO: kube-apiserver-k8s-master                       k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  }]
Sep 21 22:49:27.688: INFO: kube-controller-manager-k8s-master              k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  }]
Sep 21 22:49:27.688: INFO: kube-flannel-ds-7df6r                           k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-17 17:12:31 +0000 UTC  }]
Sep 21 22:49:27.688: INFO: kube-flannel-ds-b8hw8                           k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-20 02:19:27 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-20 02:19:25 +0000 UTC  }]
Sep 21 22:49:27.688: INFO: kube-proxy-hxp7z                                k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:51 +0000 UTC  }]
Sep 21 22:49:27.688: INFO: kube-proxy-zwrl4                                k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:37 +0000 UTC  }]
Sep 21 22:49:27.688: INFO: kube-scheduler-k8s-master                       k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:47:57 +0000 UTC ContainersNotReady containers with unready status: [kube-scheduler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [kube-scheduler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  }]
Sep 21 22:49:27.688: INFO: metrics-server-86bd9d7667-twb2r                 k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:45:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:45:39 +0000 UTC  }]
Sep 21 22:49:27.688: INFO:
Sep 21 22:49:27.692: INFO:
Logging node info for node k8s-master
Sep 21 22:49:27.695: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-master,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-master,UID:94f19db7-89e3-11e8-b234-fa163e420595,ResourceVersion:6585810,Generation:0,CreationTimestamp:2018-07-17 17:05:18 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: k8s-master,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"56:de:cf:52:3c:5c"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 192.168.1.108,kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{41567956992 0} {<nil>} 40593708Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4143382528 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37411161231 0} {<nil>} 37411161231 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4038524928 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-09-21 22:49:19 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-09-21 22:49:19 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientMemory kubelet has 
sufficient memory available} {DiskPressure False 2018-09-21 22:49:19 +0000 UTC 2018-09-21 21:58:19 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-09-21 22:49:19 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-09-21 22:49:19 +0000 UTC 2018-07-31 23:04:27 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 192.168.1.108} {Hostname k8s-master}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1db2c06c39a54cd3a93a4e0a44823fd6,SystemUUID:1DB2C06C-39A5-4CD3-A93A-4E0A44823FD6,BootID:ab0bf520-7681-4ed1-af06-15882503d093,KernelVersion:4.4.0-134-generic,OSImage:Ubuntu 16.04.5 LTS,ContainerRuntimeVersion:docker://1.11.2,KubeletVersion:v1.11.1,KubeProxyVersion:v1.11.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[golang:1.10] 793901893} {[k8s.gcr.io/etcd-amd64:3.2.18] 218904307} {[k8s.gcr.io/kube-apiserver-amd64:v1.11.1] 186675825} {[k8s.gcr.io/kube-controller-manager-amd64:v1.11.1] 155252555} {[k8s.gcr.io/kube-proxy-amd64:v1.11.1] 97776424} {[k8s.gcr.io/kube-scheduler-amd64:v1.11.1] 56781436} {[quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[k8s.gcr.io/coredns:1.1.3] 45587362} {[gcr.io/google_containers/metrics-server-amd64:v0.2.1] 42541759} {[k8scloudprovider/cinder-provisioner:latest] 29292916} {[gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0] 1563521} {[busybox:latest] 1162769} {[k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Sep 21 22:49:27.695: INFO:
Logging kubelet events for node k8s-master
Sep 21 22:49:27.702: INFO:
Logging pods the kubelet thinks is on node k8s-master
Sep 21 22:49:27.719: INFO: etcd-k8s-master started at <nil> (0+0 container statuses recorded)
Sep 21 22:49:27.719: INFO: kube-apiserver-k8s-master started at <nil> (0+0 container statuses recorded)
Sep 21 22:49:27.719: INFO: kube-controller-manager-k8s-master started at <nil> (0+0 container statuses recorded)
Sep 21 22:49:27.719: INFO: kube-flannel-ds-7df6r started at 2018-07-17 17:12:31 +0000 UTC (1+1 container statuses recorded)
Sep 21 22:49:27.719: INFO:      Init container install-cni ready: true, restart count 10
Sep 21 22:49:27.719: INFO:      Container kube-flannel ready: true, restart count 10
Sep 21 22:49:27.719: INFO: metrics-server-86bd9d7667-twb2r started at 2018-08-03 08:45:39 +0000 UTC (0+1 container statuses recorded)
Sep 21 22:49:27.719: INFO:      Container metrics-server ready: true, restart count 5
Sep 21 22:49:27.719: INFO: standalone-cinder-provisioner-7d6594d789-9mtb9 started at 2018-08-11 12:39:04 +0000 UTC (0+1 container statuses recorded)
Sep 21 22:49:27.719: INFO:      Container standalone-cinder-provisioner ready: true, restart count 5
Sep 21 22:49:27.719: INFO: coredns-78fcdf6894-vbx59 started at 2018-09-20 11:02:37 +0000 UTC (0+1 container statuses recorded)
Sep 21 22:49:27.719: INFO:      Container coredns ready: true, restart count 4
Sep 21 22:49:27.719: INFO: kube-scheduler-k8s-master started at <nil> (0+0 container statuses recorded)
Sep 21 22:49:27.719: INFO: kube-proxy-zwrl4 started at 2018-07-31 23:08:37 +0000 UTC (0+1 container statuses recorded)
Sep 21 22:49:27.719: INFO:      Container kube-proxy ready: true, restart count 10
Sep 21 22:49:27.759: INFO:
Latency metrics for node k8s-master
Sep 21 22:49:27.759: INFO:
Logging node info for node k8s-node01
Sep 21 22:49:27.762: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-node01,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-node01,UID:980d8d67-9515-11e8-a804-fa163e420595,ResourceVersion:6585815,Generation:0,CreationTimestamp:2018-07-31 23:01:01 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/zone: ,kubernetes.io/hostname: k8s-node01,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"06:d1:cf:44:a1:30"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 192.168.1.109,kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{41567956992 0} {<nil>} 40593708Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4143382528 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37411161231 0} {<nil>} 37411161231 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4038524928 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-09-21 22:49:24 +0000 UTC 2018-09-21 22:28:27 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-09-21 22:49:24 +0000 UTC 2018-09-21 22:28:27 +0000 UTC KubeletHasSufficientMemory kubelet has 
sufficient memory available} {DiskPressure False 2018-09-21 22:49:24 +0000 UTC 2018-09-21 22:28:27 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-09-21 22:49:24 +0000 UTC 2018-07-31 23:01:01 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-09-21 22:49:24 +0000 UTC 2018-09-21 22:28:27 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 192.168.1.109} {Hostname k8s-node01}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:817a385b9de241668e47cd87cda24f47,SystemUUID:817A385B-9DE2-4166-8E47-CD87CDA24F47,BootID:110bcdbc-2e2d-4fa2-afe3-3c832ccf29ec,KernelVersion:4.4.0-134-generic,OSImage:Ubuntu 16.04.4 LTS,ContainerRuntimeVersion:docker://1.11.2,KubeletVersion:v1.11.1,KubeProxyVersion:v1.11.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[humblec/glusterdynamic-provisioner:v1.0] 373281573} {[gcr.io/google-samples/gb-frontend-amd64:v5] 373099368} {[quay.io/kubernetes_incubator/nfs-provisioner:v1.0.9] 332415371} {[gcr.io/kubernetes-e2e-test-images/volume-nfs:0.8] 247157334} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils-amd64:1.0] 195659796} {[k8s.gcr.io/resource_consumer:beta] 132805424} {[k8s.gcr.io/nginx-slim:0.8] 110487599} {[nginx:latest] 109057266} {[<none>@<none> <none>:<none>] 108975101} {[k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google-samples/gb-redisslave-amd64:v2] 98945667} {[k8s.gcr.io/kube-proxy-amd64:v1.11.1] 97776424} {[k8s.gcr.io/kube-proxy-amd64:v1.11.0] 97772373} {[k8s.gcr.io/echoserver:1.10] 95361986} {[k8s.gcr.io/nginx-slim-amd64:0.21] 95339966} {[quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/kubernetes-e2e-test-images/resource-consumer-amd64:1.3 gcr.io/kubernetes-e2e-test-images/resource-consumer:1.3] 49707607} {[quay.io/k8scsi/csi-attacher:v0.2.0] 45644524} {[k8s.gcr.io/coredns:1.1.3] 45587362} 
{[quay.io/k8scsi/csi-provisioner:v0.2.1] 45078229} {[gcr.io/google_containers/metrics-server-amd64:v0.2.1] 42541759} {[quay.io/k8scsi/driver-registrar:v0.2.0] 42385441} {[k8scloudprovider/cinder-provisioner:latest] 28582964} {[gcr.io/kubernetes-e2e-test-images/nettest-amd64:1.0] 27413498} {[quay.io/k8scsi/hostpathplugin:v0.2.0] 17287699} {[gcr.io/kubernetes-e2e-test-images/net-amd64:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/dnsutils-amd64:1.0] 9030162} {[gcr.io/kubernetes-e2e-test-images/hostexec-amd64:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6713741} {[gcr.io/kubernetes-e2e-test-images/redis-amd64:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/resource-consumer/controller-amd64:1.0] 5902947} {[gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.0] 5470001} {[gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten-amd64:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver-amd64:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter-amd64:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness-amd64:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/fakegitserver-amd64:1.0] 4608683} {[k8s.gcr.io/k8s-dns-dnsmasq-amd64:1.14.5] 4324973} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester-amd64:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/port-forward-tester-amd64:1.0] 1992230} {[gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user-amd64:1.0] 1450451} {[busybox:latest] 1162769} {[k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Sep 21 22:49:27.762: INFO:
Logging kubelet events for node k8s-node01
Sep 21 22:49:27.773: INFO:
Logging pods the kubelet thinks is on node k8s-node01
Sep 21 22:49:27.796: INFO: coredns-78fcdf6894-ndw7b started at 2018-09-20 11:02:37 +0000 UTC (0+1 container statuses recorded)
Sep 21 22:49:27.796: INFO:      Container coredns ready: true, restart count 1
Sep 21 22:49:27.796: INFO: kube-proxy-hxp7z started at 2018-07-31 23:08:51 +0000 UTC (0+1 container statuses recorded)
Sep 21 22:49:27.796: INFO:      Container kube-proxy ready: true, restart count 4
Sep 21 22:49:27.796: INFO: kube-flannel-ds-b8hw8 started at 2018-09-20 02:19:25 +0000 UTC (1+1 container statuses recorded)
Sep 21 22:49:27.797: INFO:      Init container install-cni ready: true, restart count 1
Sep 21 22:49:27.797: INFO:      Container kube-flannel ready: true, restart count 1
Sep 21 22:49:27.849: INFO:
Latency metrics for node k8s-node01
STEP: Dumping a list of prepulled images on each node...
Sep 21 22:49:27.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-6j4pf" for this suite.
Sep 21 22:49:33.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 21 22:49:33.973: INFO: namespace: e2e-tests-subpath-6j4pf, resource: bindings, ignored listing per whitelist
Sep 21 22:49:34.041: INFO: namespace e2e-tests-subpath-6j4pf deletion completed in 6.165596551s

• Failure [66.542 seconds]
[sig-storage] Subpath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Volume type: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:148
    should fail if subpath with backstepping is outside the volume [Slow] [It]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:228

    no events found
    Expected
        <int>: 0
    not to equal
        <int>: 0

    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:529
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSep 21 22:49:34.043: INFO: Running AfterSuite actions on all node
Sep 21 22:49:34.043: INFO: Running AfterSuite actions on node 1

Summarizing 1 Failure:

[Fail] [sig-storage] Subpath [Volume type: hostPath] [It] should fail if subpath with backstepping is outside the volume [Slow]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:529
oomichi commented 5 years ago

Test code:

 509 func testPodFailSubpathError(f *framework.Framework, pod *v1.Pod, errorMsg string) {
 510         By(fmt.Sprintf("Creating pod %s", pod.Name))
 511         pod, err := f.ClientSet.CoreV1().Pods(f.Namespace.Name).Create(pod)
 512         Expect(err).ToNot(HaveOccurred(), "while creating pod")
 513         defer func() {
 514                 framework.DeletePodWithWait(f, f.ClientSet, pod)
 515         }()
 516         err = framework.WaitTimeoutForPodRunningInNamespace(f.ClientSet, pod.Name, pod.Namespace, time.Minute)
 517         Expect(err).To(HaveOccurred(), "while waiting for pod to be running")
 518
 519         By("Checking for subpath error event")
 520         selector := fields.Set{
 521                 "involvedObject.kind":      "Pod",
 522                 "involvedObject.name":      pod.Name,
 523                 "involvedObject.namespace": f.Namespace.Name,
 524                 "reason":                   "Failed",
 525         }.AsSelector().String()
 526         options := metav1.ListOptions{FieldSelector: selector}
 527         events, err := f.ClientSet.CoreV1().Events(f.Namespace.Name).List(options)
 528         Expect(err).NotTo(HaveOccurred(), "while getting pod events")
 529         Expect(len(events.Items)).NotTo(Equal(0), "no events found")
★ The error occurred here.

 530         Expect(events.Items[0].Message).To(ContainSubstring(errorMsg), fmt.Sprintf("%q error not found", errorMsg))
 531 }
..
 228                         It("should fail if subpath with backstepping is outside the volume [Slow]", func() {
 229                                 // Create the subpath outside the volume
 230                                 setInitCommand(pod, fmt.Sprintf("ln -s ../ %s", subPathDir))
 231
 232                                 // Pod should fail
 233                                 testPodFailSubpath(f, pod)
 234                         })
..
 505 func testPodFailSubpath(f *framework.Framework, pod *v1.Pod) {
 506         testPodFailSubpathError(f, pod, "subPath")
 507 }
oomichi commented 5 years ago

Running just this test on its own succeeds: it passed 3 times out of 3.

$ go run hack/e2e.go -- --provider=skeleton --test --test_args="--ginkgo.focus=hostPath\]\sshould\sfail\sif\ssubpath\swith\sbackstepping\sis\soutside\sthe\svolume" --check-version-skew=false

Hmm, it now also succeeds with the command below, which had reproduced the failure earlier.

$ go run hack/e2e.go -- --provider=skeleton --test --test_args="--ginkgo.focus=should\sfail\sif\ssubpath\swith\sbackstepping\sis\soutside\sthe\svolume" --check-version-skew=false

Running the full suite, it failed again.

oomichi commented 5 years ago

Work out the test's expected scenario.

  1. Set the initContainer's command to: ln -s ../ /test-volume/e2e-tests-subpath-mrbdh
  2. Create the Pod
  3. Wait for the Pod to reach the Running state
  4. Call the List Event API with Kind: Pod, Name: <pod name>, Namespace: <namespace name>, Reason: Failed
  5. Verify that the List Event API returns a non-zero number of events (this is where the error occurred)
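The filter in step 4 can be sketched in plain Go. `buildFieldSelector` below is a hypothetical stand-in for what `fields.Set{...}.AsSelector().String()` produces in the test (a comma-separated list of key=value terms passed as the `fieldSelector` of the List call), not the actual apimachinery implementation:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// buildFieldSelector joins key=value terms into a field selector string,
// mimicking (illustratively) fields.Set{...}.AsSelector().String().
func buildFieldSelector(set map[string]string) string {
	terms := make([]string, 0, len(set))
	for k, v := range set {
		terms = append(terms, k+"="+v)
	}
	sort.Strings(terms) // deterministic order for readability
	return strings.Join(terms, ",")
}

func main() {
	sel := buildFieldSelector(map[string]string{
		"involvedObject.kind":      "Pod",
		"involvedObject.name":      "pod-subpath-test-hostpath-lc69",
		"involvedObject.namespace": "e2e-tests-subpath-6j4pf",
		"reason":                   "Failed",
	})
	fmt.Println(sel)
}
```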

Why does the test expect a Failed event to be generated?

The command configured for the initContainer to run is the ln command.

oomichi commented 5 years ago

Investigate why the problem occurs with hostPath.

hostPath: https://kubernetes.io/docs/concepts/storage/volumes/#hostpath


A hostPath volume mounts a file or directory from the host node's filesystem directly into the Pod.
This is not something most Pods will need, but it offers a powerful escape hatch for some applications.
For example, some possible use cases for hostPath are:
- running a container that needs access to Docker internals; use a hostPath of /var/lib/docker
- running cAdvisor in a container; use a hostPath of /sys
- requiring, at Pod creation time, that a given hostPath exist before the Pod runs
oomichi commented 5 years ago

The expected error message is:

events.Items[0].Message: Error: failed to prepare subPath for volumeMount "test-volume" of container "test-container-subpath-hostpath-74mf"

It is emitted from pkg/kubelet/kubelet_pods.go:

 202                         hostPath, cleanupAction, err = mounter.PrepareSafeSubpath(mountutil.Subpath{
 203                                 VolumeMountIndex: i,
 204                                 Path:             hostPath,
 205                                 VolumeName:       vol.InnerVolumeSpecName,
 206                                 VolumePath:       volumePath,
 207                                 PodDir:           podDir,
 208                                 ContainerName:    container.Name,
 209                         })
 210                         if err != nil {
 211                                 // Don't pass detailed error back to the user because it could give information about host filesystem
 212                                 glog.Errorf("failed to prepare subPath for volumeMount %q of container %q: %v", mount.Name, container.Name, err)
 213                                 return nil, cleanupAction, fmt.Errorf("failed to prepare subPath for volumeMount %q of container %q", mount.Name, container.Name)
 214                         }

The message that was output to syslog:

Sep 25 01:01:26 localhost kubelet[1040]: E0925 01:01:26.052532    1040 kubelet_pods.go:212] failed to prepare subPath for volumeMount "test-volume" of container "test-container-subpath-hostpath-74mf": subpath "/" not within volume path "/tmp"
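The containment check behind that message can be sketched with the standard library. `isWithinVolume` is a hypothetical, simplified stand-in for the real check inside `mounter.PrepareSafeSubpath`, which resolves each path component on the host to defeat symlink races; here we only ask whether an already-resolved path lies inside the volume:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// isWithinVolume reports whether a fully resolved subpath still lies
// inside volumePath. A simplified sketch of the check behind
// `subpath %q not within volume path %q`, not the kubelet's actual code.
func isWithinVolume(volumePath, resolved string) bool {
	rel, err := filepath.Rel(volumePath, resolved)
	if err != nil {
		return false
	}
	// Any relative path that starts by stepping up escapes the volume.
	return rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator))
}

func main() {
	// The symlink `ln -s ../ /test-volume/<ns>` makes the subpath
	// resolve to "/", which is outside the hostPath volume "/tmp".
	fmt.Println(isWithinVolume("/tmp", "/"))        // false
	fmt.Println(isWithinVolume("/tmp", "/tmp/sub")) // true
}
```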
oomichi commented 5 years ago

What is subPath in the first place?

https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath

Sometimes it is useful to share one volume for multiple uses in a single Pod. The
volumeMounts.subPath property can be used to specify a sub-path inside the referenced
volume instead of its root.

In other words, the Pod spec is the key part:

 152                                 subPath = f.Namespace.Name
 [..]
 157                                 pod = testPodSubpath(f, subPath, curVolType, volInfo.source)
...
 356 func testPodSubpath(f *framework.Framework, subpath, volumeType string, source *v1.VolumeSource) *v1.Pod {
 [..]
 388                         Containers: []v1.Container{
 389                                 {
 390                                         Name:  fmt.Sprintf("test-container-subpath-%s", suffix),
 391                                         Image: mountImage,
 392                                         VolumeMounts: []v1.VolumeMount{
 393                                                 {
 394                                                         Name:      volumeName,
 395                                                         MountPath: volumePath,
 396                                                         SubPath:   subpath,

In other words, in the Pod's container test-container-subpath-xxxxxx, the hostPath volume (the host's /tmp) is mounted at the mount point /test-volume with SubPath set to the namespace name. The error message in that case is

subpath "/" not within volume path "/tmp"

but is this / coming from the following initContainer command?

ln -s ../ /test-volume/e2e-tests-subpath-xxxxxxx
oomichi commented 5 years ago

Verify that the test fails when the initContainer command is removed. → Confirmed it fails 5 times out of 5. → With the command left in place, confirmed it passes 5 times out of 5.

--- a/test/e2e/storage/subpath.go
+++ b/test/e2e/storage/subpath.go
@@ -227,7 +227,7 @@ var _ = utils.SIGDescribe("Subpath", func() {

                        It("should fail if subpath with backstepping is outside the volume [Slow]", func() {
                                // Create the subpath outside the volume
-                               setInitCommand(pod, fmt.Sprintf("ln -s ../ %s", subPathDir))
+                               // setInitCommand(pod, fmt.Sprintf("ln -s ../ %s", subPathDir))

                                // Pod should fail
                                testPodFailSubpath(f, pod)
$
$ go run hack/e2e.go -- --provider=skeleton --test --test_args="--ginkgo.focus=hostPath\]\sshould\sfail\sif\ssubpath\swith\sbackstepping\sis\soutside\sthe\svolume" --check-version-skew=false
oomichi commented 5 years ago

All events when the failure occurred with emptyDir:

STEP: All events: &EventList{ListMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ListMeta{
   SelfLink:/api/v1/namespaces/e2e-tests-subpath-xxlpc/events,ResourceVersion:6942061,Continue:,},
   Items:[],}

All events when other volumeTypes passed:

STEP: All events: &EventList{ListMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ListMeta{
   SelfLink:/api/v1/namespaces/e2e-tests-subpath-fvhm2/events,ResourceVersion:6942377,Continue:,},
   Items:[{
      { }
      {pod-subpath-test-nfs-78dj.1557bdb6e2fbdca9  e2e-tests-subpath-fvhm2 /api/v1/namespaces/e2e-tests-subpath-fvhm2/events/pod-subpath-test-nfs-78dj.1557bdb6e2fbdca9 6cd0d22a-c0ff-11e8-844e-fa163e420595 6942277 0 2018-09-25 20:13:10 +0000 UTC <nil> <nil> map[] map[] [] nil [] }
      {Pod e2e-tests-subpath-fvhm2 pod-subpath-test-nfs-78dj 6ccae949-c0ff-11e8-844e-fa163e420595 v1 6942275 }
      Scheduled Successfully assigned e2e-tests-subpath-fvhm2/pod-subpath-test-nfs-78dj to k8s-node01
      {default-scheduler }
      2018-09-25 20:13:10 +0000 UTC 2018-09-25 20:13:10 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil
    }
    {
      { }
      {pod-subpath-test-nfs-78dj.1557bdb78bbc002b  e2e-tests-subpath-fvhm2 /api/v1/namespaces/e2e-tests-subpath-fvhm2/events/pod-subpath-test-nfs-78dj.1557bdb78bbc002b 6e824dfa-c0ff-11e8-844e-fa163e420595 6942282 0 2018-09-25 20:13:13 +0000 UTC <nil> <nil> map[] map[] [] nil [] }
      {Pod e2e-tests-subpath-fvhm2 pod-subpath-test-nfs-78dj 6ccae949-c0ff-11e8-844e-fa163e420595 v1 6942276 spec.initContainers{init-volume-nfs-78dj}}
      Pulling pulling image "busybox"
      {kubelet k8s-node01}
      2018-09-25 20:13:13 +0000 UTC 2018-09-25 20:13:13 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil
    }
    {
      { }
      {pod-subpath-test-nfs-78dj.1557bdb7e52f4d0a  e2e-tests-subpath-fvhm2 /api/v1/namespaces/e2e-tests-subpath-fvhm2/events/pod-subpath-test-nfs-78dj.1557bdb7e52f4d0a 6f667f9c-c0ff-11e8-844e-fa163e420595 6942286 0 2018-09-25 20:13:15 +0000 UTC <nil> <nil> map[] map[] [] nil [] }
      {Pod e2e-tests-subpath-fvhm2 pod-subpath-test-nfs-78dj 6ccae949-c0ff-11e8-844e-fa163e420595 v1 6942276 spec.initContainers{init-volume-nfs-78dj}}
      Pulled Successfully pulled image "busybox"
      {kubelet k8s-node01}
      2018-09-25 20:13:15 +0000 UTC 2018-09-25 20:13:15 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil
    }
    {
      { }
      {pod-subpath-test-nfs-78dj.1557bdb7e968fcdf  e2e-tests-subpath-fvhm2 /api/v1/namespaces/e2e-tests-subpath-fvhm2/events/pod-subpath-test-nfs-78dj.1557bdb7e968fcdf 6f70debf-c0ff-11e8-844e-fa163e420595 6942287 0 2018-09-25 20:13:15 +0000 UTC <nil> <nil> map[] map[] [] nil [] } {Pod e2e-tests-subpath-fvhm2 pod-subpath-test-nfs-78dj 6ccae949-c0ff-11e8-844e-fa163e420595 v1 6942276 spec.initContainers{init-volume-nfs-78dj}} Created Created container {kubelet k8s-node01} 2018-09-25 20:13:15 +0000 UTC 2018-09-25 20:13:15 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {pod-subpath-test-nfs-78dj.1557bdb7f0808230  e2e-tests-subpath-fvhm2 /api/v1/namespaces/e2e-tests-subpath-fvhm2/events/pod-subpath-test-nfs-78dj.1557bdb7f0808230 6f830fda-c0ff-11e8-844e-fa163e420595 6942288 0 2018-09-25 20:13:15 +0000 UTC <nil> <nil> map[] map[] [] nil [] } {Pod e2e-tests-subpath-fvhm2 pod-subpath-test-nfs-78dj 6ccae949-c0ff-11e8-844e-fa163e420595 v1 6942276 spec.initContainers{init-volume-nfs-78dj}} Started Started container {kubelet k8s-node01} 2018-09-25 20:13:15 +0000 UTC 2018-09-25 20:13:15 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {pod-subpath-test-nfs-78dj.1557bdb81321d79f  e2e-tests-subpath-fvhm2 /api/v1/namespaces/e2e-tests-subpath-fvhm2/events/pod-subpath-test-nfs-78dj.1557bdb81321d79f 6fdc67f1-c0ff-11e8-844e-fa163e420595 6942360 0 2018-09-25 20:13:16 +0000 UTC <nil> <nil> map[] map[] [] nil [] } {Pod e2e-tests-subpath-fvhm2 pod-subpath-test-nfs-78dj 6ccae949-c0ff-11e8-844e-fa163e420595 v1 6942276 spec.containers{test-container-subpath-nfs-78dj}} Pulled Container image "gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0" already present on machine {kubelet k8s-node01} 2018-09-25 20:13:16 +0000 UTC 2018-09-25 20:13:58 +0000 UTC 6 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {pod-subpath-test-nfs-78dj.1557bdb81350a463  e2e-tests-subpath-fvhm2 
/api/v1/namespaces/e2e-tests-subpath-fvhm2/events/pod-subpath-test-nfs-78dj.1557bdb81350a463 6fded5cf-c0ff-11e8-844e-fa163e420595 6942361 0 2018-09-25 20:13:16 +0000 UTC <nil> <nil> map[] map[] [] nil [] } {Pod e2e-tests-subpath-fvhm2 pod-subpath-test-nfs-78dj 6ccae949-c0ff-11e8-844e-fa163e420595 v1 6942276 spec.containers{test-container-subpath-nfs-78dj}} Failed Error: failed to prepare subPath for volumeMount "test-volume" of container "test-container-subpath-nfs-78dj" {kubelet k8s-node01} 2018-09-25 20:13:16 +0000 UTC 2018-09-25 20:13:58 +0000 UTC 6 Warning 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {pod-subpath-test-nfs-78dj.1557bdb813d37ac2  e2e-tests-subpath-fvhm2 /api/v1/namespaces/e2e-tests-subpath-fvhm2/events/pod-subpath-test-nfs-78dj.1557bdb813d37ac2 6fe0981a-c0ff-11e8-844e-fa163e420595 6942292 0 2018-09-25 20:13:16 +0000 UTC <nil> <nil> map[] map[] [] nil [] } {Pod e2e-tests-subpath-fvhm2 pod-subpath-test-nfs-78dj 6ccae949-c0ff-11e8-844e-fa163e420595 v1 6942276 spec.containers{test-container-volume-nfs-78dj}} Pulled Container image "gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0" already present on machine {kubelet k8s-node01} 2018-09-25 20:13:16 +0000 UTC 2018-09-25 20:13:16 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {pod-subpath-test-nfs-78dj.1557bdb817d9fdaf  e2e-tests-subpath-fvhm2 /api/v1/namespaces/e2e-tests-subpath-fvhm2/events/pod-subpath-test-nfs-78dj.1557bdb817d9fdaf 6fe7b316-c0ff-11e8-844e-fa163e420595 6942293 0 2018-09-25 20:13:16 +0000 UTC <nil> <nil> map[] map[] [] nil [] } {Pod e2e-tests-subpath-fvhm2 pod-subpath-test-nfs-78dj 6ccae949-c0ff-11e8-844e-fa163e420595 v1 6942276 spec.containers{test-container-volume-nfs-78dj}} Created Created container {kubelet k8s-node01} 2018-09-25 20:13:16 +0000 UTC 2018-09-25 20:13:16 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  } {{ } {pod-subpath-test-nfs-78dj.1557bdb8224f8e1e  e2e-tests-subpath-fvhm2 
/api/v1/namespaces/e2e-tests-subpath-fvhm2/events/pod-subpath-test-nfs-78dj.1557bdb8224f8e1e 7002c8f9-c0ff-11e8-844e-fa163e420595 6942294 0 2018-09-25 20:13:16 +0000 UTC <nil> <nil> map[] map[] [] nil [] } {Pod e2e-tests-subpath-fvhm2 pod-subpath-test-nfs-78dj 6ccae949-c0ff-11e8-844e-fa163e420595 v1 6942276 spec.containers{test-container-volume-nfs-78dj}} Started Started container {kubelet k8s-node01} 2018-09-25 20:13:16 +0000 UTC 2018-09-25 20:13:16 +0000 UTC 1 Normal 0001-01-01 00:00:00 +0000 UTC nil  nil  }],}

Comparing the two outputs above, the first one (emptyDir) does not even contain the event for the Pod being scheduled. Need to check the Pod's status.

oomichi commented 5 years ago

The test appears to wait for the Running state in the following sequence:

        pod, err := f.ClientSet.CoreV1().Pods(f.Namespace.Name).Create(pod)
        Expect(err).ToNot(HaveOccurred(), "while creating pod")
        defer func() {
                framework.DeletePodWithWait(f, f.ClientSet, pod)
        }()
        err = framework.WaitTimeoutForPodRunningInNamespace(f.ClientSet, pod.Name, pod.Namespace, time.Minute)
        Expect(err).To(HaveOccurred(), "while waiting for pod to be running")

Need to investigate whether the following function actually works correctly:

k8s.io/apimachinery/pkg/util/wait
func WaitTimeoutForPodRunningInNamespace(c clientset.Interface, podName, namespace string, timeout time.Duration) error {
        return wait.PollImmediate(Poll, timeout, podRunning(c, podName, namespace))
}

func podRunning(c clientset.Interface, podName, namespace string) wait.ConditionFunc {
        return func() (bool, error) {
                pod, err := c.CoreV1().Pods(namespace).Get(podName, metav1.GetOptions{})
                if err != nil {
                        return false, err
                }
                switch pod.Status.Phase {
                case v1.PodRunning:
                        return true, nil
                case v1.PodFailed, v1.PodSucceeded:
                        return false, conditions.ErrPodCompleted
                }
                return false, nil
        }
}
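The important detail in podRunning is which phases end the wait: only Running terminates the poll successfully, Failed/Succeeded abort it with an error, and everything else (notably Pending) returns (false, nil), so the poll just keeps going until the timeout. A minimal standalone sketch of that switch, using a hypothetical checkPhase helper and plain strings instead of the v1 API types:

```go
package main

import (
	"errors"
	"fmt"
)

// errPodCompleted stands in for conditions.ErrPodCompleted in the snippet above.
var errPodCompleted = errors.New("pod ran to completion")

// checkPhase re-implements the phase switch inside podRunning:
// Running finishes the poll, Failed/Succeeded abort it, and any
// other phase (e.g. Pending) keeps the poll running.
func checkPhase(phase string) (bool, error) {
	switch phase {
	case "Running":
		return true, nil
	case "Failed", "Succeeded":
		return false, errPodCompleted
	}
	return false, nil
}

func main() {
	for _, phase := range []string{"Pending", "Running", "Failed"} {
		done, err := checkPhase(phase)
		fmt.Printf("%s: done=%v err=%v\n", phase, done, err)
	}
}
```

This matches the behavior seen later in the logs: a Pod stuck in Pending never satisfies the condition, so WaitTimeoutForPodRunningInNamespace only returns once the one-minute timeout expires.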

vendor/k8s.io/apimachinery/pkg/util/wait/wait.go

// PollImmediate tries a condition func until it returns true, an error, or the timeout
// is reached.
//
// Poll always checks 'condition' before waiting for the interval. 'condition'
// will always be invoked at least once.
//
// Some intervals may be missed if the condition takes too long or the time
// window is too short.
//
// If you want to Poll something forever, see PollInfinite.
func PollImmediate(interval, timeout time.Duration, condition ConditionFunc) error {
        return pollImmediateInternal(poller(interval, timeout), condition)
}

func pollImmediateInternal(wait WaitFunc, condition ConditionFunc) error {
        done, err := condition()
        if err != nil {
                return err
        }
        if done {
                return nil
        }
        return pollInternal(wait, condition)
}
oomichi commented 5 years ago

Reproduced with emptyDir: the Pod's status is stuck at Pending...

STEP: Pod: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{
    Name:pod-subpath-test-emptydir-2msm,
    GenerateName:,Namespace:e2e-tests-subpath-nfbp5,
    SelfLink:/api/v1/namespaces/e2e-tests-subpath-nfbp5/pods/pod-subpath-test-emptydir-2msm,
    UID:fe5815ce-c109-11e8-844e-fa163e420595,
    ResourceVersion:6947301,
    Generation:0,
    CreationTimestamp:2018-09-25 21:28:50 +0000 UTC,
    DeletionTimestamp:<nil>,
    DeletionGracePeriodSeconds:nil,
    Labels:map[string]string{},
    Annotations:map[string]string{},OwnerReferences:[],
    Finalizers:[],ClusterName:,Initializers:nil,},
    Spec:PodSpec{Volumes:[{test-volume {nil EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {liveness-probe-volume {nil &EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {default-token-sm5mf {nil nil nil nil nil &SecretVolumeSource{SecretName:default-token-sm5mf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{test-container-subpath-emptydir-2msm gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 [] []  [] [] [] {map[] map[]} [{test-volume false /test-volume e2e-tests-subpath-nfbp5 <nil>} {liveness-probe-volume false /probe-volume  <nil>} {default-token-sm5mf true /var/run/secrets/kubernetes.io/serviceaccount  <nil>}] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false} {test-container-volume-emptydir-2msm gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 [] []  [] [] [] {map[] map[]} [{test-volume false /test-volume  <nil>} {liveness-probe-volume false /probe-volume  <nil>} {default-token-sm5mf true /var/run/secrets/kubernetes.io/serviceaccount  <nil>}] [] nil nil nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false 
false}],RestartPolicy:Never,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c0,c1,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[{init-volume-emptydir-2msm busybox [/bin/sh -ec ln -s ../ /test-volume/e2e-tests-subpath-nfbp5] []  [] [] [] {map[] map[]} [{test-volume false /test-volume  <nil>} {liveness-probe-volume false /probe-volume  <nil>} {default-token-sm5mf true /var/run/secrets/kubernetes.io/serviceaccount  <nil>}] [] nil nil nil /dev/termination-log File Always SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc420961bb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc420961dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},
Status:PodStatus{
    Phase:Pending,
    Conditions:[],
    Message:,
    Reason:,
    HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],
    QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,
},}
oomichi commented 5 years ago

The error with hostPath:

[sig-storage] Subpath [Volume type: hostPath]
  should fail if subpath with backstepping is outside the volume [Slow]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:228
[BeforeEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Sep 25 21:29:56.535: INFO: >>> kubeConfig: /home/ubuntu/admin.conf
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [Volume type: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:149
STEP: Initializing hostPath volume
[It] should fail if subpath with backstepping is outside the volume [Slow]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:228
STEP: Creating pod pod-subpath-test-hostpath-9nh7
STEP: Checking for subpath error event
STEP: All events: &EventList{ListMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ListMeta{SelfLink:/api/v1/namespaces/e2e-tests-subpath-jxlks/events,ResourceVersion:6947415,Continue:,},Items:[],}
STEP: Pod: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:pod-subpath-test-hostpath-9nh7,GenerateName:,Namespace:e2e-tests-subpath-jxlks,SelfLink:/api/v1/namespaces/e2e-tests-subpath-jxlks/pods/pod-subpath-test-hostpath-9nh7,UID:25fd13b9-c10a-11e8-844e-fa163e420595,ResourceVersion:6947368,Generation:0,CreationTimestamp:2018-09-25 21:29:56 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{test-volume {HostPathVolumeSource{Path:/tmp,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {liveness-probe-volume {nil &EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {default-token-nr2d9 {nil nil nil nil nil &SecretVolumeSource{SecretName:default-token-nr2d9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{test-container-subpath-hostpath-9nh7 gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 [] []  [] [] [] {map[] map[]} [{test-volume false /test-volume e2e-tests-subpath-jxlks <nil>} {liveness-probe-volume false /probe-volume  <nil>} {default-token-nr2d9 true /var/run/secrets/kubernetes.io/serviceaccount  <nil>}] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false} {test-container-volume-hostpath-9nh7 gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 [] []  [] [] [] {map[] map[]} [{test-volume false /test-volume  <nil>} {liveness-probe-volume false /probe-volume  <nil>} {default-token-nr2d9 true /var/run/secrets/kubernetes.io/serviceaccount  
<nil>}] [] nil nil nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],RestartPolicy:Never,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c0,c1,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[{init-volume-hostpath-9nh7 busybox [/bin/sh -ec ln -s ../ /test-volume/e2e-tests-subpath-jxlks] []  [] [] [] {map[] map[]} [{test-volume false /test-volume  <nil>} {liveness-probe-volume false /probe-volume  <nil>} {default-token-nr2d9 true /var/run/secrets/kubernetes.io/serviceaccount  <nil>}] [] nil nil nil /dev/termination-log File Always SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc4207d1f40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc4206f6000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 25 21:30:56.630: INFO: Deleting pod "pod-subpath-test-hostpath-9nh7" in namespace "e2e-tests-subpath-jxlks"
Sep 25 21:30:56.646: INFO: Wait up to 5m0s for pod "pod-subpath-test-hostpath-9nh7" to be fully deleted
[AfterEach] [Volume type: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:161
STEP: Deleting pod
Sep 25 21:30:56.651: INFO: Deleting pod "pod-subpath-test-hostpath-9nh7" in namespace "e2e-tests-subpath-jxlks"
STEP: Cleaning up volume
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-subpath-jxlks".
STEP: Found 0 events.
Sep 25 21:30:56.666: INFO: POD                                             NODE        PHASE    GRACE  CONDITIONS
Sep 25 21:30:56.666: INFO: standalone-cinder-provisioner-7d6594d789-9mtb9  k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:04 +0000 UTC  }]
Sep 25 21:30:56.667: INFO: coredns-78fcdf6894-8pv6t                        k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-22 14:34:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-22 14:34:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-22 14:34:13 +0000 UTC  }]
Sep 25 21:30:56.667: INFO: coredns-78fcdf6894-zlwt9                        k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-22 14:34:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-22 14:34:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-22 14:34:13 +0000 UTC  }]
Sep 25 21:30:56.667: INFO: etcd-k8s-master                                 k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  }]
Sep 25 21:30:56.667: INFO: kube-apiserver-k8s-master                       k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  }]
Sep 25 21:30:56.667: INFO: kube-controller-manager-k8s-master              k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  }]
Sep 25 21:30:56.667: INFO: kube-flannel-ds-7df6r                           k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-17 17:12:31 +0000 UTC  }]
Sep 25 21:30:56.667: INFO: kube-flannel-ds-j4fhd                           k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-22 07:10:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-22 07:10:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-22 07:10:39 +0000 UTC  }]
Sep 25 21:30:56.667: INFO: kube-proxy-hxp7z                                k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:51 +0000 UTC  }]
Sep 25 21:30:56.667: INFO: kube-proxy-zwrl4                                k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:37 +0000 UTC  }]
Sep 25 21:30:56.667: INFO: kube-scheduler-k8s-master                       k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-09-25 21:28:47 +0000 UTC ContainersNotReady containers with unready status: [kube-scheduler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [kube-scheduler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  }]
Sep 25 21:30:56.667: INFO: metrics-server-86bd9d7667-twb2r                 k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:45:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:45:39 +0000 UTC  }]
Sep 25 21:30:56.667: INFO:
Sep 25 21:30:56.673: INFO:
Logging node info for node k8s-master
Sep 25 21:30:56.676: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-master,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-master,UID:94f19db7-89e3-11e8-b234-fa163e420595,ResourceVersion:6947414,Generation:0,CreationTimestamp:2018-07-17 17:05:18 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: k8s-master,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"56:de:cf:52:3c:5c"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 192.168.1.108,kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{41567956992 0} {<nil>} 40593708Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4143382528 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37411161231 0} {<nil>} 37411161231 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4038524928 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-09-25 21:30:53 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-09-25 21:30:53 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientMemory kubelet has 
sufficient memory available} {DiskPressure False 2018-09-25 21:30:53 +0000 UTC 2018-09-21 21:58:19 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-09-25 21:30:53 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-09-25 21:30:53 +0000 UTC 2018-07-31 23:04:27 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 192.168.1.108} {Hostname k8s-master}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1db2c06c39a54cd3a93a4e0a44823fd6,SystemUUID:1DB2C06C-39A5-4CD3-A93A-4E0A44823FD6,BootID:ab0bf520-7681-4ed1-af06-15882503d093,KernelVersion:4.4.0-134-generic,OSImage:Ubuntu 16.04.5 LTS,ContainerRuntimeVersion:docker://1.11.2,KubeletVersion:v1.11.1,KubeProxyVersion:v1.11.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[golang:1.10] 793901893} {[gcr.io/google-samples/gb-frontend-amd64:v5] 373099368} {[k8s.gcr.io/etcd-amd64:3.2.18] 218904307} {[k8s.gcr.io/kube-apiserver-amd64:v1.11.1] 186675825} {[k8s.gcr.io/kube-controller-manager-amd64:v1.11.1] 155252555} {[k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google-samples/gb-redisslave-amd64:v2] 98945667} {[k8s.gcr.io/kube-proxy-amd64:v1.11.1] 97776424} {[k8s.gcr.io/echoserver:1.10] 95361986} {[k8s.gcr.io/nginx-slim-amd64:0.21] 95339966} {[k8s.gcr.io/kube-scheduler-amd64:v1.11.1] 56781436} {[quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/kubernetes-e2e-test-images/resource-consumer-amd64:1.3] 49707607} {[k8s.gcr.io/coredns:1.1.3] 45587362} {[gcr.io/google_containers/metrics-server-amd64:v0.2.1] 42541759} {[k8scloudprovider/cinder-provisioner:latest] 29292916} {[gcr.io/kubernetes-e2e-test-images/net-amd64:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6713741} {[gcr.io/kubernetes-e2e-test-images/redis-amd64:1.0] 5905732} 
{[gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.0] 5470001} {[gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten-amd64:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0] 1563521} {[busybox:latest] 1162769} {[k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Sep 25 21:30:56.676: INFO:
Logging kubelet events for node k8s-master
Sep 25 21:30:56.679: INFO:
Logging pods the kubelet thinks is on node k8s-master
Sep 25 21:30:56.685: INFO: metrics-server-86bd9d7667-twb2r started at 2018-08-03 08:45:39 +0000 UTC (0+1 container statuses recorded)
Sep 25 21:30:56.685: INFO:      Container metrics-server ready: true, restart count 5
Sep 25 21:30:56.685: INFO: coredns-78fcdf6894-zlwt9 started at 2018-09-22 14:34:13 +0000 UTC (0+1 container statuses recorded)
Sep 25 21:30:56.685: INFO:      Container coredns ready: true, restart count 0
Sep 25 21:30:56.685: INFO: kube-proxy-zwrl4 started at 2018-07-31 23:08:37 +0000 UTC (0+1 container statuses recorded)
Sep 25 21:30:56.685: INFO:      Container kube-proxy ready: true, restart count 10
Sep 25 21:30:56.685: INFO: standalone-cinder-provisioner-7d6594d789-9mtb9 started at 2018-08-11 12:39:04 +0000 UTC (0+1 container statuses recorded)
Sep 25 21:30:56.685: INFO:      Container standalone-cinder-provisioner ready: true, restart count 5
Sep 25 21:30:56.685: INFO: etcd-k8s-master started at <nil> (0+0 container statuses recorded)
Sep 25 21:30:56.685: INFO: kube-controller-manager-k8s-master started at <nil> (0+0 container statuses recorded)
Sep 25 21:30:56.685: INFO: kube-flannel-ds-7df6r started at 2018-07-17 17:12:31 +0000 UTC (1+1 container statuses recorded)
Sep 25 21:30:56.685: INFO:      Init container install-cni ready: true, restart count 10
Sep 25 21:30:56.685: INFO:      Container kube-flannel ready: true, restart count 10
Sep 25 21:30:56.685: INFO: kube-apiserver-k8s-master started at <nil> (0+0 container statuses recorded)
Sep 25 21:30:56.685: INFO: kube-scheduler-k8s-master started at <nil> (0+0 container statuses recorded)
Sep 25 21:30:56.732: INFO:
Latency metrics for node k8s-master
Sep 25 21:30:56.732: INFO:
Logging node info for node k8s-node01
Sep 25 21:30:56.737: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-node01,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-node01,UID:980d8d67-9515-11e8-a804-fa163e420595,ResourceVersion:6947411,Generation:0,CreationTimestamp:2018-07-31 23:01:01 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/zone: ,kubernetes.io/hostname: k8s-node01,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"06:d1:cf:44:a1:30"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 192.168.1.109,kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{41567956992 0} {<nil>} 40593708Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4143382528 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37411161231 0} {<nil>} 37411161231 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4038524928 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-09-25 21:30:51 +0000 UTC 2018-09-21 22:28:27 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-09-25 21:30:51 +0000 UTC 2018-09-21 22:28:27 +0000 UTC KubeletHasSufficientMemory kubelet has 
sufficient memory available} {DiskPressure False 2018-09-25 21:30:51 +0000 UTC 2018-09-21 22:28:27 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-09-25 21:30:51 +0000 UTC 2018-07-31 23:01:01 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-09-25 21:30:51 +0000 UTC 2018-09-21 22:28:27 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 192.168.1.109} {Hostname k8s-node01}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:817a385b9de241668e47cd87cda24f47,SystemUUID:817A385B-9DE2-4166-8E47-CD87CDA24F47,BootID:110bcdbc-2e2d-4fa2-afe3-3c832ccf29ec,KernelVersion:4.4.0-134-generic,OSImage:Ubuntu 16.04.4 LTS,ContainerRuntimeVersion:docker://1.11.2,KubeletVersion:v1.11.1,KubeProxyVersion:v1.11.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[humblec/glusterdynamic-provisioner:v1.0] 373281573} {[gcr.io/google-samples/gb-frontend-amd64:v5] 373099368} {[quay.io/kubernetes_incubator/nfs-provisioner:v1.0.9] 332415371} {[gcr.io/kubernetes-e2e-test-images/volume-nfs:0.8] 247157334} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils-amd64:1.0] 195659796} {[k8s.gcr.io/resource_consumer:beta] 132805424} {[k8s.gcr.io/nginx-slim:0.8] 110487599} {[nginx:latest] 109057266} {[<none>@<none> <none>:<none>] 108975101} {[k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google-samples/gb-redisslave-amd64:v2] 98945667} {[k8s.gcr.io/kube-proxy-amd64:v1.11.1] 97776424} {[k8s.gcr.io/kube-proxy-amd64:v1.11.0] 97772373} {[k8s.gcr.io/echoserver:1.10] 95361986} {[k8s.gcr.io/nginx-slim-amd64:0.21] 95339966} {[quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/kubernetes-e2e-test-images/resource-consumer-amd64:1.3 gcr.io/kubernetes-e2e-test-images/resource-consumer:1.3] 49707607} {[quay.io/k8scsi/csi-attacher:v0.2.0] 45644524} {[k8s.gcr.io/coredns:1.1.3] 45587362} 
{[quay.io/k8scsi/csi-provisioner:v0.2.1] 45078229} {[gcr.io/google_containers/metrics-server-amd64:v0.2.1] 42541759} {[quay.io/k8scsi/driver-registrar:v0.2.0] 42385441} {[k8scloudprovider/cinder-provisioner:latest] 28582964} {[gcr.io/kubernetes-e2e-test-images/nettest-amd64:1.0] 27413498} {[quay.io/k8scsi/hostpathplugin:v0.2.0] 17287699} {[gcr.io/kubernetes-e2e-test-images/net-amd64:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/dnsutils-amd64:1.0] 9030162} {[gcr.io/kubernetes-e2e-test-images/hostexec-amd64:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6713741} {[gcr.io/kubernetes-e2e-test-images/redis-amd64:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/resource-consumer/controller-amd64:1.0] 5902947} {[gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.0] 5470001} {[gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten-amd64:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver-amd64:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter-amd64:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness-amd64:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/fakegitserver-amd64:1.0] 4608683} {[k8s.gcr.io/k8s-dns-dnsmasq-amd64:1.14.5] 4324973} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester-amd64:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/port-forward-tester-amd64:1.0] 1992230} {[gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user-amd64:1.0] 1450451} {[busybox:latest] 1162769} {[k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Sep 25 21:30:56.737: INFO:
Logging kubelet events for node k8s-node01
Sep 25 21:30:56.741: INFO:
Logging pods the kubelet thinks is on node k8s-node01
Sep 25 21:30:56.751: INFO: kube-proxy-hxp7z started at 2018-07-31 23:08:51 +0000 UTC (0+1 container statuses recorded)
Sep 25 21:30:56.751: INFO:      Container kube-proxy ready: true, restart count 4
Sep 25 21:30:56.751: INFO: coredns-78fcdf6894-8pv6t started at 2018-09-22 14:34:13 +0000 UTC (0+1 container statuses recorded)
Sep 25 21:30:56.751: INFO:      Container coredns ready: true, restart count 0
Sep 25 21:30:56.751: INFO: kube-flannel-ds-j4fhd started at 2018-09-22 07:10:39 +0000 UTC (1+1 container statuses recorded)
Sep 25 21:30:56.751: INFO:      Init container install-cni ready: true, restart count 0
Sep 25 21:30:56.751: INFO:      Container kube-flannel ready: true, restart count 0
Sep 25 21:30:56.864: INFO:
Latency metrics for node k8s-node01
STEP: Dumping a list of prepulled images on each node...
Sep 25 21:30:56.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-jxlks" for this suite.
Sep 25 21:31:02.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 25 21:31:03.104: INFO: namespace: e2e-tests-subpath-jxlks, resource: bindings, ignored listing per whitelist
Sep 25 21:31:03.115: INFO: namespace e2e-tests-subpath-jxlks deletion completed in 6.220946134s

• Failure [66.581 seconds]
[sig-storage] Subpath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Volume type: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:148
    should fail if subpath with backstepping is outside the volume [Slow] [It]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:228

    no events found
    Expected
        <int>: 0
    not to equal
        <int>: 0

    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:542
------------------------------
oomichi commented 5 years ago
Pod: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:pod-subpath-test-hostpath-9nh7,GenerateName:,Namespace:e2e-tests-subpath-jxlks,SelfLink:/api/v1/namespaces/e2e-tests-subpath-jxlks/pods/pod-subpath-test-hostpath-9nh7,UID:25fd13b9-c10a-11e8-844e-fa163e420595,ResourceVersion:6947368,Generation:0,CreationTimestamp:2018-09-25 21:29:56 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{test-volume {HostPathVolumeSource{Path:/tmp,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {liveness-probe-volume {nil &EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {default-token-nr2d9 {nil nil nil nil nil &SecretVolumeSource{SecretName:default-token-nr2d9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{test-container-subpath-hostpath-9nh7 gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 [] []  [] [] [] {map[] map[]} [{test-volume false /test-volume e2e-tests-subpath-jxlks <nil>} {liveness-probe-volume false /probe-volume  <nil>} {default-token-nr2d9 true /var/run/secrets/kubernetes.io/serviceaccount  <nil>}] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false} {test-container-volume-hostpath-9nh7 gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 [] []  [] [] [] {map[] map[]} [{test-volume false /test-volume  <nil>} {liveness-probe-volume false /probe-volume  <nil>} {default-token-nr2d9 true /var/run/secrets/kubernetes.io/serviceaccount  <nil>}] [] 
nil nil nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],RestartPolicy:Never,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c0,c1,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[{init-volume-hostpath-9nh7 busybox [/bin/sh -ec ln -s ../ /test-volume/e2e-tests-subpath-jxlks] []  [] [] [] {map[] map[]} [{test-volume false /test-volume  <nil>} {liveness-probe-volume false /probe-volume  <nil>} {default-token-nr2d9 true /var/run/secrets/kubernetes.io/serviceaccount  <nil>}] [] nil nil nil /dev/termination-log File Always SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc4207d1f40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc4206f6000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},
Status:PodStatus{
    Phase:Pending,
    Conditions:[],
    Message:,
    Reason:,
    HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],
    QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,
},}
oomichi commented 5 years ago

In the end, the WaitTimeoutForPodRunningInNamespace call in this test does not actually wait for the pod to become Running; it appears to simply wait out the one-minute timeout while the pod stays Pending. Even when the timeout occurs, WaitTimeoutForPodRunningInNamespace itself does not return an error.

oomichi commented 5 years ago

Collected logs → even in the failure case, the one-minute timeout wait is performed properly, so this is not a problem in WaitTimeoutForPodRunningInNamespace. → The real problem is that no events at all are registered with the Event API.
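The failing assertion at subpath.go:542 amounts to requiring at least one event in the test namespace; with an empty event list it degenerates into the confusing "Expected 0 not to equal 0" message seen above. A minimal sketch of that check (hypothetical: `eventList` and `checkSubpathFailureEvent` stand in for the real Event API list and the Gomega assertion):

```go
package main

import "fmt"

// eventList stands in for v1.EventList; the real test lists events
// via the Event API and asserts the list is non-empty.
type eventList struct{ Items []string }

// checkSubpathFailureEvent mirrors the failing assertion: when the
// namespace has zero events, the "not to equal 0" check fires.
func checkSubpathFailureEvent(list eventList) error {
	if len(list.Items) == 0 {
		return fmt.Errorf("no events found: Expected <int>: %d not to equal <int>: 0",
			len(list.Items))
	}
	return nil
}

func main() {
	// Empty list, as in the log ("Found 0 events.").
	fmt.Println(checkSubpathFailureEvent(eventList{}))
}
```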

[sig-storage] Subpath [Volume type: hostPath]
  should fail if subpath with backstepping is outside the volume [Slow]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:228
[BeforeEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Sep 26 00:03:07.490: INFO: >>> kubeConfig: /home/ubuntu/admin.conf
STEP: Building a namespace api object
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [Volume type: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:149
STEP: Initializing hostPath volume
[It] should fail if subpath with backstepping is outside the volume [Slow]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:228
STEP: Creating pod pod-subpath-test-hostpath-rnxd
Sep 26 00:03:07.586: INFO: pod.Status.Phase: Pending
Sep 26 00:03:09.596: INFO: pod.Status.Phase: Pending
Sep 26 00:03:11.598: INFO: pod.Status.Phase: Pending
Sep 26 00:03:13.598: INFO: pod.Status.Phase: Pending
Sep 26 00:03:15.598: INFO: pod.Status.Phase: Pending
Sep 26 00:03:17.598: INFO: pod.Status.Phase: Pending
Sep 26 00:03:19.598: INFO: pod.Status.Phase: Pending
Sep 26 00:03:21.599: INFO: pod.Status.Phase: Pending
Sep 26 00:03:23.598: INFO: pod.Status.Phase: Pending
Sep 26 00:03:25.591: INFO: pod.Status.Phase: Pending
Sep 26 00:03:27.599: INFO: pod.Status.Phase: Pending
Sep 26 00:03:29.600: INFO: pod.Status.Phase: Pending
Sep 26 00:03:31.593: INFO: pod.Status.Phase: Pending
Sep 26 00:03:33.590: INFO: pod.Status.Phase: Pending
Sep 26 00:03:35.590: INFO: pod.Status.Phase: Pending
Sep 26 00:03:37.597: INFO: pod.Status.Phase: Pending
Sep 26 00:03:39.603: INFO: pod.Status.Phase: Pending
Sep 26 00:03:41.599: INFO: pod.Status.Phase: Pending
Sep 26 00:03:43.598: INFO: pod.Status.Phase: Pending
Sep 26 00:03:45.597: INFO: pod.Status.Phase: Pending
Sep 26 00:03:47.602: INFO: pod.Status.Phase: Pending
Sep 26 00:03:49.599: INFO: pod.Status.Phase: Pending
Sep 26 00:03:51.602: INFO: pod.Status.Phase: Pending
Sep 26 00:03:53.604: INFO: pod.Status.Phase: Pending
Sep 26 00:03:55.598: INFO: pod.Status.Phase: Pending
Sep 26 00:03:57.600: INFO: pod.Status.Phase: Pending
Sep 26 00:03:59.598: INFO: pod.Status.Phase: Pending
Sep 26 00:04:01.600: INFO: pod.Status.Phase: Pending
Sep 26 00:04:03.597: INFO: pod.Status.Phase: Pending
Sep 26 00:04:05.599: INFO: pod.Status.Phase: Pending
Sep 26 00:04:07.601: INFO: pod.Status.Phase: Pending
Sep 26 00:04:07.606: INFO: pod.Status.Phase: Pending
STEP: Checking for subpath error event
STEP: All events: &EventList{ListMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ListMeta{SelfLink:/api/v1/namespaces/e2e-tests-subpath-p7762/events,ResourceVersion:6957771,Continue:,},Items:[],}
STEP: Pod: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:pod-subpath-test-hostpath-rnxd,GenerateName:,Namespace:e2e-tests-subpath-p7762,SelfLink:/api/v1/namespaces/e2e-tests-subpath-p7762/pods/pod-subpath-test-hostpath-rnxd,UID:8c4084b2-c11f-11e8-844e-fa163e420595,ResourceVersion:6957721,Generation:0,CreationTimestamp:2018-09-26 00:03:07 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{test-volume {HostPathVolumeSource{Path:/tmp,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {liveness-probe-volume {nil &EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {default-token-mjmwc {nil nil nil nil nil &SecretVolumeSource{SecretName:default-token-mjmwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{test-container-subpath-hostpath-rnxd gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 [] []  [] [] [] {map[] map[]} [{test-volume false /test-volume e2e-tests-subpath-p7762 <nil>} {liveness-probe-volume false /probe-volume  <nil>} {default-token-mjmwc true /var/run/secrets/kubernetes.io/serviceaccount  <nil>}] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false} {test-container-volume-hostpath-rnxd gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0 [] []  [] [] [] {map[] map[]} [{test-volume false /test-volume  <nil>} {liveness-probe-volume false /probe-volume  <nil>} {default-token-mjmwc true /var/run/secrets/kubernetes.io/serviceaccount  
<nil>}] [] nil nil nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],RestartPolicy:Never,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:&SELinuxOptions{User:,Role:,Type:,Level:s0:c0,c1,},RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[{init-volume-hostpath-rnxd busybox [/bin/sh -ec ln -s ../ /test-volume/e2e-tests-subpath-p7762] []  [] [] [] {map[] map[]} [{test-volume false /test-volume  <nil>} {liveness-probe-volume false /probe-volume  <nil>} {default-token-mjmwc true /var/run/secrets/kubernetes.io/serviceaccount  <nil>}] [] nil nil nil /dev/termination-log File Always SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc42099c520} {node.kubernetes.io/unreachable Exists  NoExecute 0xc42099c540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Sep 26 00:04:07.620: INFO: Deleting pod "pod-subpath-test-hostpath-rnxd" in namespace "e2e-tests-subpath-p7762"
Sep 26 00:04:07.633: INFO: Wait up to 5m0s for pod "pod-subpath-test-hostpath-rnxd" to be fully deleted
[AfterEach] [Volume type: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:161
STEP: Deleting pod
Sep 26 00:04:07.637: INFO: Deleting pod "pod-subpath-test-hostpath-rnxd" in namespace "e2e-tests-subpath-p7762"
STEP: Cleaning up volume
[AfterEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-subpath-p7762".
STEP: Found 0 events.
Sep 26 00:04:07.654: INFO: POD                                             NODE        PHASE    GRACE  CONDITIONS
Sep 26 00:04:07.654: INFO: standalone-cinder-provisioner-7d6594d789-9mtb9  k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:04 +0000 UTC  }]
Sep 26 00:04:07.654: INFO: coredns-78fcdf6894-8pv6t                        k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-22 14:34:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-22 14:34:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-22 14:34:13 +0000 UTC  }]
Sep 26 00:04:07.654: INFO: coredns-78fcdf6894-zlwt9                        k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-22 14:34:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-22 14:34:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-22 14:34:13 +0000 UTC  }]
Sep 26 00:04:07.654: INFO: etcd-k8s-master                                 k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  }]
Sep 26 00:04:07.654: INFO: kube-apiserver-k8s-master                       k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  }]
Sep 26 00:04:07.654: INFO: kube-controller-manager-k8s-master              k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  }]
Sep 26 00:04:07.654: INFO: kube-flannel-ds-7df6r                           k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-17 17:12:31 +0000 UTC  }]
Sep 26 00:04:07.654: INFO: kube-flannel-ds-j4fhd                           k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-22 07:10:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-22 07:10:41 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-22 07:10:39 +0000 UTC  }]
Sep 26 00:04:07.654: INFO: kube-proxy-hxp7z                                k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:51 +0000 UTC  }]
Sep 26 00:04:07.654: INFO: kube-proxy-zwrl4                                k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:37 +0000 UTC  }]
Sep 26 00:04:07.654: INFO: kube-scheduler-k8s-master                       k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-09-26 00:03:07 +0000 UTC ContainersNotReady containers with unready status: [kube-scheduler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [kube-scheduler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  }]
Sep 26 00:04:07.654: INFO: metrics-server-86bd9d7667-twb2r                 k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:45:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:45:39 +0000 UTC  }]
Sep 26 00:04:07.654: INFO:
Sep 26 00:04:07.658: INFO:
Logging node info for node k8s-master
Sep 26 00:04:07.662: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-master,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-master,UID:94f19db7-89e3-11e8-b234-fa163e420595,ResourceVersion:6957769,Generation:0,CreationTimestamp:2018-07-17 17:05:18 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: k8s-master,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"56:de:cf:52:3c:5c"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 192.168.1.108,kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{41567956992 0} {<nil>} 40593708Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4143382528 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37411161231 0} {<nil>} 37411161231 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4038524928 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-09-26 00:04:05 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-09-26 00:04:05 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientMemory kubelet has 
sufficient memory available} {DiskPressure False 2018-09-26 00:04:05 +0000 UTC 2018-09-21 21:58:19 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-09-26 00:04:05 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-09-26 00:04:05 +0000 UTC 2018-07-31 23:04:27 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 192.168.1.108} {Hostname k8s-master}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1db2c06c39a54cd3a93a4e0a44823fd6,SystemUUID:1DB2C06C-39A5-4CD3-A93A-4E0A44823FD6,BootID:ab0bf520-7681-4ed1-af06-15882503d093,KernelVersion:4.4.0-134-generic,OSImage:Ubuntu 16.04.5 LTS,ContainerRuntimeVersion:docker://1.11.2,KubeletVersion:v1.11.1,KubeProxyVersion:v1.11.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[golang:1.10] 793901893} {[gcr.io/google-samples/gb-frontend-amd64:v5] 373099368} {[k8s.gcr.io/etcd-amd64:3.2.18] 218904307} {[k8s.gcr.io/kube-apiserver-amd64:v1.11.1] 186675825} {[k8s.gcr.io/kube-controller-manager-amd64:v1.11.1] 155252555} {[k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google-samples/gb-redisslave-amd64:v2] 98945667} {[k8s.gcr.io/kube-proxy-amd64:v1.11.1] 97776424} {[k8s.gcr.io/echoserver:1.10] 95361986} {[k8s.gcr.io/nginx-slim-amd64:0.21] 95339966} {[k8s.gcr.io/kube-scheduler-amd64:v1.11.1] 56781436} {[quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/kubernetes-e2e-test-images/resource-consumer-amd64:1.3] 49707607} {[k8s.gcr.io/coredns:1.1.3] 45587362} {[gcr.io/google_containers/metrics-server-amd64:v0.2.1] 42541759} {[k8scloudprovider/cinder-provisioner:latest] 29292916} {[gcr.io/kubernetes-e2e-test-images/net-amd64:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6713741} {[gcr.io/kubernetes-e2e-test-images/redis-amd64:1.0] 5905732} 
{[gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.0] 5470001} {[gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten-amd64:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0] 1563521} {[busybox:latest] 1162769} {[k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Sep 26 00:04:07.662: INFO:
Logging kubelet events for node k8s-master
Sep 26 00:04:07.667: INFO:
Logging pods the kubelet thinks is on node k8s-master
Sep 26 00:04:07.674: INFO: etcd-k8s-master started at <nil> (0+0 container statuses recorded)
Sep 26 00:04:07.674: INFO: kube-controller-manager-k8s-master started at <nil> (0+0 container statuses recorded)
Sep 26 00:04:07.674: INFO: kube-flannel-ds-7df6r started at 2018-07-17 17:12:31 +0000 UTC (1+1 container statuses recorded)
Sep 26 00:04:07.674: INFO:      Init container install-cni ready: true, restart count 10
Sep 26 00:04:07.674: INFO:      Container kube-flannel ready: true, restart count 10
Sep 26 00:04:07.674: INFO: kube-apiserver-k8s-master started at <nil> (0+0 container statuses recorded)
Sep 26 00:04:07.674: INFO: kube-scheduler-k8s-master started at <nil> (0+0 container statuses recorded)
Sep 26 00:04:07.674: INFO: metrics-server-86bd9d7667-twb2r started at 2018-08-03 08:45:39 +0000 UTC (0+1 container statuses recorded)
Sep 26 00:04:07.674: INFO:      Container metrics-server ready: true, restart count 5
Sep 26 00:04:07.674: INFO: coredns-78fcdf6894-zlwt9 started at 2018-09-22 14:34:13 +0000 UTC (0+1 container statuses recorded)
Sep 26 00:04:07.674: INFO:      Container coredns ready: true, restart count 0
Sep 26 00:04:07.674: INFO: kube-proxy-zwrl4 started at 2018-07-31 23:08:37 +0000 UTC (0+1 container statuses recorded)
Sep 26 00:04:07.674: INFO:      Container kube-proxy ready: true, restart count 10
Sep 26 00:04:07.674: INFO: standalone-cinder-provisioner-7d6594d789-9mtb9 started at 2018-08-11 12:39:04 +0000 UTC (0+1 container statuses recorded)
Sep 26 00:04:07.674: INFO:      Container standalone-cinder-provisioner ready: true, restart count 5
Sep 26 00:04:07.704: INFO:
Latency metrics for node k8s-master
Sep 26 00:04:07.704: INFO:
Logging node info for node k8s-node01
Sep 26 00:04:07.708: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-node01,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-node01,UID:980d8d67-9515-11e8-a804-fa163e420595,ResourceVersion:6957770,Generation:0,CreationTimestamp:2018-07-31 23:01:01 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/zone: ,kubernetes.io/hostname: k8s-node01,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"06:d1:cf:44:a1:30"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 192.168.1.109,kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{41567956992 0} {<nil>} 40593708Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4143382528 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37411161231 0} {<nil>} 37411161231 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4038524928 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-09-26 00:04:06 +0000 UTC 2018-09-21 22:28:27 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-09-26 00:04:06 +0000 UTC 2018-09-21 22:28:27 +0000 UTC KubeletHasSufficientMemory kubelet has 
sufficient memory available} {DiskPressure False 2018-09-26 00:04:06 +0000 UTC 2018-09-21 22:28:27 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-09-26 00:04:06 +0000 UTC 2018-07-31 23:01:01 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-09-26 00:04:06 +0000 UTC 2018-09-21 22:28:27 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 192.168.1.109} {Hostname k8s-node01}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:817a385b9de241668e47cd87cda24f47,SystemUUID:817A385B-9DE2-4166-8E47-CD87CDA24F47,BootID:110bcdbc-2e2d-4fa2-afe3-3c832ccf29ec,KernelVersion:4.4.0-134-generic,OSImage:Ubuntu 16.04.4 LTS,ContainerRuntimeVersion:docker://1.11.2,KubeletVersion:v1.11.1,KubeProxyVersion:v1.11.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[humblec/glusterdynamic-provisioner:v1.0] 373281573} {[gcr.io/google-samples/gb-frontend-amd64:v5] 373099368} {[quay.io/kubernetes_incubator/nfs-provisioner:v1.0.9] 332415371} {[gcr.io/kubernetes-e2e-test-images/volume-nfs:0.8] 247157334} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils-amd64:1.0] 195659796} {[k8s.gcr.io/resource_consumer:beta] 132805424} {[k8s.gcr.io/nginx-slim:0.8] 110487599} {[nginx:latest] 109057266} {[<none>@<none> <none>:<none>] 108975101} {[k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google-samples/gb-redisslave-amd64:v2] 98945667} {[k8s.gcr.io/kube-proxy-amd64:v1.11.1] 97776424} {[k8s.gcr.io/kube-proxy-amd64:v1.11.0] 97772373} {[k8s.gcr.io/echoserver:1.10] 95361986} {[k8s.gcr.io/nginx-slim-amd64:0.21] 95339966} {[quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/kubernetes-e2e-test-images/resource-consumer-amd64:1.3 gcr.io/kubernetes-e2e-test-images/resource-consumer:1.3] 49707607} {[quay.io/k8scsi/csi-attacher:v0.2.0] 45644524} {[k8s.gcr.io/coredns:1.1.3] 45587362} 
{[quay.io/k8scsi/csi-provisioner:v0.2.1] 45078229} {[gcr.io/google_containers/metrics-server-amd64:v0.2.1] 42541759} {[quay.io/k8scsi/driver-registrar:v0.2.0] 42385441} {[k8scloudprovider/cinder-provisioner:latest] 28582964} {[gcr.io/kubernetes-e2e-test-images/nettest-amd64:1.0] 27413498} {[quay.io/k8scsi/hostpathplugin:v0.2.0] 17287699} {[gcr.io/kubernetes-e2e-test-images/net-amd64:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/dnsutils-amd64:1.0] 9030162} {[gcr.io/kubernetes-e2e-test-images/hostexec-amd64:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6713741} {[gcr.io/kubernetes-e2e-test-images/redis-amd64:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/resource-consumer/controller-amd64:1.0] 5902947} {[gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.0] 5470001} {[gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten-amd64:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver-amd64:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter-amd64:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness-amd64:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/fakegitserver-amd64:1.0] 4608683} {[k8s.gcr.io/k8s-dns-dnsmasq-amd64:1.14.5] 4324973} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester-amd64:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/port-forward-tester-amd64:1.0] 1992230} {[gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user-amd64:1.0] 1450451} {[busybox:latest] 1162769} {[k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Sep 26 00:04:07.708: INFO:
Logging kubelet events for node k8s-node01
Sep 26 00:04:07.712: INFO:
Logging pods the kubelet thinks is on node k8s-node01
Sep 26 00:04:07.724: INFO: kube-proxy-hxp7z started at 2018-07-31 23:08:51 +0000 UTC (0+1 container statuses recorded)
Sep 26 00:04:07.724: INFO:      Container kube-proxy ready: true, restart count 4
Sep 26 00:04:07.724: INFO: coredns-78fcdf6894-8pv6t started at 2018-09-22 14:34:13 +0000 UTC (0+1 container statuses recorded)
Sep 26 00:04:07.724: INFO:      Container coredns ready: true, restart count 0
Sep 26 00:04:07.724: INFO: kube-flannel-ds-j4fhd started at 2018-09-22 07:10:39 +0000 UTC (1+1 container statuses recorded)
Sep 26 00:04:07.724: INFO:      Init container install-cni ready: true, restart count 0
Sep 26 00:04:07.724: INFO:      Container kube-flannel ready: true, restart count 0
Sep 26 00:04:07.769: INFO:
Latency metrics for node k8s-node01
STEP: Dumping a list of prepulled images on each node...
Sep 26 00:04:07.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-p7762" for this suite.
Sep 26 00:04:13.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 26 00:04:13.976: INFO: namespace: e2e-tests-subpath-p7762, resource: bindings, ignored listing per whitelist
Sep 26 00:04:14.001: INFO: namespace e2e-tests-subpath-p7762 deletion completed in 6.223743115s

• Failure [66.511 seconds]
[sig-storage] Subpath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Volume type: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:148
    should fail if subpath with backstepping is outside the volume [Slow] [It]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:228

    no events found
    Expected
        <int>: 0
    not to equal
        <int>: 0

    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:542
------------------------------
oomichi commented 5 years ago

Status of a healthy Pod:

Status:PodStatus{
    Phase:Pending,
    Conditions:[
        {Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-25 23:45:50 +0000 UTC  }
        {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-09-25 23:45:45 +0000 UTC ContainersNotReady containers with unready status: [test-container-subpath-hostpath-ztg9 test-container-volume-hostpath-ztg9]}
        {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [test-container-subpath-hostpath-ztg9 test-container-volume-hostpath-ztg9]}
        {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-25 23:45:45 +0000 UTC  }
    ],
    Message:,
    Reason:,
    HostIP:192.168.1.109,
    PodIP:10.244.1.68,
    StartTime:2018-09-25 23:45:45 +0000 UTC,
    ContainerStatuses:[
        {
            test-container-subpath-hostpath-ztg9 {
                ContainerStateWaiting{Reason:CreateContainerConfigError,Message:failed to prepare subPath for volumeMount "test-volume" of container "test-container-subpath-hostpath-ztg9",} nil nil
            }
            {nil nil nil} false 0 gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0
        }
        {
            test-container-volume-hostpath-ztg9 {
                nil nil &ContainerStateTerminated{
                    ExitCode:0,
                    Signal:0,
                    Reason:Completed,
                    Message:,StartedAt:2018-09-25 23:45:50 +0000 UTC,FinishedAt:2018-09-25 23:45:50 +0000 UTC,ContainerID:docker://15408f4cc63d4f59982b7dc8d79673344f633a14b6b4c76db13828efbc925c66,
                }
            } {nil nil nil} false 0 gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0
            docker://sha256:e61457c2c65d4c706226ea86659b67046ed10b1ecfd06f163ede92912f0beae5 docker://15408f4cc63d4f59982b7dc8d79673344f633a14b6b4c76db13828efbc925c66
        }
    ],
    QOSClass:BestEffort,
    InitContainerStatuses:[
        {
            init-volume-hostpath-ztg9 {
                nil nil
                ContainerStateTerminated{
                    ExitCode:0,
                    Signal:0,
                    Reason:Completed,
                    Message:,
                    StartedAt:2018-09-25 23:45:49 +0000 UTC,
                    FinishedAt:2018-09-25 23:45:50 +0000 UTC,
                    ContainerID:docker://e7c69cd099964a3847b2e3ddc1a9613b77d76a18da6110202ab0183f47cb6851,
                }
            } {nil nil nil} true 0 busybox:latest docker://sha256:e1ddd7948a1c31709a23cc5b7dfe96e55fc364f90e1cebcde0773a1b5a30dcda docker://e7c69cd099964a3847b2e3ddc1a9613b77d76a18da6110202ab0183f47cb6851
        }
    ],
    NominatedNodeName:,
}

Status of the failing Pod:

Status:PodStatus{
    Phase:Pending,
    Conditions:[],
    Message:,
    Reason:,
    HostIP:,
    PodIP:,
    StartTime:<nil>,
    ContainerStatuses:[],
    QOSClass:BestEffort,
    InitContainerStatuses:[],
    NominatedNodeName:,
}

This shows the pod was never scheduled at all. Checking the kube-scheduler logs → there are no logs at all…
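The difference between the two statuses above can be checked mechanically: a pod the scheduler has acted on carries a `PodScheduled=True` condition, while the failing pod's status has an empty condition list. A minimal sketch (not the e2e framework's actual code) of that check, using plain dicts shaped like the status dumps above:

```python
def was_scheduled(pod_status: dict) -> bool:
    """Return True if the scheduler has assigned this pod to a node."""
    for cond in pod_status.get("conditions", []):
        if cond.get("type") == "PodScheduled" and cond.get("status") == "True":
            return True
    return False

# Shaped like the two status dumps above (fields trimmed for brevity).
healthy = {"phase": "Pending",
           "conditions": [{"type": "PodScheduled", "status": "True"},
                          {"type": "Ready", "status": "False"}]}
broken = {"phase": "Pending", "conditions": []}  # empty, like the failing pod

print(was_scheduled(healthy))  # True
print(was_scheduled(broken))   # False
```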

oomichi commented 5 years ago

So what is the failure pattern? Does the test fail when the pod gets scheduled onto k8s-master?

(e.g. in a k8s-master-only environment)

oomichi commented 5 years ago

Why does the test run end up waiting for kube-scheduler to start in the first place?

Sep 26 23:52:23.134: INFO: 10 / 11 pods in namespace 'kube-system' are running and ready (152 seconds elapsed)
Sep 26 23:52:23.134: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Sep 26 23:52:23.134: INFO: POD                        NODE        PHASE    GRACE  CONDITIONS
Sep 26 23:52:23.134: INFO: kube-scheduler-k8s-master  k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2018-09-26 23:47:18 +0000 UTC ContainersNotReady containers with unready status: [kube-scheduler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC ContainersNotReady containers with unready status: [kube-scheduler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-09-21 22:28:11 +0000 UTC  }]
Sep 26 23:52:23.134: INFO:
Sep 26 23:52:25.116: INFO: The status of Pod kube-scheduler-k8s-master is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed

State of the scheduler Pod at that time:

$ kubectl describe pod kube-scheduler-k8s-master -n kube-system
Name:               kube-scheduler-k8s-master
Namespace:          kube-system
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               k8s-master/192.168.1.108
Start Time:         Fri, 21 Sep 2018 22:28:11 +0000
Labels:             component=kube-scheduler
                    tier=control-plane
Annotations:        kubernetes.io/config.hash=eea6acc984051f7a71e306c455684ada
                    kubernetes.io/config.mirror=eea6acc984051f7a71e306c455684ada
                    kubernetes.io/config.seen=2018-08-22T02:30:48.136114511Z
                    kubernetes.io/config.source=file
                    scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 192.168.1.108
Containers:
  kube-scheduler:
    Container ID:  docker://5d51d5f5f8721543c452366615761c4ccd6229178f6a95159b98b072c5a80f09
    Image:         k8s.gcr.io/kube-scheduler-amd64:v1.11.1
    Image ID:      docker://sha256:272b3a60cd68df630383440192659a313582d35d0d615b4fc0afad9650a8612d
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-scheduler
      --address=192.168.1.108
      --kubeconfig=/etc/kubernetes/scheduler.conf
      --leader-elect=true
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Wed, 26 Sep 2018 23:45:47 +0000
      Finished:     Wed, 26 Sep 2018 23:47:17 +0000
    Ready:          False
    Restart Count:  12762
    Requests:
      cpu:        100m
    Liveness:     http-get http://127.0.0.1:10251/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:  <none>
    Mounts:
      /etc/kubernetes/scheduler.conf from kubeconfig (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/scheduler.conf
    HostPathType:  FileOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:            <none>

CrashLoopBackOff: the kubelet has detected that a container process in the Pod keeps exiting, and is repeatedly restarting the container.
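While in CrashLoopBackOff the kubelet waits an exponentially increasing delay before each restart. A sketch of that behaviour, assuming the commonly documented defaults (start at 10s, double per crash, cap at 5 minutes); the exact numbers are kubelet implementation details, not taken from this log:

```python
def restart_backoff(crash_count: int, base: float = 10.0, cap: float = 300.0) -> float:
    """Delay in seconds before the kubelet retries the crashed container.

    Assumed defaults: base=10s, doubling per crash, capped at 300s (5m).
    """
    return min(base * (2 ** crash_count), cap)

delays = [restart_backoff(n) for n in range(6)]
print(delays)  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]
```

At a restart count of 12762 the pod has long since been pinned at the maximum delay, which matches the ~90-second gap between `Started` and `Finished` plus the back-off seen in the describe output.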

oomichi commented 5 years ago

Is it because kube-scheduler-k8s-master restarts during the test!? → That seems right; the test fails while kube-scheduler-k8s-master is in this state:

kube-scheduler-k8s-master            0/1       CrashLoopBackOff   12768      35d

When the test is run on its own, a pre-check that all pods (including kube-scheduler-k8s-master) exist and are ready runs beforehand, so the problem apparently does not occur.

oomichi commented 5 years ago

Investigating why kube-scheduler-k8s-master keeps restarting. The logs show nothing useful…

$ sudo cat kube-scheduler-k8s-master_kube-system_kube-scheduler-55db404d86bc734f11c087cca3faa47bca1ed79fa36c5d1aa5825b8e7d963744.log
{"log":"I0927 00:10:17.340090       1 server.go:126] Version: v1.11.1\n","stream":"stderr","time":"2018-09-27T00:10:17.340916816Z"}
{"log":"W0927 00:10:17.343198       1 authorization.go:47] Authorization is disabled\n","stream":"stderr","time":"2018-09-27T00:10:17.343340749Z"}
{"log":"W0927 00:10:17.343246       1 authentication.go:55] Authentication is disabled\n","stream":"stderr","time":"2018-09-27T00:10:17.343367205Z"}
{"log":"I0927 00:10:17.343413       1 insecure_serving.go:47] Serving healthz insecurely on 192.168.1.108:10251\n","stream":"stderr","time":"2018-09-27T00:10:17.34348187Z"}
{"log":"I0927 00:10:18.248174       1 controller_utils.go:1025] Waiting for caches to sync for scheduler controller\n","stream":"stderr","time":"2018-09-27T00:10:18.248595552Z"}
{"log":"I0927 00:10:18.349345       1 controller_utils.go:1032] Caches are synced for scheduler controller\n","stream":"stderr","time":"2018-09-27T00:10:18.349649673Z"}
{"log":"I0927 00:10:18.349434       1 leaderelection.go:185] attempting to acquire leader lease  kube-system/kube-scheduler...\n","stream":"stderr","time":"2018-09-27T00:10:18.349714273Z"}
{"log":"I0927 00:10:36.283908       1 leaderelection.go:194] successfully acquired lease kube-system/kube-scheduler\n","stream":"stderr","time":"2018-09-27T00:10:36.284022717Z"}
$ sudo cat kube-scheduler-k8s-master_kube-system_kube-scheduler-773276fec0be862d78abcd1f86e8d52819a1b40f6915a6194c8e5ae896a83536.log
{"log":"I0927 00:16:59.173860       1 server.go:126] Version: v1.11.1\n","stream":"stderr","time":"2018-09-27T00:16:59.174032201Z"}
{"log":"W0927 00:16:59.175933       1 authorization.go:47] Authorization is disabled\n","stream":"stderr","time":"2018-09-27T00:16:59.176277584Z"}
{"log":"W0927 00:16:59.175952       1 authentication.go:55] Authentication is disabled\n","stream":"stderr","time":"2018-09-27T00:16:59.176301984Z"}
{"log":"I0927 00:16:59.175962       1 insecure_serving.go:47] Serving healthz insecurely on 192.168.1.108:10251\n","stream":"stderr","time":"2018-09-27T00:16:59.176308189Z"}
{"log":"I0927 00:17:00.082079       1 controller_utils.go:1025] Waiting for caches to sync for scheduler controller\n","stream":"stderr","time":"2018-09-27T00:17:00.08222978Z"}
{"log":"I0927 00:17:00.182534       1 controller_utils.go:1032] Caches are synced for scheduler controller\n","stream":"stderr","time":"2018-09-27T00:17:00.18379388Z"}
{"log":"I0927 00:17:00.182739       1 leaderelection.go:185] attempting to acquire leader lease  kube-system/kube-scheduler...\n","stream":"stderr","time":"2018-09-27T00:17:00.1839
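The raw files above are in Docker's json-file log format: one JSON object per line with `log`, `stream`, and `time` keys. A small sketch to strip that wrapper for easier reading (note the last captured line is truncated, so a robust parser should tolerate malformed lines):

```python
import json

def parse_docker_log(raw: str) -> list[str]:
    """Return the plain 'log' payloads of well-formed json-file log lines."""
    messages = []
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # e.g. a truncated trailing line, as in the capture above
        messages.append(entry["log"].rstrip("\n"))
    return messages

# One line shaped like the scheduler log above, plus a truncated fragment.
sample = ('{"log":"I0927 00:10:17.340090 1 server.go:126] Version: v1.11.1\\n",'
          '"stream":"stderr","time":"2018-09-27T00:10:17.340916816Z"}\n'
          '{"log":"I0927 00:17:00.182739 1 leaderel')
print(parse_docker_log(sample))
```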
oomichi commented 5 years ago

Hmm, the environment itself has become suspect. Better to rebuild from https://github.com/kubernetes/kubernetes/releases/tag/v1.11.3 and retest. After a clean install of v1.11.3, kube-scheduler is healthy:

$ kubectl get pods -n kube-system
NAME                                 READY     STATUS    RESTARTS   AGE
coredns-78fcdf6894-2h9l4             1/1       Running   0          18m
coredns-78fcdf6894-876qz             1/1       Running   0          18m
etcd-113-master                      1/1       Running   0          17m
kube-apiserver-113-master            1/1       Running   0          17m
kube-controller-manager-113-master   1/1       Running   0          17m
kube-flannel-ds-6xkgf                1/1       Running   0          9m
kube-flannel-ds-fqwrc                1/1       Running   0          17m
kube-proxy-2plw2                     1/1       Running   0          9m
kube-proxy-7bbrt                     1/1       Running   0          18m
kube-scheduler-113-master            1/1       Running   0          17m
metrics-server-84bb557798-8svs7      1/1       Running   0          7m
oomichi commented 5 years ago

Resolved by cleanly rebuilding the environment as described above.