oomichi / try-kubernetes


Skip Disruptive e2e tests #41

Closed · oomichi closed this issue 6 years ago

oomichi commented 6 years ago
[Fail] [sig-storage] Subpath [Volume type: emptyDir] [It] should unmount if pod is force deleted while kubelet is down [Disruptive][Slow]
[Fail] [sig-storage] Subpath [Volume type: emptyDir] [It] should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow]

Summary

oomichi commented 6 years ago
STEP: Collecting events from namespace "e2e-tests-subpath-jppnz".
STEP: Found 15 events.
Aug 15 02:45:23.912: INFO: At 2018-08-15 02:44:39 +0000 UTC - event for pod-subpath-test-emptydir-rx4j: {default-scheduler } Scheduled: Successfully assigned e2e-tests-subpath-jppnz/pod-subpath-test-emptydir-rx4j to k8s-node01
Aug 15 02:45:23.912: INFO: At 2018-08-15 02:44:41 +0000 UTC - event for pod-subpath-test-emptydir-rx4j: {kubelet k8s-node01} Pulling: pulling image "busybox"
Aug 15 02:45:23.912: INFO: At 2018-08-15 02:44:43 +0000 UTC - event for pod-subpath-test-emptydir-rx4j: {kubelet k8s-node01} Pulled: Successfully pulled image "busybox"
Aug 15 02:45:23.913: INFO: At 2018-08-15 02:44:43 +0000 UTC - event for pod-subpath-test-emptydir-rx4j: {kubelet k8s-node01} Created: Created container
Aug 15 02:45:23.913: INFO: At 2018-08-15 02:44:43 +0000 UTC - event for pod-subpath-test-emptydir-rx4j: {kubelet k8s-node01} Started: Started container
Aug 15 02:45:23.913: INFO: At 2018-08-15 02:44:44 +0000 UTC - event for pod-subpath-test-emptydir-rx4j: {kubelet k8s-node01} Pulling: pulling image "busybox"
Aug 15 02:45:23.913: INFO: At 2018-08-15 02:44:46 +0000 UTC - event for pod-subpath-test-emptydir-rx4j: {kubelet k8s-node01} Pulling: pulling image "busybox"
Aug 15 02:45:23.913: INFO: At 2018-08-15 02:44:46 +0000 UTC - event for pod-subpath-test-emptydir-rx4j: {kubelet k8s-node01} Pulled: Successfully pulled image "busybox"
Aug 15 02:45:23.913: INFO: At 2018-08-15 02:44:46 +0000 UTC - event for pod-subpath-test-emptydir-rx4j: {kubelet k8s-node01} Started: Started container
Aug 15 02:45:23.913: INFO: At 2018-08-15 02:44:46 +0000 UTC - event for pod-subpath-test-emptydir-rx4j: {kubelet k8s-node01} Created: Created container
Aug 15 02:45:23.913: INFO: At 2018-08-15 02:44:48 +0000 UTC - event for pod-subpath-test-emptydir-rx4j: {kubelet k8s-node01} Pulled: Successfully pulled image "busybox"
Aug 15 02:45:23.913: INFO: At 2018-08-15 02:44:48 +0000 UTC - event for pod-subpath-test-emptydir-rx4j: {kubelet k8s-node01} Created: Created container
Aug 15 02:45:23.913: INFO: At 2018-08-15 02:44:48 +0000 UTC - event for pod-subpath-test-emptydir-rx4j: {kubelet k8s-node01} Started: Started container
Aug 15 02:45:23.913: INFO: At 2018-08-15 02:45:20 +0000 UTC - event for pod-subpath-test-emptydir-rx4j: {kubelet k8s-node01} Killing: Killing container with id docker://test-container-subpath-emptydir-rx4j:Need to kill Pod
Aug 15 02:45:23.913: INFO: At 2018-08-15 02:45:20 +0000 UTC - event for pod-subpath-test-emptydir-rx4j: {kubelet k8s-node01} Killing: Killing container with id docker://test-container-volume-emptydir-rx4j:Need to kill Pod
Aug 15 02:45:23.955: INFO: POD                                             NODE        PHASE    GRACE  CONDITIONS
Aug 15 02:45:23.955: INFO: standalone-cinder-provisioner-7d6594d789-9mtb9  k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:04 +0000 UTC  }]
Aug 15 02:45:23.955: INFO: web-0                                           k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-15 00:43:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-15 00:44:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-15 00:43:43 +0000 UTC  }]
Aug 15 02:45:23.955: INFO: web-1                                           k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-15 00:30:31 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-15 00:30:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-15 00:30:31 +0000 UTC  }]
Aug 15 02:45:23.955: INFO: coredns-78fcdf6894-4g4b2                        k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:04 +0000 UTC  }]
Aug 15 02:45:23.955: INFO: coredns-78fcdf6894-v584t                        k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:12:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:12:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:12:21 +0000 UTC  }]
Aug 15 02:45:23.955: INFO: etcd-k8s-master                                 k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  }]
Aug 15 02:45:23.955: INFO: kube-apiserver-k8s-master                       k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 01:50:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  }]
Aug 15 02:45:23.955: INFO: kube-controller-manager-k8s-master              k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 22:05:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 22:05:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 22:05:28 +0000 UTC  }]
Aug 15 02:45:23.955: INFO: kube-flannel-ds-7df6r                           k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:12:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:22 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-17 17:12:31 +0000 UTC  }]
Aug 15 02:45:23.956: INFO: kube-flannel-ds-82r5x                           k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-11 12:39:43 +0000 UTC  }]
Aug 15 02:45:23.956: INFO: kube-proxy-hxp7z                                k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:17:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:51 +0000 UTC  }]
Aug 15 02:45:23.956: INFO: kube-proxy-zwrl4                                k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:37 +0000 UTC  }]
Aug 15 02:45:23.956: INFO: kube-scheduler-k8s-master                       k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 01:50:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:08 +0000 UTC  }]
Aug 15 02:45:23.956: INFO: metrics-server-86bd9d7667-twb2r                 k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:45:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-10 00:15:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:45:39 +0000 UTC  }]
Aug 15 02:45:23.956: INFO:
Aug 15 02:45:23.966: INFO:
Logging node info for node k8s-master
Aug 15 02:45:23.972: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-master,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-master,UID:94f19db7-89e3-11e8-b234-fa163e420595,ResourceVersion:3139296,Generation:0,CreationTimestamp:2018-07-17 17:05:18 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: k8s-master,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"06:0e:73:28:c3:b1"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 192.168.1.108,kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{41567956992 0} {<nil>} 40593708Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4143394816 0} {<nil>} 4046284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37411161231 0} {<nil>} 37411161231 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4038537216 0} {<nil>} 3943884Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-08-15 02:45:19 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-08-15 02:45:19 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-08-15 02:45:19 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-08-15 02:45:19 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-08-15 02:45:19 +0000 UTC 2018-07-31 23:04:27 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 192.168.1.108} {Hostname k8s-master}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1db2c06c39a54cd3a93a4e0a44823fd6,SystemUUID:1DB2C06C-39A5-4CD3-A93A-4E0A44823FD6,BootID:d2b66fba-cf4e-4205-b596-3ffb4e579c16,KernelVersion:4.4.0-130-generic,OSImage:Ubuntu 16.04.5 LTS,ContainerRuntimeVersion:docker://1.11.2,KubeletVersion:v1.11.1,KubeProxyVersion:v1.11.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[golang:1.10] 793901893} {[gcr.io/google-samples/gb-frontend-amd64:v5] 373099368} {[k8s.gcr.io/etcd-amd64:3.2.18] 218904307} {[k8s.gcr.io/kube-apiserver-amd64:v1.11.1] 186675825} {[k8s.gcr.io/kube-apiserver-amd64:v1.11.0] 186617744} {[k8s.gcr.io/kube-controller-manager-amd64:v1.11.1] 155252555} {[k8s.gcr.io/kube-controller-manager-amd64:v1.11.0] 155203118} {[k8s.gcr.io/nginx-slim:0.8] 110487599} {[nginx:latest] 108975101} {[k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google-samples/gb-redisslave-amd64:v2] 98945667} {[k8s.gcr.io/kube-proxy-amd64:v1.11.1] 97776424} {[k8s.gcr.io/kube-proxy-amd64:v1.11.0] 97772373} {[k8s.gcr.io/echoserver:1.10] 95361986} {[k8s.gcr.io/nginx-slim-amd64:0.21] 95339966} {[k8s.gcr.io/kube-scheduler-amd64:v1.11.1] 56781436} {[k8s.gcr.io/kube-scheduler-amd64:v1.11.0] 56757023} {[quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/kubernetes-e2e-test-images/resource-consumer-amd64:1.3 gcr.io/kubernetes-e2e-test-images/resource-consumer:1.3] 49707607} {[quay.io/k8scsi/csi-attacher:v0.2.0] 45644524} {[k8s.gcr.io/coredns:1.1.3] 45587362} {[quay.io/k8scsi/csi-provisioner:v0.2.1] 45078229} {[gcr.io/google_containers/metrics-server-amd64:v0.2.1] 42541759} {[quay.io/k8scsi/driver-registrar:v0.2.0] 42385441} {[k8scloudprovider/cinder-provisioner:latest] 29292916} {[quay.io/k8scsi/hostpathplugin:v0.2.0] 17287699} {[gcr.io/kubernetes-e2e-test-images/net-amd64:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6713741} {[gcr.io/kubernetes-e2e-test-images/redis-amd64:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/resource-consumer/controller-amd64:1.0] 5902947} {[gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.0] 5470001} {[gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten-amd64:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0] 1563521} {[busybox:latest] 1162769} {[k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[kubernetes.io/iscsi/192.168.1.1:3260:iqn.2010-10.org.openstack:volume-64bf2ed1-a630-4c2e-8398-c0a853728e4e:1],VolumesAttached:[{kubernetes.io/iscsi/192.168.1.1:3260:iqn.2010-10.org.openstack:volume-64bf2ed1-a630-4c2e-8398-c0a853728e4e:1 }],Config:nil,},}
Aug 15 02:45:23.973: INFO:
Logging kubelet events for node k8s-master
Aug 15 02:45:23.978: INFO:
Logging pods the kubelet thinks is on node k8s-master
Aug 15 02:45:23.988: INFO: coredns-78fcdf6894-v584t started at 2018-08-11 12:12:21 +0000 UTC (0+1 container statuses recorded)
Aug 15 02:45:23.989: INFO:      Container coredns ready: true, restart count 0
Aug 15 02:45:23.990: INFO: coredns-78fcdf6894-4g4b2 started at 2018-08-11 12:39:04 +0000 UTC (0+1 container statuses recorded)
Aug 15 02:45:23.990: INFO:      Container coredns ready: true, restart count 0
Aug 15 02:45:23.991: INFO: metrics-server-86bd9d7667-twb2r started at 2018-08-03 08:45:39 +0000 UTC (0+1 container statuses recorded)
Aug 15 02:45:23.991: INFO:      Container metrics-server ready: true, restart count 1
Aug 15 02:45:23.992: INFO: kube-controller-manager-k8s-master started at <nil> (0+0 container statuses recorded)
Aug 15 02:45:23.993: INFO: web-1 started at 2018-08-15 00:30:31 +0000 UTC (0+1 container statuses recorded)
Aug 15 02:45:23.993: INFO:      Container nginx ready: true, restart count 0
Aug 15 02:45:23.994: INFO: kube-scheduler-k8s-master started at <nil> (0+0 container statuses recorded)
Aug 15 02:45:23.994: INFO: kube-flannel-ds-7df6r started at 2018-07-17 17:12:31 +0000 UTC (1+1 container statuses recorded)
Aug 15 02:45:23.995: INFO:      Init container install-cni ready: true, restart count 6
Aug 15 02:45:23.995: INFO:      Container kube-flannel ready: true, restart count 6
Aug 15 02:45:23.996: INFO: kube-proxy-zwrl4 started at 2018-07-31 23:08:37 +0000 UTC (0+1 container statuses recorded)
Aug 15 02:45:23.996: INFO:      Container kube-proxy ready: true, restart count 6
Aug 15 02:45:23.997: INFO: standalone-cinder-provisioner-7d6594d789-9mtb9 started at 2018-08-11 12:39:04 +0000 UTC (0+1 container statuses recorded)
Aug 15 02:45:23.997: INFO:      Container standalone-cinder-provisioner ready: true, restart count 0
Aug 15 02:45:23.998: INFO: kube-apiserver-k8s-master started at <nil> (0+0 container statuses recorded)
Aug 15 02:45:23.998: INFO: etcd-k8s-master started at <nil> (0+0 container statuses recorded)
Aug 15 02:45:24.026: INFO:
Latency metrics for node k8s-master
Aug 15 02:45:24.026: INFO:
Logging node info for node k8s-node01
Aug 15 02:45:24.029: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-node01,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-node01,UID:980d8d67-9515-11e8-a804-fa163e420595,ResourceVersion:3139299,Generation:0,CreationTimestamp:2018-07-31 23:01:01 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/zone: ,kubernetes.io/hostname: k8s-node01,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"22:57:d2:53:57:f8"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 192.168.1.109,kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{41567956992 0} {<nil>} 40593708Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4143394816 0} {<nil>} 4046284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37411161231 0} {<nil>} 37411161231 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4038537216 0} {<nil>} 3943884Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-08-15 02:45:20 +0000 UTC 2018-08-10 00:17:13 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-08-15 02:45:20 +0000 UTC 2018-08-10 00:17:13 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-08-15 02:45:20 +0000 UTC 2018-08-10 00:17:13 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-08-15 02:45:20 +0000 UTC 2018-07-31 23:01:01 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-08-15 02:45:20 +0000 UTC 2018-08-10 00:17:23 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 192.168.1.109} {Hostname k8s-node01}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:817a385b9de241668e47cd87cda24f47,SystemUUID:817A385B-9DE2-4166-8E47-CD87CDA24F47,BootID:89bf8417-1b59-4778-b31e-dcda7893ef77,KernelVersion:4.4.0-130-generic,OSImage:Ubuntu 16.04.4 LTS,ContainerRuntimeVersion:docker://1.11.2,KubeletVersion:v1.11.1,KubeProxyVersion:v1.11.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[humblec/glusterdynamic-provisioner:v1.0] 373281573} {[gcr.io/google-samples/gb-frontend-amd64:v5] 373099368} {[quay.io/kubernetes_incubator/nfs-provisioner:v1.0.9] 332415371} {[gcr.io/kubernetes-e2e-test-images/volume-nfs:0.8] 247157334} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils-amd64:1.0] 195659796} {[k8s.gcr.io/resource_consumer:beta] 132805424} {[k8s.gcr.io/nginx-slim:0.8] 110487599} {[nginx:latest] 108975101} {[k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google-samples/gb-redisslave-amd64:v2] 98945667} {[k8s.gcr.io/kube-proxy-amd64:v1.11.1] 97776424} {[k8s.gcr.io/kube-proxy-amd64:v1.11.0] 97772373} {[k8s.gcr.io/echoserver:1.10] 95361986} {[k8s.gcr.io/nginx-slim-amd64:0.21] 95339966} {[quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/kubernetes-e2e-test-images/resource-consumer-amd64:1.3 gcr.io/kubernetes-e2e-test-images/resource-consumer:1.3] 49707607} {[quay.io/k8scsi/csi-attacher:v0.2.0] 45644524} {[k8s.gcr.io/coredns:1.1.3] 45587362} {[quay.io/k8scsi/csi-provisioner:v0.2.1] 45078229} {[gcr.io/google_containers/metrics-server-amd64:v0.2.1] 42541759} {[quay.io/k8scsi/driver-registrar:v0.2.0] 42385441} {[k8scloudprovider/cinder-provisioner:latest] 28582964} {[gcr.io/kubernetes-e2e-test-images/nettest-amd64:1.0] 27413498} {[quay.io/k8scsi/hostpathplugin:v0.2.0] 17287699} {[gcr.io/kubernetes-e2e-test-images/net-amd64:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/dnsutils-amd64:1.0] 9030162} {[gcr.io/kubernetes-e2e-test-images/hostexec-amd64:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6713741} {[gcr.io/kubernetes-e2e-test-images/redis-amd64:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/resource-consumer/controller-amd64:1.0] 5902947} {[gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.0] 5470001} {[gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten-amd64:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver-amd64:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter-amd64:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness-amd64:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/fakegitserver-amd64:1.0] 4608683} {[k8s.gcr.io/k8s-dns-dnsmasq-amd64:1.14.5] 4324973} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester-amd64:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/port-forward-tester-amd64:1.0] 1992230} {[gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user-amd64:1.0] 1450451} {[busybox:latest] 1162769} {[k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[kubernetes.io/iscsi/192.168.1.1:3260:iqn.2010-10.org.openstack:volume-8168a06f-4522-42c5-849d-9a38287b4869:1],VolumesAttached:[{kubernetes.io/iscsi/192.168.1.1:3260:iqn.2010-10.org.openstack:volume-8168a06f-4522-42c5-849d-9a38287b4869:1 }],Config:nil,},}
Aug 15 02:45:24.029: INFO:
Logging kubelet events for node k8s-node01
Aug 15 02:45:24.032: INFO:
Logging pods the kubelet thinks is on node k8s-node01
Aug 15 02:45:24.051: INFO: kube-flannel-ds-82r5x started at 2018-08-11 12:39:43 +0000 UTC (1+1 container statuses recorded)
Aug 15 02:45:24.051: INFO:      Init container install-cni ready: true, restart count 0
Aug 15 02:45:24.051: INFO:      Container kube-flannel ready: true, restart count 0
Aug 15 02:45:24.051: INFO: web-0 started at 2018-08-15 00:43:43 +0000 UTC (0+1 container statuses recorded)
Aug 15 02:45:24.051: INFO:      Container nginx ready: true, restart count 0
Aug 15 02:45:24.051: INFO: kube-proxy-hxp7z started at 2018-07-31 23:08:51 +0000 UTC (0+1 container statuses recorded)
Aug 15 02:45:24.051: INFO:      Container kube-proxy ready: true, restart count 2
Aug 15 02:45:24.106: INFO:
Latency metrics for node k8s-node01
Aug 15 02:45:24.106: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.9 Latency:30.093734s}
Aug 15 02:45:24.106: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.093734s}
STEP: Dumping a list of prepulled images on each node...
Aug 15 02:45:24.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-jppnz" for this suite.
Aug 15 02:45:30.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 15 02:45:30.357: INFO: namespace: e2e-tests-subpath-jppnz, resource: bindings, ignored listing per whitelist
Aug 15 02:45:30.363: INFO: namespace e2e-tests-subpath-jppnz deletion completed in 6.23618356s

• Failure [50.684 seconds]
[sig-storage] Subpath
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Volume type: emptyDir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:148
    should unmount if pod is force deleted while kubelet is down [Disruptive][Slow] [It]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:276

    Expected error:
        <*errors.errorString | 0xc420a532d0>: {
            s: "No external address for pod pod-subpath-test-emptydir-rx4j on node k8s-node01",
        }
        No external address for pod pod-subpath-test-emptydir-rx4j on node k8s-node01
    not to have occurred

    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/utils.go:161
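
The root cause is framework.GetHostExternalAddress: the nodes in this skeleton-provider cluster only advertise InternalIP and Hostname addresses (see the Addresses field in the node info above), so the lookup for a NodeExternalIP entry fails before the test ever tries to SSH to the node. A minimal sketch of that lookup, assuming the k8s.io/api types; getNodeExternalIP is a hypothetical stand-in for the framework helper:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// getNodeExternalIP mirrors the idea behind framework.GetHostExternalAddress:
// scan the node's status addresses for a NodeExternalIP entry and fail if
// none exists.
func getNodeExternalIP(node *v1.Node) (string, error) {
	for _, addr := range node.Status.Addresses {
		if addr.Type == v1.NodeExternalIP && addr.Address != "" {
			return addr.Address, nil
		}
	}
	return "", fmt.Errorf("no external address for node %s", node.Name)
}

func main() {
	// k8s-node01 only advertises InternalIP/Hostname, so the lookup fails.
	node := &v1.Node{}
	node.Name = "k8s-node01"
	node.Status.Addresses = []v1.NodeAddress{
		{Type: v1.NodeInternalIP, Address: "192.168.1.109"},
		{Type: v1.NodeHostName, Address: "k8s-node01"},
	}
	if _, err := getNodeExternalIP(node); err != nil {
		fmt.Println(err)
	}
}

In other words, the failure is an environment limitation (no external node addresses to SSH to), not a bug in the subpath reconstruction logic itself.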
oomichi commented 6 years ago

In the first place, shouldn't Disruptive tests like this one, which take the kubelet down, simply be skipped? Do these really run in the GCE and AWS test jobs? It may be better to put tests flagged [Disruptive] on the blacklist.

First, check what [Disruptive] actually means in the test code.

test/e2e/storage/subpath.go

 304                         It("should unmount if pod is force deleted while kubelet is down [Disruptive][Slow]", func() {
★ If the pod is force-deleted while the kubelet is down, the volume should be unmounted.
 305                                 if curVolType == "hostPath" || curVolType == "hostPathSymlink" {
 306                                         framework.Skipf("%s volume type does not support reconstruction, skipping", curVolType)
 307                                 }
 308                                 testSubpathReconstruction(f, pod, true)
 309                         })
...
 653 func testSubpathReconstruction(f *framework.Framework, pod *v1.Pod, forceDelete bool) {
 654         // This is mostly copied from TestVolumeUnmountsFromDeletedPodWithForceOption()
 655
 656         // Change to busybox
 657         pod.Spec.Containers[0].Image = imageutils.GetE2EImage(imageutils.BusyBox)
 658         pod.Spec.Containers[0].Command = []string{"/bin/sh", "-ec", "sleep 100000"}
 659         pod.Spec.Containers[1].Image = imageutils.GetE2EImage(imageutils.BusyBox)
 660         pod.Spec.Containers[1].Command = []string{"/bin/sh", "-ec", "sleep 100000"}
 661
 662         // If grace period is too short, then there is not enough time for the volume
 663         // manager to cleanup the volumes
 664         gracePeriod := int64(30)
 665         pod.Spec.TerminationGracePeriodSeconds = &gracePeriod
 666
 667         By(fmt.Sprintf("Creating pod %s", pod.Name))
 668         pod, err := f.ClientSet.CoreV1().Pods(f.Namespace.Name).Create(pod)
 669         Expect(err).ToNot(HaveOccurred(), "while creating pod")
 670
 671         err = framework.WaitForPodRunningInNamespace(f.ClientSet, pod)
 672         Expect(err).ToNot(HaveOccurred(), "while waiting for pod to be running")
 673
 674         pod, err = f.ClientSet.CoreV1().Pods(f.Namespace.Name).Get(pod.Name, metav1.GetOptions{})
 675         Expect(err).ToNot(HaveOccurred(), "while getting pod")
 676
 677         utils.TestVolumeUnmountsFromDeletedPodWithForceOption(f.ClientSet, f, pod, forceDelete, true)
 678 }

test/e2e/storage/utils/utils.go

194 // TestVolumeUnmountsFromDeletedPod tests that a volume unmounts if the client pod was deleted while the kubelet was down.
195 // forceDelete is true indicating whether the pod is forcefully deleted.
196 func TestVolumeUnmountsFromDeletedPodWithForceOption(c clientset.Interface, f *framework.Framework, clientPod *v1.Pod, forceDelete bool, checkSubpath bool) {
197         nodeIP, err := framework.GetHostExternalAddress(c, clientPod)
198         Expect(err).NotTo(HaveOccurred())
199         nodeIP = nodeIP + ":22"
200
201         By("Expecting the volume mount to be found.")
202         result, err := framework.SSH(fmt.Sprintf("mount | grep %s | grep -v volume-subpaths", clientPod.UID), nodeIP, framework.TestContext.Provider)
203         framework.LogSSHResult(result)
204         Expect(err).NotTo(HaveOccurred(), "Encountered SSH error.")
205         Expect(result.Code).To(BeZero(), fmt.Sprintf("Expected grep exit code of 0, got %d", result.Code))
206
207         if checkSubpath {
208                 By("Expecting the volume subpath mount to be found.")
209                 result, err := framework.SSH(fmt.Sprintf("cat /proc/self/mountinfo | grep %s | grep volume-subpaths", clientPod.UID), nodeIP, framework.TestContext.Provider)
210                 framework.LogSSHResult(result)
211                 Expect(err).NotTo(HaveOccurred(), "Encountered SSH error.")
212                 Expect(result.Code).To(BeZero(), fmt.Sprintf("Expected grep exit code of 0, got %d", result.Code))
213         }
214
215         By("Stopping the kubelet.")
216         KubeletCommand(KStop, c, clientPod)
    ★ 39         KStop            KubeletOpt = "stop"
    ★ i.e. this runs "sudo systemctl stop kubelet" on the node (see the sketch after this listing)
217         defer func() {
218                 if err != nil {
219                         KubeletCommand(KStart, c, clientPod)
220                 }
221         }()
222         By(fmt.Sprintf("Deleting Pod %q", clientPod.Name))
223         if forceDelete {
224                 err = c.CoreV1().Pods(clientPod.Namespace).Delete(clientPod.Name, metav1.NewDeleteOptions(0))
225         } else {
226                 err = c.CoreV1().Pods(clientPod.Namespace).Delete(clientPod.Name, &metav1.DeleteOptions{})
227         }
228         Expect(err).NotTo(HaveOccurred())
229
230         By("Starting the kubelet and waiting for pod to delete.")
231         KubeletCommand(KStart, c, clientPod)
232         err = f.WaitForPodNotFound(clientPod.Name, framework.PodDeleteTimeout)
233         if err != nil {
234                 Expect(err).NotTo(HaveOccurred(), "Expected pod to be not found.")
235         }
236
237         if forceDelete {
238                 // With forceDelete, since pods are immediately deleted from API server, there is no way to be sure when volumes are torn down
239                 // so wait some time to finish
240                 time.Sleep(30 * time.Second)
241         }
242
243         By("Expecting the volume mount not to be found.")
244         result, err = framework.SSH(fmt.Sprintf("mount | grep %s | grep -v volume-subpaths", clientPod.UID), nodeIP, framework.TestContext.Provider)
245         framework.LogSSHResult(result)
246         Expect(err).NotTo(HaveOccurred(), "Encountered SSH error.")
247         Expect(result.Stdout).To(BeEmpty(), "Expected grep stdout to be empty (i.e. no mount found).")
248         framework.Logf("Volume unmounted on node %s", clientPod.Spec.NodeName)
249
250         if checkSubpath {
251                 By("Expecting the volume subpath mount not to be found.")
252                 result, err = framework.SSH(fmt.Sprintf("cat /proc/self/mountinfo | grep %s | grep volume-subpaths", clientPod.UID), nodeIP, framework.TestContext.Provider)
253                 framework.LogSSHResult(result)
254                 Expect(err).NotTo(HaveOccurred(), "Encountered SSH error.")
255                 Expect(result.Stdout).To(BeEmpty(), "Expected grep stdout to be empty (i.e. no subpath mount found).")
256                 framework.Logf("Subpath volume unmounted on node %s", clientPod.Spec.NodeName)
257         }
258 }
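
For reference, KubeletCommand(KStop, ...) and KubeletCommand(KStart, ...) drive the kubelet's systemd unit on the pod's node over SSH, which is exactly what makes this test [Disruptive]. A minimal sketch of the idea, assuming a systemd-managed kubelet as on this Ubuntu cluster; kubeletCommand below shells out to ssh for illustration and is not the framework's exact code (which goes through framework.SSH and also waits for the kubelet to actually stop or start):

package main

import (
	"fmt"
	"os/exec"
)

// kubeletCommand runs "sudo systemctl <op> kubelet" on the target node over
// SSH, approximating what the e2e helper does for KStop/KStart.
func kubeletCommand(op, nodeAddr string) error {
	cmd := fmt.Sprintf("sudo systemctl %s kubelet", op)
	out, err := exec.Command("ssh", nodeAddr, cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q on %s failed: %v (%s)", cmd, nodeAddr, err, out)
	}
	return nil
}

func main() {
	// Stop the kubelet (as the test does before deleting the pod), then
	// start it again afterwards.
	for _, op := range []string{"stop", "start"} {
		if err := kubeletCommand(op, "k8s-node01"); err != nil {
			fmt.Println(err)
		}
	}
}

Because the helper reaches the node over SSH, the same missing-ExternalIP problem described above would bite here too, even if the test got this far.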
oomichi commented 6 years ago

Investigate how to skip the [Disruptive] tests.

--ginkgo.skip=\[Disruptive\]

looked sufficient at first, but a dry run shows otherwise:

$ go run hack/e2e.go -- --provider=skeleton --test --test_args="--ginkgo.dryRun=true --ginkgo.skip=\[Disruptive\]" --check-version-skew=false
...
Ran 931 of 999 Specs in 0.020 seconds
SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 68 Skipped PASS

Ginkgo ran 1 suite in 259.974429ms
Test Suite Passed
2018/08/15 16:34:18 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.dryRun=true --ginkgo.skip=\[Disruptive\]' finished in 472.578283ms
2018/08/15 16:34:18 e2e.go:83: Done

With the skip specified, the number of specs that would run actually increases compared with the default run below: 723 -> 931.

$ go run hack/e2e.go -- --provider=skeleton --test --test_args="--ginkgo.dryRun=true" --check-version-skew=false
...
Ran 723 of 999 Specs in 0.019 seconds
SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 276 Skipped PASS

Ginkgo ran 1 suite in 249.1251ms
Test Suite Passed
2018/08/15 16:33:19 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.dryRun=true' finished in 456.905875ms
2018/08/15 16:33:19 e2e.go:83: Done

This is because conformance.go sets a default SkipRegex, and passing --ginkgo.skip overrides it, so the default exclusions no longer apply. k8s.io/test-infra/kubetest/conformance/conformance.go:

 53         if o.SkipRegex == "" {
 54                 o.SkipRegex = "\".*(Feature)|(NFS)|(StatefulSet).*\""
 55         }
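
To confirm why the spec count jumps, the two regexes can be compared directly. A quick sketch using Go's regexp package and representative spec names:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	defaultSkip := regexp.MustCompile(`.*(Feature)|(NFS)|(StatefulSet).*`)
	combinedSkip := regexp.MustCompile(`.*(Feature)|(NFS)|(StatefulSet)|\[Disruptive\].*`)

	specs := []string{
		"[sig-apps] StatefulSet should perform rolling updates",
		"[sig-storage] Subpath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow]",
	}
	for _, s := range specs {
		// defaultSkip matches only the StatefulSet spec; combinedSkip matches both.
		fmt.Printf("default=%v combined=%v  %s\n",
			defaultSkip.MatchString(s), combinedSkip.MatchString(s), s)
	}
}

Passing --ginkgo.skip=\[Disruptive\] alone stops skipping the Feature/NFS/StatefulSet specs, which is why the dry-run count rose from 723 to 931; a combined regex restores the defaults while also excluding [Disruptive].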

Therefore, specify the skip regex so that it includes the defaults as well:

$ go run hack/e2e.go -- --provider=skeleton --test --test_args="--ginkgo.dryRun=true --ginkgo.skip=.*(Feature)|(NFS)|(StatefulSet)|\[Disruptive\].*" --check-version-skew=false
...
Ran 633 of 999 Specs in 0.020 seconds
SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 366 Skipped PASS

Ginkgo ran 1 suite in 268.580751ms
Test Suite Passed
2018/08/15 16:40:18 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.dryRun=true --ginkgo.skip=.*(Feature)|(NFS)|(StatefulSet)|\[Disruptive\].*' finished in 473.354862ms
2018/08/15 16:40:18 e2e.go:83: Done
$