oomichi / try-kubernetes


[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates resource limits of pods that are allowed to run [Conformance] #38

Closed: oomichi closed this issue 6 years ago

oomichi commented 6 years ago

Part of the investigation into Conformance test failures: https://github.com/oomichi/try-kubernetes/issues/36

Summary

The purpose of this test is to verify that resource limits set on Pods are enforced correctly.
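
For reference, the property under test is roughly this: the scheduler may only bind a pod whose declared resource limits fit within a node's allocatable capacity, and a pod whose limits fit nowhere must stay Pending. Below is a minimal Go sketch of such a resource-limited pod spec; this is not the test's actual code, and the pod name and the 500m figure are illustrative:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod that declares an explicit CPU limit. The scheduler may only place it
	// on a node whose allocatable CPU accommodates the limit; if no node can,
	// the pod must remain Pending instead of being allowed to run.
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "limited-pod"}, // hypothetical name
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1", // image already present on the nodes in the logs below
				Resources: v1.ResourceRequirements{
					Limits: v1.ResourceList{
						v1.ResourceCPU: resource.MustParse("500m"), // illustrative value
					},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Resources.Limits.Cpu()) // prints 500m
}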

Test log

$ go run hack/e2e.go -- --provider=skeleton --test --test_args="--ginkgo.focus=validates\sresource\slimits\sof\spods\sthat\sare\sallowed\sto\srun" --check-version-skew=false
2018/08/07 20:42:48 e2e.go:79: Calling kubetest --verbose-commands=true --provider=skeleton --test --test_args=--ginkgo.focus=validates\sresource\slimits\sof\spods\sthat\sare\sallowed\sto\srun --check-version-skew=false...
2018/08/07 20:42:48 process.go:153: Running: ./hack/e2e-internal/e2e-status.sh
Skeleton Provider: prepare-e2e not implemented
Client Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.1-1+9f374f69bc4216", GitCommit:"9f374f69bc421648d9e18805e1ca84c93d6db309", GitTreeState:"clean", BuildDate:"2018-08-07T16:59:32Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
2018/08/07 20:42:48 process.go:155: Step './hack/e2e-internal/e2e-status.sh' finished in 147.33381ms
2018/08/07 20:42:48 process.go:153: Running: ./cluster/kubectl.sh --match-server-version=false version
2018/08/07 20:42:48 process.go:155: Step './cluster/kubectl.sh --match-server-version=false version' finished in 158.601161ms
2018/08/07 20:42:48 process.go:153: Running: ./hack/ginkgo-e2e.sh --ginkgo.focus=validates\sresource\slimits\sof\spods\sthat\sare\sallowed\sto\srun
Conformance test: not doing test setup.
Aug  7 20:42:49.099: INFO: Overriding default scale value of zero to 1
Aug  7 20:42:49.099: INFO: Overriding default milliseconds value of zero to 5000
I0807 20:42:49.186907   10541 e2e.go:333] Starting e2e run "726736d2-9a82-11e8-a8e1-fa163e738a69" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1533674568 - Will randomize all specs
Will run 1 of 999 specs

Aug  7 20:42:49.224: INFO: >>> kubeConfig: /home/ubuntu/admin.conf
Aug  7 20:42:49.226: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug  7 20:42:49.252: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug  7 20:42:49.274: INFO: 11 / 11 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug  7 20:42:49.274: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
Aug  7 20:42:49.279: INFO: Waiting for pods to enter Success, but no pods in "kube-system" match label map[name:e2e-image-puller]
Aug  7 20:42:49.279: INFO: Dumping network health container logs from all nodes...
Aug  7 20:42:49.283: INFO: e2e test version: v1.11.1-1+9f374f69bc4216
Aug  7 20:42:49.284: INFO: kube-apiserver version: v1.11.1
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates resource limits of pods that are allowed to run  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:684
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Aug  7 20:42:49.287: INFO: >>> kubeConfig: /home/ubuntu/admin.conf
STEP: Building a namespace api object
Aug  7 20:42:49.353: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:80
Aug  7 20:42:49.355: INFO: Waiting up to 1m0s for all nodes to be ready
Aug  7 20:43:49.400: INFO: Waiting for terminating namespaces to be deleted...
Aug  7 20:43:49.405: INFO: Unexpected error occurred: Namespace e2e-tests-horizontal-pod-autoscaling-cndn6 is active
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
STEP: Collecting events from namespace "e2e-tests-sched-pred-rh6vr".
STEP: Found 0 events.
Aug  7 20:43:49.417: INFO: POD                                 NODE        PHASE    GRACE  CONDITIONS
Aug  7 20:43:49.417: INFO: coredns-78fcdf6894-fv44x            k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 14:54:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 14:54:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 14:54:44 +0000 UTC  }]
Aug  7 20:43:49.417: INFO: coredns-78fcdf6894-lw27z            k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 14:54:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 14:54:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 14:54:45 +0000 UTC  }]
Aug  7 20:43:49.418: INFO: etcd-k8s-master                     k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:17:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:17:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:17:04 +0000 UTC  }]
Aug  7 20:43:49.418: INFO: kube-apiserver-k8s-master           k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:17:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:17:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:17:04 +0000 UTC  }]
Aug  7 20:43:49.418: INFO: kube-controller-manager-k8s-master  k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:17:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:17:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:17:04 +0000 UTC  }]
Aug  7 20:43:49.418: INFO: kube-flannel-ds-7df6r               k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:12:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:17:18 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-17 17:12:31 +0000 UTC  }]
Aug  7 20:43:49.418: INFO: kube-flannel-ds-k4pc4               k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:46:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:46:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:46:11 +0000 UTC  }]
Aug  7 20:43:49.418: INFO: kube-proxy-hxp7z                    k8s-node01  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:23:42 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:51 +0000 UTC  }]
Aug  7 20:43:49.418: INFO: kube-proxy-zwrl4                    k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:17:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-07-31 23:08:37 +0000 UTC  }]
Aug  7 20:43:49.418: INFO: kube-scheduler-k8s-master           k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:17:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:17:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-02 17:17:04 +0000 UTC  }]
Aug  7 20:43:49.418: INFO: metrics-server-86bd9d7667-twb2r     k8s-master  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:45:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:45:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2018-08-03 08:45:39 +0000 UTC  }]
Aug  7 20:43:49.419: INFO:
Aug  7 20:43:49.422: INFO:
Logging node info for node k8s-master
Aug  7 20:43:49.425: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-master,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-master,UID:94f19db7-89e3-11e8-b234-fa163e420595,ResourceVersion:2247541,Generation:0,CreationTimestamp:2018-07-17 17:05:18 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: k8s-master,node-role.kubernetes.io/master: ,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"1a:9d:81:1e:9d:0f"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 192.168.1.108,kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{41567956992 0} {<nil>} 40593708Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4143394816 0} {<nil>} 4046284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37411161231 0} {<nil>} 37411161231 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4038537216 0} {<nil>} 3943884Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-08-07 20:43:40 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-08-07 20:43:40 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-08-07 20:43:40 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-08-07 20:43:40 +0000 UTC 2018-07-17 17:05:14 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-08-07 20:43:40 +0000 UTC 2018-07-31 23:04:27 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 192.168.1.108} {Hostname k8s-master}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1db2c06c39a54cd3a93a4e0a44823fd6,SystemUUID:1DB2C06C-39A5-4CD3-A93A-4E0A44823FD6,BootID:b14bfb61-a0cc-45f4-8b29-42ee08c00ac6,KernelVersion:4.4.0-130-generic,OSImage:Ubuntu 16.04.5 LTS,ContainerRuntimeVersion:docker://1.11.2,KubeletVersion:v1.11.1,KubeProxyVersion:v1.11.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[golang:1.10] 793901893} {[gcr.io/google-samples/gb-frontend-amd64:v5] 373099368} {[k8s.gcr.io/etcd-amd64:3.2.18] 218904307} {[k8s.gcr.io/kube-apiserver-amd64:v1.11.1] 186675825} {[k8s.gcr.io/kube-apiserver-amd64:v1.11.0] 186617744} {[k8s.gcr.io/kube-controller-manager-amd64:v1.11.1] 155252555} {[k8s.gcr.io/kube-controller-manager-amd64:v1.11.0] 155203118} {[nginx:latest] 108975101} {[k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google-samples/gb-redisslave-amd64:v2] 98945667} {[k8s.gcr.io/kube-proxy-amd64:v1.11.1] 97776424} {[k8s.gcr.io/kube-proxy-amd64:v1.11.0] 97772373} {[k8s.gcr.io/echoserver:1.10] 95361986} {[k8s.gcr.io/nginx-slim-amd64:0.21] 95339966} {[k8s.gcr.io/kube-scheduler-amd64:v1.11.1] 56781436} {[k8s.gcr.io/kube-scheduler-amd64:v1.11.0] 56757023} {[quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/kubernetes-e2e-test-images/resource-consumer-amd64:1.3 gcr.io/kubernetes-e2e-test-images/resource-consumer:1.3] 49707607} {[quay.io/k8scsi/csi-attacher:v0.2.0] 45644524} {[k8s.gcr.io/coredns:1.1.3] 45587362} {[quay.io/k8scsi/csi-provisioner:v0.2.1] 45078229} {[gcr.io/google_containers/metrics-server-amd64:v0.2.1] 42541759} {[quay.io/k8scsi/driver-registrar:v0.2.0] 42385441} {[quay.io/k8scsi/hostpathplugin:v0.2.0] 17287699} {[gcr.io/kubernetes-e2e-test-images/net-amd64:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6713741} {[gcr.io/kubernetes-e2e-test-images/redis-amd64:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/resource-consumer/controller-amd64:1.0] 5902947} {[gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.0] 5470001} {[gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten-amd64:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0] 1563521} {[busybox:latest] 1162769} {[k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Aug  7 20:43:49.425: INFO:
Logging kubelet events for node k8s-master
Aug  7 20:43:49.428: INFO:
Logging pods the kubelet thinks is on node k8s-master
Aug  7 20:43:49.437: INFO: metrics-server-86bd9d7667-twb2r started at 2018-08-03 08:45:39 +0000 UTC (0+1 container statuses recorded)
Aug  7 20:43:49.437: INFO:      Container metrics-server ready: true, restart count 0
Aug  7 20:43:49.437: INFO: kube-proxy-zwrl4 started at 2018-07-31 23:08:37 +0000 UTC (0+1 container statuses recorded)
Aug  7 20:43:49.437: INFO:      Container kube-proxy ready: true, restart count 5
Aug  7 20:43:49.437: INFO: kube-controller-manager-k8s-master started at <nil> (0+0 container statuses recorded)
Aug  7 20:43:49.437: INFO: kube-flannel-ds-7df6r started at 2018-07-17 17:12:31 +0000 UTC (1+1 container statuses recorded)
Aug  7 20:43:49.437: INFO:      Init container install-cni ready: true, restart count 5
Aug  7 20:43:49.437: INFO:      Container kube-flannel ready: true, restart count 5
Aug  7 20:43:49.437: INFO: kube-apiserver-k8s-master started at <nil> (0+0 container statuses recorded)
Aug  7 20:43:49.437: INFO: kube-scheduler-k8s-master started at <nil> (0+0 container statuses recorded)
Aug  7 20:43:49.437: INFO: coredns-78fcdf6894-fv44x started at 2018-08-03 14:54:44 +0000 UTC (0+1 container statuses recorded)
Aug  7 20:43:49.437: INFO:      Container coredns ready: true, restart count 0
Aug  7 20:43:49.437: INFO: etcd-k8s-master started at <nil> (0+0 container statuses recorded)
Aug  7 20:43:49.481: INFO:
Latency metrics for node k8s-master
Aug  7 20:43:49.481: INFO:
Logging node info for node k8s-node01
Aug  7 20:43:49.483: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-node01,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-node01,UID:980d8d67-9515-11e8-a804-fa163e420595,ResourceVersion:2247549,Generation:0,CreationTimestamp:2018-07-31 23:01:01 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/zone: ,kubernetes.io/hostname: k8s-node01,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"a6:73:46:9f:a3:63"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 192.168.1.109,kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{41567956992 0} {<nil>} 40593708Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4143394816 0} {<nil>} 4046284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{37411161231 0} {<nil>} 37411161231 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4038537216 0} {<nil>} 3943884Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2018-08-07 20:43:46 +0000 UTC 2018-07-31 23:01:01 +0000 UTC KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2018-08-07 20:43:46 +0000 UTC 2018-07-31 23:01:01 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2018-08-07 20:43:46 +0000 UTC 2018-07-31 23:01:01 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2018-08-07 20:43:46 +0000 UTC 2018-07-31 23:01:01 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2018-08-07 20:43:46 +0000 UTC 2018-07-31 23:01:11 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 192.168.1.109} {Hostname k8s-node01}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:817a385b9de241668e47cd87cda24f47,SystemUUID:817A385B-9DE2-4166-8E47-CD87CDA24F47,BootID:f47e3b62-dd98-42c4-91e2-9b657f85dfd3,KernelVersion:4.4.0-130-generic,OSImage:Ubuntu 16.04.4 LTS,ContainerRuntimeVersion:docker://1.11.2,KubeletVersion:v1.11.1,KubeProxyVersion:v1.11.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[humblec/glusterdynamic-provisioner:v1.0] 373281573} {[gcr.io/google-samples/gb-frontend-amd64:v5] 373099368} {[quay.io/kubernetes_incubator/nfs-provisioner:v1.0.9] 332415371} {[gcr.io/kubernetes-e2e-test-images/volume-nfs:0.8] 247157334} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils-amd64:1.0] 195659796} {[k8s.gcr.io/resource_consumer:beta] 132805424} {[nginx:latest] 108975101} {[k8s.gcr.io/nginx-slim-amd64:0.20] 103591055} {[gcr.io/google-samples/gb-redisslave-amd64:v2] 98945667} {[k8s.gcr.io/kube-proxy-amd64:v1.11.1] 97776424} {[k8s.gcr.io/kube-proxy-amd64:v1.11.0] 97772373} {[k8s.gcr.io/echoserver:1.10] 95361986} {[k8s.gcr.io/nginx-slim-amd64:0.21] 95339966} {[quay.io/coreos/flannel:v0.9.1-amd64] 51338831} {[gcr.io/kubernetes-e2e-test-images/resource-consumer-amd64:1.3 gcr.io/kubernetes-e2e-test-images/resource-consumer:1.3] 49707607} {[k8s.gcr.io/coredns:1.1.3] 45587362} {[gcr.io/google_containers/metrics-server-amd64:v0.2.1] 42541759} {[gcr.io/kubernetes-e2e-test-images/nettest-amd64:1.0] 27413498} {[gcr.io/kubernetes-e2e-test-images/net-amd64:1.0] 11393460} {[gcr.io/kubernetes-e2e-test-images/dnsutils-amd64:1.0] 9030162} {[gcr.io/kubernetes-e2e-test-images/hostexec-amd64:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/netexec-amd64:1.0] 6713741} {[gcr.io/kubernetes-e2e-test-images/redis-amd64:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/resource-consumer/controller-amd64:1.0] 5902947} {[gcr.io/kubernetes-e2e-test-images/serve-hostname-amd64:1.0] 5470001} {[gcr.io/kubernetes-e2e-test-images/nautilus-amd64:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten-amd64:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver-amd64:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter-amd64:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/liveness-amd64:1.0] 4608721} {[gcr.io/kubernetes-e2e-test-images/fakegitserver-amd64:1.0] 4608683} {[k8s.gcr.io/k8s-dns-dnsmasq-amd64:1.14.5] 4324973} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester-amd64:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/port-forward-tester-amd64:1.0] 1992230} {[gcr.io/kubernetes-e2e-test-images/mounttest-amd64:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user-amd64:1.0] 1450451} {[busybox:latest] 1162769} {[k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Aug  7 20:43:49.484: INFO:
Logging kubelet events for node k8s-node01
Aug  7 20:43:49.486: INFO:
Logging pods the kubelet thinks is on node k8s-node01
Aug  7 20:43:49.502: INFO: coredns-78fcdf6894-lw27z started at 2018-08-03 14:54:45 +0000 UTC (0+1 container statuses recorded)
Aug  7 20:43:49.502: INFO:      Container coredns ready: true, restart count 0
Aug  7 20:43:49.502: INFO: kube-proxy-hxp7z started at 2018-07-31 23:08:51 +0000 UTC (0+1 container statuses recorded)
Aug  7 20:43:49.502: INFO:      Container kube-proxy ready: true, restart count 1
Aug  7 20:43:49.502: INFO: kube-flannel-ds-k4pc4 started at 2018-08-03 08:46:11 +0000 UTC (1+1 container statuses recorded)
Aug  7 20:43:49.502: INFO:      Init container install-cni ready: true, restart count 0
Aug  7 20:43:49.502: INFO:      Container kube-flannel ready: true, restart count 0
Aug  7 20:43:49.590: INFO:
Latency metrics for node k8s-node01
STEP: Dumping a list of prepulled images on each node...
Aug  7 20:43:49.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-rh6vr" for this suite.
Aug  7 20:43:55.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  7 20:43:55.768: INFO: namespace: e2e-tests-sched-pred-rh6vr, resource: bindings, ignored listing per whitelist
Aug  7 20:43:55.772: INFO: namespace e2e-tests-sched-pred-rh6vr deletion completed in 6.163211777s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:71

• Failure in Spec Setup (BeforeEach) [66.486 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance] [BeforeEach]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:684

  Expected error:
      <*errors.errorString | 0xc420a5c0e0>: {
          s: "Namespace e2e-tests-horizontal-pod-autoscaling-cndn6 is active",
      }
      Namespace e2e-tests-horizontal-pod-autoscaling-cndn6 is active
  not to have occurred

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:89
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSAug  7 20:43:55.773: INFO: Running AfterSuite actions on all node
Aug  7 20:43:55.773: INFO: Running AfterSuite actions on node 1

Summarizing 1 Failure:

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [BeforeEach] validates resource limits of pods that are allowed to run  [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:89

Ran 1 of 999 Specs in 66.550 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 998 Skipped --- FAIL: TestE2E (66.59s)
FAIL

Ginkgo ran 1 suite in 1m6.802698163s
Test Suite Failed
!!! Error in ./hack/ginkgo-e2e.sh:143
  Error in ./hack/ginkgo-e2e.sh:143. '"${ginkgo}" "${ginkgo_args[@]:+${ginkgo_args[@]}}" "${e2e_test}" -- "${auth_config[@]:+${auth_config[@]}}" --ginkgo.flakeAttempts="${FLAKE_ATTEMPTS}" --host="${KUBE_MASTER_URL}" --provider="${KUBERNETES_PROVIDER}" --gce-project="${PROJECT:-}" --gce-zone="${ZONE:-}" --gce-region="${REGION:-}" --gce-multizone="${MULTIZONE:-false}" --gke-cluster="${CLUSTER_NAME:-}" --kube-master="${KUBE_MASTER:-}" --cluster-tag="${CLUSTER_ID:-}" --cloud-config-file="${CLOUD_CONFIG:-}" --repo-root="${KUBE_ROOT}" --node-instance-group="${NODE_INSTANCE_GROUP:-}" --prefix="${KUBE_GCE_INSTANCE_PREFIX:-e2e}" --network="${KUBE_GCE_NETWORK:-${KUBE_GKE_NETWORK:-e2e}}" --node-tag="${NODE_TAG:-}" --master-tag="${MASTER_TAG:-}" --cluster-monitoring-mode="${KUBE_ENABLE_CLUSTER_MONITORING:-standalone}" --prometheus-monitoring="${KUBE_ENABLE_PROMETHEUS_MONITORING:-false}" ${KUBE_CONTAINER_RUNTIME:+"--container-runtime=${KUBE_CONTAINER_RUNTIME}"} ${MASTER_OS_DISTRIBUTION:+"--master-os-distro=${MASTER_OS_DISTRIBUTION}"} ${NODE_OS_DISTRIBUTION:+"--node-os-distro=${NODE_OS_DISTRIBUTION}"} ${NUM_NODES:+"--num-nodes=${NUM_NODES}"} ${E2E_REPORT_DIR:+"--report-dir=${E2E_REPORT_DIR}"} ${E2E_REPORT_PREFIX:+"--report-prefix=${E2E_REPORT_PREFIX}"} "${@:-}"' exited with status 1
Call stack:
  1: ./hack/ginkgo-e2e.sh:143 main(...)
Exiting with status 1
2018/08/07 20:43:55 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=validates\sresource\slimits\sof\spods\sthat\sare\sallowed\sto\srun' finished in 1m7.00985132s
2018/08/07 20:43:55 main.go:309: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.focus=validates\sresource\slimits\sof\spods\sthat\sare\sallowed\sto\srun: exit status 1]
2018/08/07 20:43:55 e2e.go:81: err: exit status 1
exit status 1
oomichi commented 6 years ago

This test has existed as a Conformance test since March 2017. It passed on the previous v1.10 environment, so it appears to have started failing only with the current environment.

oomichi commented 6 years ago

Could leftover namespaces from earlier e2e test runs be the cause? Try deleting the namespace named in the error.
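
The transcript below deletes it by hand with kubectl. As an aside, sweeping every leftover namespace at once could be scripted; here is a hedged sketch using client-go with the Kubernetes-1.11-era call signatures (newer client-go releases also thread a context.Context through these calls), assuming the admin kubeconfig path seen in the logs above:

package main

import (
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, taken from the ">>> kubeConfig:" lines above.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/ubuntu/admin.conf")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// List all namespaces and delete the ones left behind by e2e runs,
	// which share the "e2e-tests-" name prefix seen in the output below.
	nsList, err := client.CoreV1().Namespaces().List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, ns := range nsList.Items {
		if strings.HasPrefix(ns.Name, "e2e-tests-") {
			fmt.Println("deleting", ns.Name)
			if err := client.CoreV1().Namespaces().Delete(ns.Name, nil); err != nil {
				fmt.Println("  failed:", err)
			}
		}
	}
}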

$ kubectl get namespaces
NAME                                         STATUS    AGE
default                                      Active    21d
e2e-tests-horizontal-pod-autoscaling-cndn6   Active    20h
e2e-tests-horizontal-pod-autoscaling-fcq2t   Active    3h
e2e-tests-horizontal-pod-autoscaling-fhrzw   Active    21h
e2e-tests-horizontal-pod-autoscaling-k9d6r   Active    4h
e2e-tests-horizontal-pod-autoscaling-qbghv   Active    3d
e2e-tests-horizontal-pod-autoscaling-rhnzt   Active    3d
kube-public                                  Active    21d
kube-system                                  Active    21d
$ kubectl delete namespace e2e-tests-horizontal-pod-autoscaling-cndn6
...
• Failure in Spec Setup (BeforeEach) [66.439 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance] [BeforeEach]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:684

  Expected error:
      <*errors.errorString | 0xc420a6ca00>: {
          s: "Namespace e2e-tests-horizontal-pod-autoscaling-fcq2t is active",
      }
      Namespace e2e-tests-horizontal-pod-autoscaling-fcq2t is active
  not to have occurred

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:89
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSAug  7 21:04:16.153: INFO: Running AfterSuite actions on all node
Aug  7 21:04:16.153: INFO: Running AfterSuite actions on node 1

Summarizing 1 Failure:

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [BeforeEach] validates resource limits of pods that are allowed to run  [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:89

Ran 1 of 999 Specs in 66.525 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 998 Skipped --- FAIL: TestE2E (66.56s)
FAIL

Ginkgo ran 1 suite in 1m6.76832776s
Test Suite Failed
!!! Error in ./hack/ginkgo-e2e.sh:143
  Error in ./hack/ginkgo-e2e.sh:143. '"${ginkgo}" "${ginkgo_args[@]:+${ginkgo_args[@]}}" "${e2e_test}" -- "${auth_config[@]:+${auth_config[@]}}" --ginkgo.flakeAttempts="${FLAKE_ATTEMPTS}" --host="${KUBE_MASTER_URL}" --provider="${KUBERNETES_PROVIDER}" --gce-project="${PROJECT:-}" --gce-zone="${ZONE:-}" --gce-region="${REGION:-}" --gce-multizone="${MULTIZONE:-false}" --gke-cluster="${CLUSTER_NAME:-}" --kube-master="${KUBE_MASTER:-}" --cluster-tag="${CLUSTER_ID:-}" --cloud-config-file="${CLOUD_CONFIG:-}" --repo-root="${KUBE_ROOT}" --node-instance-group="${NODE_INSTANCE_GROUP:-}" --prefix="${KUBE_GCE_INSTANCE_PREFIX:-e2e}" --network="${KUBE_GCE_NETWORK:-${KUBE_GKE_NETWORK:-e2e}}" --node-tag="${NODE_TAG:-}" --master-tag="${MASTER_TAG:-}" --cluster-monitoring-mode="${KUBE_ENABLE_CLUSTER_MONITORING:-standalone}" --prometheus-monitoring="${KUBE_ENABLE_PROMETHEUS_MONITORING:-false}" ${KUBE_CONTAINER_RUNTIME:+"--container-runtime=${KUBE_CONTAINER_RUNTIME}"} ${MASTER_OS_DISTRIBUTION:+"--master-os-distro=${MASTER_OS_DISTRIBUTION}"} ${NODE_OS_DISTRIBUTION:+"--node-os-distro=${NODE_OS_DISTRIBUTION}"} ${NUM_NODES:+"--num-nodes=${NUM_NODES}"} ${E2E_REPORT_DIR:+"--report-dir=${E2E_REPORT_DIR}"} ${E2E_REPORT_PREFIX:+"--report-prefix=${E2E_REPORT_PREFIX}"} "${@:-}"' exited with status 1
Call stack:
  1: ./hack/ginkgo-e2e.sh:143 main(...)
Exiting with status 1
2018/08/07 21:04:16 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=validates\sresource\slimits\sof\spods\sthat\sare\sallowed\sto\srun' finished in 1m6.974759394s
2018/08/07 21:04:16 main.go:309: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.focus=validates\sresource\slimits\sof\spods\sthat\sare\sallowed\sto\srun: exit status 1]
2018/08/07 21:04:16 e2e.go:81: err: exit status 1
exit status 1

The error was caused by the other leftover namespaces. Delete all of the leftover namespaces and re-run. → This time a timeout error occurs.

$ kubectl delete namespace e2e-tests-horizontal-pod-autoscaling-k9d6r e2e-tests-horizontal-pod-autoscaling-qbghv e2e-tests-horizontal-pod-autoscaling-rhnzt
...
$ kubectl get namespaces
NAME          STATUS    AGE
default       Active    21d
kube-public   Active    21d
kube-system   Active    21d
$
$ go run hack/e2e.go -- --provider=skeleton --test --test_args="--ginkgo.focus=validates\sresource\slimits\sof\spods\sthat\sare\sallowed\sto\srun" --check-version-skew=false
...
Latency metrics for node k8s-node01
STEP: Dumping a list of prepulled images on each node...
Aug  7 21:09:23.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-5snqq" for this suite.
Aug  7 21:09:41.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  7 21:09:41.459: INFO: namespace: e2e-tests-sched-pred-5snqq, resource: bindings, ignored listing per whitelist
Aug  7 21:09:41.510: INFO: namespace e2e-tests-sched-pred-5snqq deletion completed in 18.146813186s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:71

• Failure [202.622 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance] [It]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:684

  Expected error:
      <*errors.errorString | 0xc420085550>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  not to have occurred

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:730
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSAug  7 21:09:41.512: INFO: Running AfterSuite actions on all node
Aug  7 21:09:41.512: INFO: Running AfterSuite actions on node 1

Summarizing 1 Failure:

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates resource limits of pods that are allowed to run  [Conformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:730

Ran 1 of 999 Specs in 202.681 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 998 Skipped --- FAIL: TestE2E (202.71s)
FAIL

Ginkgo ran 1 suite in 3m22.92137333s
Test Suite Failed
!!! Error in ./hack/ginkgo-e2e.sh:143
  Error in ./hack/ginkgo-e2e.sh:143. '"${ginkgo}" "${ginkgo_args[@]:+${ginkgo_args[@]}}" "${e2e_test}" -- "${auth_config[@]:+${auth_config[@]}}" --ginkgo.flakeAttempts="${FLAKE_ATTEMPTS}" --host="${KUBE_MASTER_URL}" --provider="${KUBERNETES_PROVIDER}" --gce-project="${PROJECT:-}" --gce-zone="${ZONE:-}" --gce-region="${REGION:-}" --gce-multizone="${MULTIZONE:-false}" --gke-cluster="${CLUSTER_NAME:-}" --kube-master="${KUBE_MASTER:-}" --cluster-tag="${CLUSTER_ID:-}" --cloud-config-file="${CLOUD_CONFIG:-}" --repo-root="${KUBE_ROOT}" --node-instance-group="${NODE_INSTANCE_GROUP:-}" --prefix="${KUBE_GCE_INSTANCE_PREFIX:-e2e}" --network="${KUBE_GCE_NETWORK:-${KUBE_GKE_NETWORK:-e2e}}" --node-tag="${NODE_TAG:-}" --master-tag="${MASTER_TAG:-}" --cluster-monitoring-mode="${KUBE_ENABLE_CLUSTER_MONITORING:-standalone}" --prometheus-monitoring="${KUBE_ENABLE_PROMETHEUS_MONITORING:-false}" ${KUBE_CONTAINER_RUNTIME:+"--container-runtime=${KUBE_CONTAINER_RUNTIME}"} ${MASTER_OS_DISTRIBUTION:+"--master-os-distro=${MASTER_OS_DISTRIBUTION}"} ${NODE_OS_DISTRIBUTION:+"--node-os-distro=${NODE_OS_DISTRIBUTION}"} ${NUM_NODES:+"--num-nodes=${NUM_NODES}"} ${E2E_REPORT_DIR:+"--report-dir=${E2E_REPORT_DIR}"} ${E2E_REPORT_PREFIX:+"--report-prefix=${E2E_REPORT_PREFIX}"} "${@:-}"' exited with status 1
Call stack:
  1: ./hack/ginkgo-e2e.sh:143 main(...)
Exiting with status 1
2018/08/07 21:09:41 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=validates\sresource\slimits\sof\spods\sthat\sare\sallowed\sto\srun' finished in 3m23.132616477s
2018/08/07 21:09:41 main.go:309: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.focus=validates\sresource\slimits\sof\spods\sthat\sare\sallowed\sto\srun: exit status 1]
2018/08/07 21:09:41 e2e.go:81: err: exit status 1
exit status 1
oomichi commented 6 years ago

Since it is failing in AfterEach, does that mean the main part of the test ran correctly and the failure happened during cleanup?

test/e2e/scheduling/predicates.go:730

722 // WaitForSchedulerAfterAction performs the provided action and then waits for
723 // scheduler to act on the given pod.
724 func WaitForSchedulerAfterAction(f *framework.Framework, action common.Action, ns, podName string, expectSuccess bool) {
725         predicate := scheduleFailureEvent(podName)
726         if expectSuccess {
727                 predicate = scheduleSuccessEvent(ns, podName, "" /* any node */)
728         }
729         success, err := common.ObserveEventAfterAction(f, predicate, action)
730         Expect(err).NotTo(HaveOccurred())
731         Expect(success).To(Equal(true))
732 }

ObserveEventAfterAction is called from only this one place... test/e2e/common/events.go

 99 func ObserveEventAfterAction(f *framework.Framework, eventPredicate func(*v1.Event) bool, action Action) (bool, error) {
...
144         // Poll whether the informer has found a matching event with a timeout.
145         // Wait up 2 minutes polling every second.
146         timeout := 2 * time.Minute
147         interval := 1 * time.Second
148         err = wait.Poll(interval, timeout, func() (bool, error) {
149                 return observedMatchingEvent, nil
150         })
151         return err == nil, err   // this is about the only error handling that looks related to the timeout
152 }
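
The error string from the failure, "timed out waiting for the condition", is exactly what wait.Poll returns when the condition function never reports true before the timeout. A minimal standalone sketch of that behavior (assuming only k8s.io/apimachinery; the 5-second timeout is shortened for illustration):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Mirrors the events.go pattern above: the flag is only flipped by the
	// informer when a matching event arrives, so if no scheduling event is
	// ever observed, the poll runs out and returns wait.ErrWaitTimeout.
	observedMatchingEvent := false

	err := wait.Poll(1*time.Second, 5*time.Second, func() (bool, error) {
		return observedMatchingEvent, nil
	})
	fmt.Println(err) // prints "timed out waiting for the condition"
}

In other words, the failure means no matching scheduling event was observed within the 2-minute window, so ObserveEventAfterAction returned wait.ErrWaitTimeout to its caller at predicates.go:730.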
oomichi commented 6 years ago

The failure no longer occurred on a cleanly deployed environment.