Closed by k8s-github-robot 6 years ago
Run so broken it didn't make JUnit output!
Multiple broken tests:
Failed: DumpClusterLogs {e2e.go}
Terminate testing after 15m after 2h30m0s timeout during dump cluster logs
Issues about this test specifically: #33722 #37578 #37974
Failed: TearDown {e2e.go}
Terminate testing after 15m after 2h30m0s timeout during teardown
Issues about this test specifically: #34118 #34795 #37058
Failed: DiffResources {e2e.go}
Error: 28 leaked resources
+NAME MACHINE_TYPE PREEMPTIBLE CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-0c7dce54 n1-standard-2 2016-12-05T21:38:06.205-08:00
+gke-bootstrap-e2e-default-pool-657dc85f n1-standard-2 2016-12-05T21:38:06.113-08:00
+gke-bootstrap-e2e-default-pool-9c23825b n1-standard-2 2016-12-05T21:38:06.173-08:00
+NAME LOCATION SCOPE NETWORK MANAGED INSTANCES
+gke-bootstrap-e2e-default-pool-0c7dce54-grp us-central1-f zone bootstrap-e2e Yes 3
+NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
+gke-bootstrap-e2e-default-pool-0c7dce54-13lr us-central1-f n1-standard-2 10.240.0.4 104.198.227.107 RUNNING
+gke-bootstrap-e2e-default-pool-0c7dce54-jlbx us-central1-f n1-standard-2 10.240.0.2 104.154.149.175 RUNNING
+gke-bootstrap-e2e-default-pool-0c7dce54-v99e us-central1-f n1-standard-2 10.240.0.3 104.198.234.132 RUNNING
+NAME ZONE SIZE_GB TYPE STATUS
+gke-bootstrap-e2e-default-pool-0c7dce54-13lr us-central1-f 100 pd-standard READY
+gke-bootstrap-e2e-default-pool-0c7dce54-jlbx us-central1-f 100 pd-standard READY
+gke-bootstrap-e2e-default-pool-0c7dce54-v99e us-central1-f 100 pd-standard READY
+default-route-73acc0a0f92ce9a4 bootstrap-e2e 10.240.0.0/16 1000
+default-route-7ef949067be29a75 bootstrap-e2e 0.0.0.0/0 default-internet-gateway 1000
+gke-bootstrap-e2e-6abfff86-3317baa3-bb77-11e6-81da-42010af00012 bootstrap-e2e 10.72.0.0/24 us-central1-f/instances/gke-bootstrap-e2e-default-pool-0c7dce54-v99e 1000
+gke-bootstrap-e2e-6abfff86-33832b06-bb77-11e6-81da-42010af00012 bootstrap-e2e 10.72.3.0/24 us-central1-b/instances/gke-bootstrap-e2e-default-pool-9c23825b-m3bl 1000
+gke-bootstrap-e2e-6abfff86-33d540d3-bb77-11e6-81da-42010af00012 bootstrap-e2e 10.72.4.0/24 us-central1-b/instances/gke-bootstrap-e2e-default-pool-9c23825b-l4o3 1000
+gke-bootstrap-e2e-6abfff86-34087100-bb77-11e6-81da-42010af00012 bootstrap-e2e 10.72.5.0/24 us-central1-a/instances/gke-bootstrap-e2e-default-pool-657dc85f-cj5j 1000
+gke-bootstrap-e2e-6abfff86-34095ae7-bb77-11e6-81da-42010af00012 bootstrap-e2e 10.72.6.0/24 us-central1-b/instances/gke-bootstrap-e2e-default-pool-9c23825b-g9vn 1000
+gke-bootstrap-e2e-6abfff86-34340836-bb77-11e6-81da-42010af00012 bootstrap-e2e 10.72.7.0/24 us-central1-f/instances/gke-bootstrap-e2e-default-pool-0c7dce54-jlbx 1000
+gke-bootstrap-e2e-6abfff86-34b251bb-bb77-11e6-81da-42010af00012 bootstrap-e2e 10.72.8.0/24 us-central1-f/instances/gke-bootstrap-e2e-default-pool-0c7dce54-13lr 1000
+gke-bootstrap-e2e-6abfff86-34e751ea-bb77-11e6-81da-42010af00012 bootstrap-e2e 10.72.1.0/24 us-central1-a/instances/gke-bootstrap-e2e-default-pool-657dc85f-310m 1000
+gke-bootstrap-e2e-6abfff86-34fb4fcb-bb77-11e6-81da-42010af00012 bootstrap-e2e 10.72.2.0/24 us-central1-a/instances/gke-bootstrap-e2e-default-pool-657dc85f-4ehr 1000
+gke-bootstrap-e2e-6abfff86-all bootstrap-e2e 10.72.0.0/14 udp,icmp,esp,ah,sctp,tcp
+gke-bootstrap-e2e-6abfff86-ssh bootstrap-e2e 130.211.160.57/32 tcp:22 gke-bootstrap-e2e-6abfff86-node
+gke-bootstrap-e2e-6abfff86-vms bootstrap-e2e 10.240.0.0/16 tcp:1-65535,udp:1-65535,icmp gke-bootstrap-e2e-6abfff86-node
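The `+`-prefixed listing above is diff output: the DiffResources check compares a listing of cloud resources taken before the run with one taken after, and any line present only in the "after" listing is reported as leaked. A minimal sketch of that mechanism (file names and contents here are illustrative, not what kubetest actually writes):

```shell
# "Before" snapshot: only the listing header, no resources yet.
cat > before.txt <<'EOF'
NAME MACHINE_TYPE PREEMPTIBLE CREATION_TIMESTAMP
EOF

# "After" snapshot: one instance left behind by the run.
cat > after.txt <<'EOF'
NAME MACHINE_TYPE PREEMPTIBLE CREATION_TIMESTAMP
gke-bootstrap-e2e-default-pool-0c7dce54 n1-standard-2 2016-12-05T21:38:06.205-08:00
EOF

# diff marks lines unique to "after" with ">"; count them as leaks.
leaks=$(diff before.txt after.txt | grep -c '^>')
echo "Error: $leaks leaked resources"
```

In the reports above the same comparison runs over instances, instance groups, disks, routes, and firewall rules, which is how a single failed teardown adds up to 28 leaked lines.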
Issues about this test specifically: #33373 #33416 #34060
Failed: Deferred TearDown {e2e.go}
Terminate testing after 15m after 2h30m0s timeout during teardown
Issues about this test specifically: #35658
Multiple broken tests:
Failed: TearDown {e2e.go}
Terminate testing after 15m after 2h30m0s timeout during teardown
Issues about this test specifically: #34118 #34795 #37058 #38207
Failed: DiffResources {e2e.go}
Error: 28 leaked resources
+NAME MACHINE_TYPE PREEMPTIBLE CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-152a8f74 n1-standard-2 2016-12-06T12:32:19.873-08:00
+gke-bootstrap-e2e-default-pool-6f5c76ec n1-standard-2 2016-12-06T12:32:19.696-08:00
+gke-bootstrap-e2e-default-pool-bbb0573e n1-standard-2 2016-12-06T12:32:19.772-08:00
+NAME LOCATION SCOPE NETWORK MANAGED INSTANCES
+gke-bootstrap-e2e-default-pool-6f5c76ec-grp us-central1-f zone bootstrap-e2e Yes 3
+NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
+gke-bootstrap-e2e-default-pool-6f5c76ec-8gwp us-central1-f n1-standard-2 10.240.0.2 104.154.145.85 RUNNING
+gke-bootstrap-e2e-default-pool-6f5c76ec-9xri us-central1-f n1-standard-2 10.240.0.4 104.198.128.246 RUNNING
+gke-bootstrap-e2e-default-pool-6f5c76ec-tkhb us-central1-f n1-standard-2 10.240.0.3 104.198.140.186 RUNNING
+NAME ZONE SIZE_GB TYPE STATUS
+gke-bootstrap-e2e-default-pool-6f5c76ec-8gwp us-central1-f 100 pd-standard READY
+gke-bootstrap-e2e-default-pool-6f5c76ec-9xri us-central1-f 100 pd-standard READY
+gke-bootstrap-e2e-default-pool-6f5c76ec-tkhb us-central1-f 100 pd-standard READY
+default-route-18eccb7942b995e4 bootstrap-e2e 0.0.0.0/0 default-internet-gateway 1000
+default-route-9b702fabd55a432e bootstrap-e2e 10.240.0.0/16 1000
+gke-bootstrap-e2e-c0013bea-5784b0f6-bbf3-11e6-a959-42010af00044 bootstrap-e2e 10.72.6.0/24 us-central1-b/instances/gke-bootstrap-e2e-default-pool-bbb0573e-uefe 1000
+gke-bootstrap-e2e-c0013bea-57c6dca3-bbf3-11e6-a959-42010af00044 bootstrap-e2e 10.72.2.0/24 us-central1-b/instances/gke-bootstrap-e2e-default-pool-bbb0573e-6iau 1000
+gke-bootstrap-e2e-c0013bea-5833a7f5-bbf3-11e6-a959-42010af00044 bootstrap-e2e 10.72.7.0/24 us-central1-a/instances/gke-bootstrap-e2e-default-pool-152a8f74-hr00 1000
+gke-bootstrap-e2e-c0013bea-584895da-bbf3-11e6-a959-42010af00044 bootstrap-e2e 10.72.3.0/24 us-central1-f/instances/gke-bootstrap-e2e-default-pool-6f5c76ec-tkhb 1000
+gke-bootstrap-e2e-c0013bea-584af563-bbf3-11e6-a959-42010af00044 bootstrap-e2e 10.72.4.0/24 us-central1-f/instances/gke-bootstrap-e2e-default-pool-6f5c76ec-9xri 1000
+gke-bootstrap-e2e-c0013bea-58658016-bbf3-11e6-a959-42010af00044 bootstrap-e2e 10.72.8.0/24 us-central1-b/instances/gke-bootstrap-e2e-default-pool-bbb0573e-xdg5 1000
+gke-bootstrap-e2e-c0013bea-588a72b6-bbf3-11e6-a959-42010af00044 bootstrap-e2e 10.72.5.0/24 us-central1-f/instances/gke-bootstrap-e2e-default-pool-6f5c76ec-8gwp 1000
+gke-bootstrap-e2e-c0013bea-59006df9-bbf3-11e6-a959-42010af00044 bootstrap-e2e 10.72.0.0/24 us-central1-a/instances/gke-bootstrap-e2e-default-pool-152a8f74-zj6z 1000
+gke-bootstrap-e2e-c0013bea-59a0bb5d-bbf3-11e6-a959-42010af00044 bootstrap-e2e 10.72.1.0/24 us-central1-a/instances/gke-bootstrap-e2e-default-pool-152a8f74-0199 1000
+gke-bootstrap-e2e-c0013bea-all bootstrap-e2e 10.72.0.0/14 tcp,udp,icmp,esp,ah,sctp
+gke-bootstrap-e2e-c0013bea-ssh bootstrap-e2e 104.198.147.135/32 tcp:22 gke-bootstrap-e2e-c0013bea-node
+gke-bootstrap-e2e-c0013bea-vms bootstrap-e2e 10.240.0.0/16 icmp,tcp:1-65535,udp:1-65535 gke-bootstrap-e2e-c0013bea-node
Issues about this test specifically: #33373 #33416 #34060
Failed: Deferred TearDown {e2e.go}
Terminate testing after 15m after 2h30m0s timeout during teardown
Issues about this test specifically: #35658
Failed: DumpClusterLogs {e2e.go}
Terminate testing after 15m after 2h30m0s timeout during dump cluster logs
Issues about this test specifically: #33722 #37578 #37974 #38206
Multiple broken tests:
Failed: DumpClusterLogs {e2e.go}
Terminate testing after 15m after 2h30m0s timeout during dump cluster logs
Issues about this test specifically: #33722 #37578 #37974 #38206
Failed: TearDown {e2e.go}
Terminate testing after 15m after 2h30m0s timeout during teardown
Issues about this test specifically: #34118 #34795 #37058 #38207
Failed: DiffResources {e2e.go}
Error: 28 leaked resources
+NAME MACHINE_TYPE PREEMPTIBLE CREATION_TIMESTAMP
+gke-bootstrap-e2e-default-pool-6cc2475d n1-standard-2 2016-12-07T16:24:59.702-08:00
+gke-bootstrap-e2e-default-pool-978452a9 n1-standard-2 2016-12-07T16:24:59.791-08:00
+gke-bootstrap-e2e-default-pool-c2a754cb n1-standard-2 2016-12-07T16:24:59.746-08:00
+NAME LOCATION SCOPE NETWORK MANAGED INSTANCES
+gke-bootstrap-e2e-default-pool-c2a754cb-grp us-central1-f zone bootstrap-e2e Yes 3
+NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
+gke-bootstrap-e2e-default-pool-c2a754cb-1ttv us-central1-f n1-standard-2 10.240.0.4 35.184.59.127 RUNNING
+gke-bootstrap-e2e-default-pool-c2a754cb-f2n3 us-central1-f n1-standard-2 10.240.0.2 35.184.65.115 RUNNING
+gke-bootstrap-e2e-default-pool-c2a754cb-zkqp us-central1-f n1-standard-2 10.240.0.3 35.184.72.3 RUNNING
+NAME ZONE SIZE_GB TYPE STATUS
+gke-bootstrap-e2e-default-pool-c2a754cb-1ttv us-central1-f 100 pd-standard READY
+gke-bootstrap-e2e-default-pool-c2a754cb-f2n3 us-central1-f 100 pd-standard READY
+gke-bootstrap-e2e-default-pool-c2a754cb-zkqp us-central1-f 100 pd-standard READY
+default-route-e7e8297d36893191 bootstrap-e2e 0.0.0.0/0 default-internet-gateway 1000
+default-route-fa3a183dce73a46c bootstrap-e2e 10.240.0.0/16 1000
+gke-bootstrap-e2e-2da47681-1e48dda4-bcdd-11e6-8cbb-42010af0004c bootstrap-e2e 10.72.7.0/24 us-central1-a/instances/gke-bootstrap-e2e-default-pool-978452a9-gz1g 1000
+gke-bootstrap-e2e-2da47681-1ebafed8-bcdd-11e6-8cbb-42010af0004c bootstrap-e2e 10.72.8.0/24 us-central1-b/instances/gke-bootstrap-e2e-default-pool-6cc2475d-tn5o 1000
+gke-bootstrap-e2e-2da47681-1f5c58a3-bcdd-11e6-8cbb-42010af0004c bootstrap-e2e 10.72.1.0/24 us-central1-f/instances/gke-bootstrap-e2e-default-pool-c2a754cb-1ttv 1000
+gke-bootstrap-e2e-2da47681-1f76f9cd-bcdd-11e6-8cbb-42010af0004c bootstrap-e2e 10.72.2.0/24 us-central1-b/instances/gke-bootstrap-e2e-default-pool-6cc2475d-s4tt 1000
+gke-bootstrap-e2e-2da47681-1f7e7b31-bcdd-11e6-8cbb-42010af0004c bootstrap-e2e 10.72.3.0/24 us-central1-a/instances/gke-bootstrap-e2e-default-pool-978452a9-5eli 1000
+gke-bootstrap-e2e-2da47681-1f8eb840-bcdd-11e6-8cbb-42010af0004c bootstrap-e2e 10.72.0.0/24 us-central1-f/instances/gke-bootstrap-e2e-default-pool-c2a754cb-f2n3 1000
+gke-bootstrap-e2e-2da47681-20089fbe-bcdd-11e6-8cbb-42010af0004c bootstrap-e2e 10.72.4.0/24 us-central1-b/instances/gke-bootstrap-e2e-default-pool-6cc2475d-i4fl 1000
+gke-bootstrap-e2e-2da47681-20531563-bcdd-11e6-8cbb-42010af0004c bootstrap-e2e 10.72.5.0/24 us-central1-a/instances/gke-bootstrap-e2e-default-pool-978452a9-lw47 1000
+gke-bootstrap-e2e-2da47681-20781f20-bcdd-11e6-8cbb-42010af0004c bootstrap-e2e 10.72.6.0/24 us-central1-f/instances/gke-bootstrap-e2e-default-pool-c2a754cb-zkqp 1000
+gke-bootstrap-e2e-2da47681-all bootstrap-e2e 10.72.0.0/14 tcp,udp,icmp,esp,ah,sctp
+gke-bootstrap-e2e-2da47681-ssh bootstrap-e2e 104.198.149.77/32 tcp:22 gke-bootstrap-e2e-2da47681-node
+gke-bootstrap-e2e-2da47681-vms bootstrap-e2e 10.240.0.0/16 icmp,tcp:1-65535,udp:1-65535 gke-bootstrap-e2e-2da47681-node
Issues about this test specifically: #33373 #33416 #34060
Failed: Deferred TearDown {e2e.go}
Terminate testing after 15m after 2h30m0s timeout during teardown
Issues about this test specifically: #35658
Run so broken it didn't make JUnit output!
Multiple broken tests:
Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:47
wait for pod "pod-secrets-466461f2-bfaa-11e6-8dd9-0242ac110005" to disappear
Expected success, but got an error:
<*errors.errorString | 0xc4203d3380>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:122
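Most of the failures below share the same error string, "timed out waiting for the condition", which comes from a poll-until-deadline helper: the test retries a condition on an interval and gives up after a fixed number of attempts. A rough sketch of that shape (illustrative names only, not the real k8s.io wait package):

```shell
# poll ATTEMPTS CMD...: retry CMD until it succeeds or attempts run out.
poll() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then
      return 0          # condition met
    fi
    i=$((i + 1))
    sleep 0.01          # back off before retrying
  done
  echo "timed out waiting for the condition"
  return 1
}

# A condition that never succeeds, so the poll gives up and reports the
# familiar error string.
err=$(poll 3 false) || echo "$err"
```

Each test wraps this generic timeout in its own context, which is why the same string shows up as "wait for pod ... to disappear", "failed to wait for pods running", and so on.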
Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:98
Expected error:
<*errors.errorString | 0xc420ebc030>: {
s: "failed to wait for pods running: [timed out waiting for the condition]",
}
failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1075
Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458
Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:326
Dec 11 06:07:09.209: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1974
Issues about this test specifically: #28437 #29084 #29256 #29397 #36671
Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:205
Expected error:
<*errors.errorString | 0xc42037c380>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68
Issues about this test specifically: #36288 #36913
Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Dec 11 06:17:43.399: Couldn't delete ns: "e2e-tests-kubectl-fn2bf": namespace e2e-tests-kubectl-fn2bf was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-kubectl-fn2bf was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354
Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774
Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:401
Expected error:
<*errors.errorString | 0xc4203fd580>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:237
Issues about this test specifically: #26168 #27450
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
<*errors.errorString | 0xc4204134b0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:544
Issues about this test specifically: #35283 #36867
Multiple broken tests:
Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Dec 13 13:43:24.363: Couldn't delete ns: "e2e-tests-disruption-62xzw": namespace e2e-tests-disruption-62xzw was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-62xzw was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354
Issues about this test specifically: #32668 #35405
Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
wait for pod "pod-secrets-09590b42-c17c-11e6-a07c-0242ac110007" to disappear
Expected success, but got an error:
<*errors.errorString | 0xc420413950>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:122
Issues about this test specifically: #29221
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
Failed: [k8s.io] Network should set TCP CLOSE_WAIT timeout {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kube_proxy.go:205
Expected error:
<*errors.errorString | 0xc4203bf570>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68
Issues about this test specifically: #36288 #36913
Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:56
wait for pod "pod-secrets-4d82bf26-c17c-11e6-8a2f-0242ac110007" to disappear
Expected success, but got an error:
<*errors.errorString | 0xc420415990>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:122
Issues about this test specifically: #37529
Multiple broken tests:
Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
<*errors.errorString | 0xc420a06220>: {
s: "failed to wait for pods responding: timed out waiting for the condition",
}
failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1002
Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Dec 17 08:36:50.464: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.72.2.27:8080/hostName
retrieved map[]
expected map[netserver-6:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263
Issues about this test specifically: #33631 #33995 #34970
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1106
Dec 17 08:39:55.114: expected un-ready endpoint for Service webserver within 5m0s, stdout:
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1104
Issues about this test specifically: #26172
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Dec 17 08:44:53.853: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.72.1.47:8080/dial?request=hostName&protocol=http&host=10.72.2.31&port=8080&tries=1'
retrieved map[]
expected map[netserver-6:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210
Issues about this test specifically: #32375
Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
<*errors.errorString | 0xc420a98d30>: {
s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-17 08:32:00 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-17 08:32:31 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-17 08:32:00 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP:10.72.2.32 StartTime:2016-12-17 08:32:00 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2016-12-17 08:32:00 -0800 PST,FinishedAt:2016-12-17 08:32:30 -0800 PST,ContainerID:docker://52c0a29c769d033c1a0cc0ab0ac9ed479390eefb89f854566145f19f69f20ffe,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://52c0a29c769d033c1a0cc0ab0ac9ed479390eefb89f854566145f19f69f20ffe}] QOSClass:}",
}
pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-17 08:32:00 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-17 08:32:31 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-17 08:32:00 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP:10.72.2.32 StartTime:2016-12-17 08:32:00 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2016-12-17 08:32:00 -0800 PST,FinishedAt:2016-12-17 08:32:30 -0800 PST,ContainerID:docker://52c0a29c769d033c1a0cc0ab0ac9ed479390eefb89f854566145f19f69f20ffe,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://52c0a29c769d033c1a0cc0ab0ac9ed479390eefb89f854566145f19f69f20ffe}] QOSClass:}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48
Issues about this test specifically: #26171 #28188
Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:467
Expected error:
<*errors.errorString | 0xc420415c50>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68
Issues about this test specifically: #37056
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Dec 17 09:02:09.752: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.72.0.65:8080/dial?request=hostName&protocol=udp&host=10.72.2.49&port=8081&tries=1'
retrieved map[]
expected map[netserver-8:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210
Issues about this test specifically: #32830
Failed: [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:393
Expected error:
<*errors.errorString | 0xc4203aaa20>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68
Issues about this test specifically: #36970
Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:360
Expected error:
<*errors.errorString | 0xc420640140>: {
s: "service verification failed for: 10.75.252.253\nexpected [service2-9zdj1 service2-r59nr service2-wmf46]\nreceived [wget: download timed out]",
}
service verification failed for: 10.75.252.253
expected [service2-9zdj1 service2-r59nr service2-wmf46]
received [wget: download timed out]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:335
Issues about this test specifically: #26128 #26685 #33408 #36298
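The "service verification failed" message above reflects a loop that fetches the service's cluster IP from inside a pod and checks that the responding hostnames cover the expected endpoints. A stubbed sketch of that check (the real test runs wget in a pod; here `fetch_via_service` is a fake that times out, and the IP is copied from the log):

```shell
expected="service2-9zdj1 service2-r59nr service2-wmf46"

fetch_via_service() {              # stand-in for: wget -q -O - http://$SERVICE_IP
  echo "wget: download timed out"  # simulated failure
}

received=$(fetch_via_service)
case " $expected " in
  *" $received "*) echo "service verification passed" ;;
  *)               echo "service verification failed for: 10.75.252.253" ;;
esac
```

When every fetch times out instead of returning a pod name, the received set never matches the expected endpoints, producing exactly the error quoted in the log.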
Multiple broken tests:
Failed: [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:190
Expected error:
<*errors.errorString | 0xc420754e50>: {
s: "err waiting for DNS replicas to satisfy 9, got 5: timed out waiting for the condition",
}
err waiting for DNS replicas to satisfy 9, got 5: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:189
Issues about this test specifically: #36569 #38446
Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:360
Expected error:
<*errors.errorString | 0xc420acc130>: {
s: "service verification failed for: 10.75.253.22\nexpected [service1-dm43k service1-frn9h service1-pn8mz]\nreceived [wget: download timed out]",
}
service verification failed for: 10.75.253.22
expected [service1-dm43k service1-frn9h service1-pn8mz]
received [wget: download timed out]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:332
Issues about this test specifically: #26128 #26685 #33408 #36298
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:467
Expected error:
<*errors.errorString | 0xc42043b540>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68
Issues about this test specifically: #37056
Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:348
Expected error:
<*errors.errorString | 0xc4203fb780>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:347
Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:308
Expected error:
<exec.CodeExitError>: {
Err: {
s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.25.6 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-xtfxz execpod-sourceip-gke-bootstrap-e2e-default-pool-6e4213d3-i535q8 -- /bin/sh -c wget -T 30 -qO- 10.75.240.148:8080 | grep client_address] [] <nil> wget: download timed out\n [] <nil> 0xc4210f1f20 exit status 1 <nil> <nil> true [0xc4203ce638 0xc4203ce6c8 0xc4203ce730] [0xc4203ce638 0xc4203ce6c8 0xc4203ce730] [0xc4203ce698 0xc4203ce720] [0xbdb8f0 0xbdb8f0] 0xc420f81260 <nil>}:\nCommand stdout:\n\nstderr:\nwget: download timed out\n\nerror:\nexit status 1\n",
},
Code: 1,
}
error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.25.6 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-xtfxz execpod-sourceip-gke-bootstrap-e2e-default-pool-6e4213d3-i535q8 -- /bin/sh -c wget -T 30 -qO- 10.75.240.148:8080 | grep client_address] [] <nil> wget: download timed out
[] <nil> 0xc4210f1f20 exit status 1 <nil> <nil> true [0xc4203ce638 0xc4203ce6c8 0xc4203ce730] [0xc4203ce638 0xc4203ce6c8 0xc4203ce730] [0xc4203ce698 0xc4203ce720] [0xbdb8f0 0xbdb8f0] 0xc420f81260 <nil>}:
Command stdout:
stderr:
wget: download timed out
error:
exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2912
Issues about this test specifically: #31085 #34207 #37097
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Dec 18 10:03:28.876: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.72.0.47:8080/hostName
retrieved map[]
expected map[netserver-5:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263
Issues about this test specifically: #33631 #33995 #34970
Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
<*errors.errorString | 0xc420eeaac0>: {
s: "failed to wait for pods responding: timed out waiting for the condition",
}
failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:478
Issues about this test specifically: #26509 #26834 #29780 #35355 #38275
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Dec 18 10:41:40.651: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.72.6.95:8080/dial?request=hostName&protocol=http&host=10.72.0.59&port=8080&tries=1'
retrieved map[]
expected map[netserver-2:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210
Issues about this test specifically: #32375
Multiple broken tests:
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Dec 20 08:45:33.310: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.72.5.19:8080/hostName
retrieved map[]
expected map[netserver-0:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263
Issues about this test specifically: #33631 #33995 #34970
Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Dec 20 08:43:51.808: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:163
Issues about this test specifically: #32023
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:368
Dec 20 08:49:20.651: Cannot add new entry in 180 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1587
Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774
Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
<*errors.errorString | 0xc420905f20>: {
s: "failed to wait for pods responding: timed out waiting for the condition",
}
failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1002
Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458
Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:135
Expected error:
<*errors.errorString | 0xc4203c2270>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68
Issues about this test specifically: #32467 #36276
Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:336
Dec 20 08:46:17.425: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1962
Issues about this test specifically: #26425 #26715 #28825 #28880 #32854
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Dec 20 08:45:38.581: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.72.5.27 8081
retrieved map[]
expected map[netserver-1:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263
Issues about this test specifically: #35283 #36867
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1966/
Multiple broken tests:
Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
<*errors.StatusError | 0xc4202b6680>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-scz2q--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
Reason: "Forbidden",
Details: {
Name: "e2e-tests-node-problem-detector-scz2q--cluster-admin",
Group: "rbac.authorization.k8s.io",
Kind: "clusterrolebindings",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 403,
},
}
clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-scz2q--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84
Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145
Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
<*errors.StatusError | 0xc4206a0d00>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-d3f1q--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
Reason: "Forbidden",
Details: {
Name: "e2e-tests-prestop-d3f1q--cluster-admin",
Group: "rbac.authorization.k8s.io",
Kind: "clusterrolebindings",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 403,
},
}
clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-d3f1q--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190
Issues about this test specifically: #30287 #35953
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
<*errors.StatusError | 0xc4209a2900>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-qq0kx--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
Reason: "Forbidden",
Details: {
Name: "e2e-tests-ingress-qq0kx--cluster-admin",
Group: "rbac.authorization.k8s.io",
Kind: "clusterrolebindings",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 403,
},
}
clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-qq0kx--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97
Issues about this test specifically: #38556
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1967/
Multiple broken tests:
Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
<*errors.StatusError | 0xc4209fce80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-gw535--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
Reason: "Forbidden",
Details: {
Name: "e2e-tests-ingress-gw535--cluster-admin",
Group: "rbac.authorization.k8s.io",
Kind: "clusterrolebindings",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 403,
},
}
clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-gw535--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97
Issues about this test specifically: #38556
Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
<*errors.StatusError | 0xc420affd80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-hfdt1--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
Reason: "Forbidden",
Details: {
Name: "e2e-tests-node-problem-detector-hfdt1--cluster-admin",
Group: "rbac.authorization.k8s.io",
Kind: "clusterrolebindings",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 403,
},
}
clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-hfdt1--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84
Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145
Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
<*errors.StatusError | 0xc420efee00>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-kclth--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
Reason: "Forbidden",
Details: {
Name: "e2e-tests-prestop-kclth--cluster-admin",
Group: "rbac.authorization.k8s.io",
Kind: "clusterrolebindings",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 403,
},
}
clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-kclth--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190
Issues about this test specifically: #30287 #35953
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1968/
Multiple broken tests:
Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Dec 22 07:27:42.080: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:163
Issues about this test specifically: #32023
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
<*errors.StatusError | 0xc4203eba00>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-k8gt3--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
Reason: "Forbidden",
Details: {
Name: "e2e-tests-prestop-k8gt3--cluster-admin",
Group: "rbac.authorization.k8s.io",
Kind: "clusterrolebindings",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 403,
},
}
clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-k8gt3--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190
Issues about this test specifically: #30287 #35953
Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:336
Dec 22 07:28:21.383: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1962
Issues about this test specifically: #26425 #26715 #28825 #28880 #32854
Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:360
Expected error:
<*errors.errorString | 0xc420e44ed0>: {
s: "service verification failed for: 10.75.254.238\nexpected [service1-ncsjp service1-r66w3 service1-sgjsl]\nreceived [service1-r66w3 service1-sgjsl]",
}
service verification failed for: 10.75.254.238
expected [service1-ncsjp service1-r66w3 service1-sgjsl]
received [service1-r66w3 service1-sgjsl]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:332
Issues about this test specifically: #26128 #26685 #33408 #36298
Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
<*errors.StatusError | 0xc421361080>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-6z2p0--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
Reason: "Forbidden",
Details: {
Name: "e2e-tests-ingress-6z2p0--cluster-admin",
Group: "rbac.authorization.k8s.io",
Kind: "clusterrolebindings",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 403,
},
}
clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-6z2p0--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97
Issues about this test specifically: #38556
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Dec 22 07:35:43.791: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.72.5.52:8080/dial?request=hostName&protocol=udp&host=10.72.3.68&port=8081&tries=1'
retrieved map[]
expected map[netserver-8:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210
Issues about this test specifically: #32830
Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:467
Expected error:
<*errors.errorString | 0xc4203c34f0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68
Issues about this test specifically: #37056
Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
<*errors.StatusError | 0xc420387a80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-8ml0l--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
Reason: "Forbidden",
Details: {
Name: "e2e-tests-node-problem-detector-8ml0l--cluster-admin",
Group: "rbac.authorization.k8s.io",
Kind: "clusterrolebindings",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 403,
},
}
clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-8ml0l--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84
Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1969/
Multiple broken tests:
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
<*errors.StatusError | 0xc420d3ae80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-sv0tn--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
Reason: "Forbidden",
Details: {
Name: "e2e-tests-node-problem-detector-sv0tn--cluster-admin",
Group: "rbac.authorization.k8s.io",
Kind: "clusterrolebindings",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 403,
},
}
clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-sv0tn--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84
Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145
Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
<*errors.StatusError | 0xc4202b5380>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-1xdqf--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
Reason: "Forbidden",
Details: {
Name: "e2e-tests-prestop-1xdqf--cluster-admin",
Group: "rbac.authorization.k8s.io",
Kind: "clusterrolebindings",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 403,
},
}
clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-1xdqf--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190
Issues about this test specifically: #30287 #35953
Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
<*errors.StatusError | 0xc4213fd380>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-hmfrm--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
Reason: "Forbidden",
Details: {
Name: "e2e-tests-ingress-hmfrm--cluster-admin",
Group: "rbac.authorization.k8s.io",
Kind: "clusterrolebindings",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 403,
},
}
clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-hmfrm--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97
Issues about this test specifically: #38556
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1970/
Multiple broken tests:
Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
<*errors.StatusError | 0xc42039e380>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-lr8mg--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
Reason: "Forbidden",
Details: {
Name: "e2e-tests-prestop-lr8mg--cluster-admin",
Group: "rbac.authorization.k8s.io",
Kind: "clusterrolebindings",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 403,
},
}
clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-lr8mg--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190
Issues about this test specifically: #30287 #35953
Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
<*errors.StatusError | 0xc42047d400>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-w9cks--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
Reason: "Forbidden",
Details: {
Name: "e2e-tests-ingress-w9cks--cluster-admin",
Group: "rbac.authorization.k8s.io",
Kind: "clusterrolebindings",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 403,
},
}
clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-w9cks--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97
Issues about this test specifically: #38556
Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
<*errors.StatusError | 0xc420e4a480>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-xsbbp--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
Reason: "Forbidden",
Details: {
Name: "e2e-tests-node-problem-detector-xsbbp--cluster-admin",
Group: "rbac.authorization.k8s.io",
Kind: "clusterrolebindings",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 403,
},
}
clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-xsbbp--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84
Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
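The failures in this run all share one signature: a 403 Forbidden from the RBAC escalation check ("attempt to grant extra privileges") when the test framework tries to create a `--cluster-admin` ClusterRoleBinding, while later runs below fail with a 404 NotFound instead. A small, hypothetical helper for bucketing these Gomega-dumped `StatusError` messages when triaging a log like this one (the bucket names and function are illustrative, not part of the e2e framework):

```python
import re

def classify_failure(message: str) -> str:
    """Bucket an e2e StatusError message into a rough flake category (illustrative)."""
    if "is forbidden: attempt to grant extra privileges" in message:
        # 403: the caller lacks the permissions it is trying to grant (RBAC escalation check)
        return "rbac-escalation"
    if "the server could not find the requested resource" in message:
        # 404: the API resource/version the test expects is not served
        return "api-notfound"
    if re.search(r"use of closed network connection|error dialing backend", message):
        # proxy/exec failures going through the GKE SSH tunnel to nodes
        return "ssh-tunnel"
    return "other"

print(classify_failure(
    'clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-w9cks--cluster-admin" '
    'is forbidden: attempt to grant extra privileges'
))  # → rbac-escalation
```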
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1971/
Multiple broken tests:
Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
<*errors.StatusError | 0xc420e18500>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-ingress-4whzq--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
Reason: "Forbidden",
Details: {
Name: "e2e-tests-ingress-4whzq--cluster-admin",
Group: "rbac.authorization.k8s.io",
Kind: "clusterrolebindings",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 403,
},
}
clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-ingress-4whzq--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:97
Issues about this test specifically: #38556
Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
<*errors.StatusError | 0xc420372300>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-node-problem-detector-cxp2z--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
Reason: "Forbidden",
Details: {
Name: "e2e-tests-node-problem-detector-cxp2z--cluster-admin",
Group: "rbac.authorization.k8s.io",
Kind: "clusterrolebindings",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 403,
},
}
clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-node-problem-detector-cxp2z--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:84
Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
<*errors.StatusError | 0xc4202bff80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "clusterrolebindings.rbac.authorization.k8s.io \"e2e-tests-prestop-q9zhk--cluster-admin\" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]",
Reason: "Forbidden",
Details: {
Name: "e2e-tests-prestop-q9zhk--cluster-admin",
Group: "rbac.authorization.k8s.io",
Kind: "clusterrolebindings",
Causes: nil,
RetryAfterSeconds: 0,
},
Code: 403,
},
}
clusterrolebindings.rbac.authorization.k8s.io "e2e-tests-prestop-q9zhk--cluster-admin" is forbidden: attempt to grant extra privileges: [{[*] <nil> [*] [*] [] []} {[*] <nil> [] [] [] [*]}] user=&{kubekins@kubernetes-jenkins.iam.gserviceaccount.com [system:authenticated] map[]} ownerrules=[{[create] <nil> [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] <nil> [] [] [] [/api /api/* /apis /apis/* /version]}] ruleResolutionErrors=[]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:190
Issues about this test specifically: #30287 #35953
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1972/
Multiple broken tests:
Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
<*errors.StatusError | 0xc42039e100>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102
Issues about this test specifically: #38556
Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
<*errors.StatusError | 0xc4201cf500>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195
Issues about this test specifically: #30287 #35953
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
<*errors.StatusError | 0xc420dcfe00>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89
Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1973/
Multiple broken tests:
Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
<*errors.StatusError | 0xc421261000>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89
Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145
Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
<*errors.StatusError | 0xc421014680>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102
Issues about this test specifically: #38556
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
<*errors.StatusError | 0xc4210b7180>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195
Issues about this test specifically: #30287 #35953
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1974/
Multiple broken tests:
Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
<*errors.StatusError | 0xc421262200>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89
Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145
Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
<*errors.StatusError | 0xc421079680>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195
Issues about this test specifically: #30287 #35953
Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
<*errors.StatusError | 0xc4211b5400>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102
Issues about this test specifically: #38556
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1975/
Multiple broken tests:
Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
<*errors.StatusError | 0xc420fa9b80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195
Issues about this test specifically: #30287 #35953
Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
<*errors.StatusError | 0xc420237000>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89
Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
<*errors.StatusError | 0xc4201ee380>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102
Issues about this test specifically: #38556
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1976/
Multiple broken tests:
Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
<*errors.StatusError | 0xc4206e4d00>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195
Issues about this test specifically: #30287 #35953
Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
<*errors.StatusError | 0xc42009ab80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89
Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145
Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
<*errors.StatusError | 0xc4210b6380>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102
Issues about this test specifically: #38556
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
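Each failure above carries an "Issues about this test specifically: #..." trailer. When the same handful of issue numbers keeps recurring across builds, counting those references is a quick way to see which known flakes dominate. A minimal sketch of such a counter (hypothetical helper, not part of any Kubernetes tooling):

```python
import re
from collections import Counter

def count_issue_refs(log: str) -> Counter:
    """Tally issue numbers cited in 'Issues about this test specifically:' lines."""
    counts = Counter()
    for line in log.splitlines():
        if line.startswith("Issues about this test specifically:"):
            counts.update(re.findall(r"#(\d+)", line))
    return counts

sample = """Issues about this test specifically: #38556
Issues about this test specifically: #33361 #38663
Issues about this test specifically: #38556"""
print(count_issue_refs(sample).most_common(1))  # → [('38556', 2)]
```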
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1977/
Multiple broken tests:
Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
<*errors.StatusError | 0xc420c1f480>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195
Issues about this test specifically: #30287 #35953
Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
<*errors.StatusError | 0xc4201c5500>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89
Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
<*errors.StatusError | 0xc420a39c00>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102
Issues about this test specifically: #38556
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1978/
Multiple broken tests:
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
<*errors.StatusError | 0xc42101b300>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195
Issues about this test specifically: #30287 #35953
Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
<*errors.StatusError | 0xc420c33500>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102
Issues about this test specifically: #38556
Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
<*errors.StatusError | 0xc4201d9680>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89
Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/1979/
Multiple broken tests:
Failed: [k8s.io] Loadbalancing: L7 [k8s.io] Nginx should conform to Ingress spec {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:103
Expected error:
<*errors.StatusError | 0xc421109300>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ingress.go:102
Issues about this test specifically: #38556
Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:90
Expected error:
<*errors.StatusError | 0xc420d8e300>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:89
Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145
Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:196
Expected error:
<*errors.StatusError | 0xc420983b80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server could not find the requested resource",
Reason: "NotFound",
Details: {Name: "", Group: "", Kind: "", Causes: nil, RetryAfterSeconds: 0},
Code: 404,
},
}
the server could not find the requested resource
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:195
Issues about this test specifically: #30287 #35953
Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1048
Dec 22 12:39:21.972: expected node port (32467) to not be in use in 5m0s, stdout:
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1039
Issues about this test specifically: #37274
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/2018/
Multiple broken tests:
Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:308
Expected error:
<exec.CodeExitError>: {
Err: {
s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.198.100 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-njmxs execpod-sourceip-gke-bootstrap-e2e-default-pool-5d78cb74-mxrj3k -- /bin/sh -c wget -T 30 -qO- 10.75.249.223:8080 | grep client_address] [] <nil> wget: download timed out\n [] <nil> 0xc4213ca4b0 exit status 1 <nil> <nil> true [0xc4206a4008 0xc4206a4020 0xc4206a4040] [0xc4206a4008 0xc4206a4020 0xc4206a4040] [0xc4206a4018 0xc4206a4030] [0xbe79c0 0xbe79c0] 0xc420f7a4e0 <nil>}:\nCommand stdout:\n\nstderr:\nwget: download timed out\n\nerror:\nexit status 1\n",
},
Code: 1,
}
error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.154.198.100 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-njmxs execpod-sourceip-gke-bootstrap-e2e-default-pool-5d78cb74-mxrj3k -- /bin/sh -c wget -T 30 -qO- 10.75.249.223:8080 | grep client_address] [] <nil> wget: download timed out
[] <nil> 0xc4213ca4b0 exit status 1 <nil> <nil> true [0xc4206a4008 0xc4206a4020 0xc4206a4040] [0xc4206a4008 0xc4206a4020 0xc4206a4040] [0xc4206a4018 0xc4206a4030] [0xbe79c0 0xbe79c0] 0xc420f7a4e0 <nil>}:
Command stdout:
stderr:
wget: download timed out
error:
exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2912
Issues about this test specifically: #31085 #34207 #37097
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
<*errors.errorString | 0xc420ea8740>: {
s: "failed to wait for pods responding: timed out waiting for the condition",
}
failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:889
Issues about this test specifically: #29629 #36270 #37462
Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:360
Expected error:
<*errors.errorString | 0xc4209fe4f0>: {
s: "service verification failed for: 10.75.251.164\nexpected [service3-3lhd2 service3-91cdl service3-ct4g3]\nreceived [service3-3lhd2 service3-91cdl]",
}
service verification failed for: 10.75.251.164
expected [service3-3lhd2 service3-91cdl service3-ct4g3]
received [service3-3lhd2 service3-91cdl]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:359
Issues about this test specifically: #26128 #26685 #33408 #36298
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Dec 23 06:58:42.628: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.72.1.96:8080/dial?request=hostName&protocol=http&host=10.72.8.65&port=8080&tries=1'
retrieved map[]
expected map[netserver-5:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210
Issues about this test specifically: #32375
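The intra-pod networking check above probes a `/dial` endpoint and compares the hostnames it actually reached (`retrieved map[]`) against the expected set (`expected map[netserver-5:{}]`); here the probe reached nothing. The comparison itself amounts to a set difference, sketched below with illustrative names (this is not the e2e framework's actual API):

```python
def missing_endpoints(expected: set, retrieved: set) -> set:
    """Return the expected endpoints the /dial probe never reached."""
    return expected - retrieved

# Mirrors the failure above: expected netserver-5, retrieved nothing.
print(missing_endpoints({"netserver-5"}, set()))  # → {'netserver-5'}
```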
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/2073/
Multiple broken tests:
Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:67
Expected error:
<*errors.StatusError | 0xc420655d00>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Error: 'write tcp 10.240.0.5:38032->35.184.87.67:22: use of closed network connection'\\nTrying to reach: 'http://gke-bootstrap-e2e-default-pool-4a917edd-cdb7:4194/containers/'\") has prevented the request from succeeding",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Error: 'write tcp 10.240.0.5:38032->35.184.87.67:22: use of closed network connection'\nTrying to reach: 'http://gke-bootstrap-e2e-default-pool-4a917edd-cdb7:4194/containers/'",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 503,
},
}
an error on the server ("Error: 'write tcp 10.240.0.5:38032->35.184.87.67:22: use of closed network connection'\nTrying to reach: 'http://gke-bootstrap-e2e-default-pool-4a917edd-cdb7:4194/containers/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:326
Issues about this test specifically: #37435
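The cadvisor proxy test reaches port 4194 on the node through the apiserver's node proxy subresource. Assuming the standard subresource path layout (an assumption, not taken from this log), the path it builds for the node in this failure looks roughly like:

```shell
# Approximate apiserver proxy-subresource path to the node's cadvisor
# endpoint (port 4194, /containers/). Node name is from this run; the
# exact path shape is an assumption about the subresource layout.
NODE=gke-bootstrap-e2e-default-pool-4a917edd-cdb7
PROXY_PATH="/api/v1/nodes/${NODE}:4194/proxy/containers/"
echo "$PROXY_PATH"
# Possible live repro:  kubectl get --raw "$PROXY_PATH"
```

The `use of closed network connection` on the `:22` hop suggests the failure is in the apiserver's SSH tunnel to the node, not in cadvisor itself.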
Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet_etc_hosts.go:54
failed to execute command in pod test-host-network-pod, container busybox-2: error dialing backend: ssh: unexpected packet in response to channel open: <nil>
Expected error:
<*errors.StatusError | 0xc420b30280>: {
ErrStatus: {
TypeMeta: {Kind: "Status", APIVersion: "v1"},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "error dialing backend: ssh: unexpected packet in response to channel open: <nil>",
Reason: "",
Details: nil,
Code: 500,
},
}
error dialing backend: ssh: unexpected packet in response to channel open: <nil>
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:107
Issues about this test specifically: #37502
Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:438
Expected error:
<*errors.errorString | 0xc420414d10>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:220
Issues about this test specifically: #28337
Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:360
Expected error:
<*errors.errorString | 0xc420e52240>: {
s: "service verification failed for: 10.75.248.6\nexpected [service3-65vgk service3-q10tt service3-zqdzp]\nreceived [service3-65vgk service3-q10tt]",
}
service verification failed for: 10.75.248.6
expected [service3-65vgk service3-q10tt service3-zqdzp]
received [service3-65vgk service3-q10tt]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:359
Issues about this test specifically: #26128 #26685 #33408 #36298
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Dec 24 12:31:21.710: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.72.7.35:8080/dial?request=hostName&protocol=udp&host=10.72.1.33&port=8081&tries=1'
retrieved map[]
expected map[netserver-4:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210
Issues about this test specifically: #32830
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/2132/
Multiple broken tests:
Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should handle in-cluster config {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:608
Expected
<string>:
to contain substring
<string>: No resources found
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:601
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Dec 25 16:00:47.237: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.72.0.34:8080/dial?request=hostName&protocol=http&host=10.72.4.34&port=8080&tries=1'
retrieved map[]
expected map[netserver-8:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210
Issues about this test specifically: #32375
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:68
Expected error:
<*errors.errorString | 0xc421819120>: {
s: "failed to wait for pods responding: timed out waiting for the condition",
}
failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:327
Issues about this test specifically: #31075 #36286 #38041
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
<*errors.StatusError | 0xc420f80e80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://10.72.5.16:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'\") has prevented the request from succeeding (post services rc-light-ctrl)",
Reason: "InternalError",
Details: {
Name: "rc-light-ctrl",
Group: "",
Kind: "services",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Error: 'EOF'\nTrying to reach: 'http://10.72.5.16:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 503,
},
}
an error on the server ("Error: 'EOF'\nTrying to reach: 'http://10.72.5.16:8080/ConsumeCPU?durationSec=30&millicores=50&requestSizeMillicores=20'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:214
Issues about this test specifically: #27196 #28998 #32403 #33341
Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Dec 25 15:57:15.206: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:163
Issues about this test specifically: #30981
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Dec 25 15:53:30.877: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.72.4.35 8081
retrieved map[]
expected map[netserver-8:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263
Issues about this test specifically: #35283 #36867
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Dec 25 16:00:54.815: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.72.4.87:8080/hostName
retrieved map[]
expected map[netserver-5:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263
Issues about this test specifically: #33631 #33995 #34970
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Dec 25 16:01:23.096: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.72.2.48:8080/dial?request=hostName&protocol=udp&host=10.72.4.42&port=8081&tries=1'
retrieved map[]
expected map[netserver-4:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210
Issues about this test specifically: #32830
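The `retrieved map[...]` / `expected map[...]` lines express the framework's pass rule: the set of hostnames the dial probes collected must equal the set of expected endpoint pod names. A rough shell illustration of that rule (not the actual Go framework code):

```shell
# endpoints_ok <retrieved> <expected>: succeed only when the hostnames
# the probes returned match the expected endpoint names exactly.
# An empty first argument corresponds to "retrieved map[]" in the log.
endpoints_ok() {
  [ "$1" = "$2" ]
}
endpoints_ok "" "netserver-4" && echo PASS || echo FAIL            # this failure
endpoints_ok "netserver-4" "netserver-4" && echo PASS || echo FAIL # healthy case
```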
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/2740/
Multiple broken tests:
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1142
Expected error:
<exec.CodeExitError>: {
Err: {
s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.237.24 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-vswd7] [] <nil> Unable to connect to the server: dial tcp 104.197.237.24:443: i/o timeout\n [] <nil> 0xc420e5d020 exit status 1 <nil> <nil> true [0xc420036c70 0xc420036c90 0xc420036cc0] [0xc420036c70 0xc420036c90 0xc420036cc0] [0xc420036c88 0xc420036ca8] [0xbf3420 0xbf3420] 0xc420eb7aa0 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 104.197.237.24:443: i/o timeout\n\nerror:\nexit status 1\n",
},
Code: 1,
}
error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.237.24 --kubeconfig=/workspace/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=gcr.io/google_containers/nginx-slim:0.7 --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-vswd7] [] <nil> Unable to connect to the server: dial tcp 104.197.237.24:443: i/o timeout
[] <nil> 0xc420e5d020 exit status 1 <nil> <nil> true [0xc420036c70 0xc420036c90 0xc420036cc0] [0xc420036c70 0xc420036c90 0xc420036cc0] [0xc420036c88 0xc420036ca8] [0xbf3420 0xbf3420] 0xc420eb7aa0 <nil>}:
Command stdout:
stderr:
Unable to connect to the server: dial tcp 104.197.237.24:443: i/o timeout
error:
exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:173
Issues about this test specifically: #26138 #28429 #28737 #38064
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1106
Expected error:
<exec.CodeExitError>: {
Err: {
s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.237.24 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-rc --image=gcr.io/google_containers/nginx-slim:0.7 --generator=run/v1 --namespace=e2e-tests-kubectl-2tj3b] [] <nil> Unable to connect to the server: dial tcp 104.197.237.24:443: i/o timeout\n [] <nil> 0xc420f9a7b0 exit status 1 <nil> <nil> true [0xc420d7e1d0 0xc420d7e1e8 0xc420d7e200] [0xc420d7e1d0 0xc420d7e1e8 0xc420d7e200] [0xc420d7e1e0 0xc420d7e1f8] [0xbf3420 0xbf3420] 0xc4206a3b00 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: dial tcp 104.197.237.24:443: i/o timeout\n\nerror:\nexit status 1\n",
},
Code: 1,
}
error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.237.24 --kubeconfig=/workspace/.kube/config run e2e-test-nginx-rc --image=gcr.io/google_containers/nginx-slim:0.7 --generator=run/v1 --namespace=e2e-tests-kubectl-2tj3b] [] <nil> Unable to connect to the server: dial tcp 104.197.237.24:443: i/o timeout
[] <nil> 0xc420f9a7b0 exit status 1 <nil> <nil> true [0xc420d7e1d0 0xc420d7e1e8 0xc420d7e200] [0xc420d7e1d0 0xc420d7e1e8 0xc420d7e200] [0xc420d7e1e0 0xc420d7e1f8] [0xbf3420 0xbf3420] 0xc4206a3b00 <nil>}:
Command stdout:
stderr:
Unable to connect to the server: dial tcp 104.197.237.24:443: i/o timeout
error:
exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2044
Issues about this test specifically: #28507 #29315 #35595
Failed: list nodes {e2e.go}
exit status 1
Issues about this test specifically: #38667
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/2887/
Multiple broken tests:
Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:154
Expected error:
<*errors.errorString | 0xc4210121c0>: {
s: "failed to update pod \"annotationupdate5a9af50c-d6c0-11e6-bc67-0242ac110007\": the server cannot complete the requested operation at this time, try again later (put pods annotationupdate5a9af50c-d6c0-11e6-bc67-0242ac110007)",
}
failed to update pod "annotationupdate5a9af50c-d6c0-11e6-bc67-0242ac110007": the server cannot complete the requested operation at this time, try again later (put pods annotationupdate5a9af50c-d6c0-11e6-bc67-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:111
Issues about this test specifically: #28462 #33782 #34014 #37374
Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Expected error:
<*errors.errorString | 0xc420414a60>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:230
Issues about this test specifically: #29221
Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:66
Expected error:
<*url.Error | 0xc42144a5d0>: {
Op: "Get",
URL: "https://104.198.128.145/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-dns%2Cmetadata.namespace%3Dkube-system",
Err: {
Op: "dial",
Net: "tcp",
Source: nil,
Addr: {
IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 198, 128, 145],
Port: 443,
Zone: "",
},
Err: {
Syscall: "getsockopt",
Err: 0x6f,
},
},
}
Get https://104.198.128.145/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-dns%2Cmetadata.namespace%3Dkube-system: dial tcp 104.198.128.145:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_configmap.go:228
Issues about this test specifically: #37144
Failed: [k8s.io] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:190
Expected error:
<*errors.errorString | 0xc420225ed0>: {
s: "err waiting for DNS replicas to satisfy 9, got 0: the server cannot complete the requested operation at this time, try again later (get deployments.extensions)",
}
err waiting for DNS replicas to satisfy 9, got 0: the server cannot complete the requested operation at this time, try again later (get deployments.extensions)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns_autoscaling.go:151
Issues about this test specifically: #36569 #38446
Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:107
Expected error:
<*errors.StatusError | 0xc42147a580>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server has asked for the client to provide credentials (get pods)",
Reason: "Unauthorized",
Details: {
Name: "",
Group: "",
Kind: "pods",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Unauthorized",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 401,
},
}
the server has asked for the client to provide credentials (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:853
Issues about this test specifically: #37361 #37919
Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:329
Expected error:
<exec.CodeExitError>: {
Err: {
s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.128.145 --kubeconfig=/workspace/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-46pcs] [] 0xc421979ea0 warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: error when stopping \"STDIN\": Scaling the resource failed with: the server cannot complete the requested operation at this time, try again later (put replicationcontrollers update-demo-nautilus); Current resource version 12839\n [] <nil> 0xc421277ce0 exit status 1 <nil> <nil> true [0xc421680120 0xc421680148 0xc421680158] [0xc421680120 0xc421680148 0xc421680158] [0xc421680128 0xc421680140 0xc421680150] [0xbf4fd0 0xbf50d0 0xbf50d0] 0xc421071b00 <nil>}:\nCommand stdout:\n\nstderr:\nwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: error when stopping \"STDIN\": Scaling the resource failed with: the server cannot complete the requested operation at this time, try again later (put replicationcontrollers update-demo-nautilus); Current resource version 12839\n\nerror:\nexit status 1\n",
},
Code: 1,
}
error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.128.145 --kubeconfig=/workspace/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-46pcs] [] 0xc421979ea0 warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
error: error when stopping "STDIN": Scaling the resource failed with: the server cannot complete the requested operation at this time, try again later (put replicationcontrollers update-demo-nautilus); Current resource version 12839
[] <nil> 0xc421277ce0 exit status 1 <nil> <nil> true [0xc421680120 0xc421680148 0xc421680158] [0xc421680120 0xc421680148 0xc421680158] [0xc421680128 0xc421680140 0xc421680150] [0xbf4fd0 0xbf50d0 0xbf50d0] 0xc421071b00 <nil>}:
Command stdout:
stderr:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
error: error when stopping "STDIN": Scaling the resource failed with: the server cannot complete the requested operation at this time, try again later (put replicationcontrollers update-demo-nautilus); Current resource version 12839
error:
exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2044
Issues about this test specifically: #28437 #29084 #29256 #29397 #36671
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any stateful pod is unhealthy {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:348
Expected error:
<*errors.StatusError | 0xc4214ac900>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server has asked for the client to provide credentials (get pods)",
Reason: "Unauthorized",
Details: {
Name: "",
Group: "",
Kind: "pods",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Unauthorized",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 401,
},
}
the server has asked for the client to provide credentials (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:853
Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Jan 9 15:18:11.668: Couldn't delete ns: "e2e-tests-resourcequota-9ksq4": the server has asked for the client to provide credentials (get ingresses.extensions) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server has asked for the client to provide credentials (get ingresses.extensions)", Reason:"Unauthorized", Details:(*v1.StatusDetails)(0xc4215e95e0), Code:401}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354
Issues about this test specifically: #31158 #34303
Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:271
Expected error:
<*errors.errorString | 0xc420e160e0>: {
s: "Failed to get pod \"ss-1\": Get https://104.198.128.145/api/v1/namespaces/e2e-tests-statefulset-fx745/pods/ss-1: read tcp 172.17.0.7:44889->104.198.128.145:443: read: connection timed out",
}
Failed to get pod "ss-1": Get https://104.198.128.145/api/v1/namespaces/e2e-tests-statefulset-fx745/pods/ss-1: read tcp 172.17.0.7:44889->104.198.128.145:443: read: connection timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:939
Failed: [k8s.io] Deployment paused deployment should be ignored by the controller {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Jan 9 15:16:27.355: Couldn't delete ns: "e2e-tests-deployment-29pq0": namespace e2e-tests-deployment-29pq0 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-deployment-29pq0 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354
Issues about this test specifically: #28067 #28378 #32692 #33256 #34654
Failed: [k8s.io] Deployment deployment reaping should cascade to its replica sets and pods {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
<errors.aggregate | len:1, cap:1>: [
{
FailureType: 1,
ResourceVersion: "12883",
ActualError: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server cannot complete the requested operation at this time, try again later (put replicasets.extensions test-new-deployment-3469597609)",
Reason: "ServerTimeout",
Details: {
Name: "test-new-deployment-3469597609",
Group: "extensions",
Kind: "replicasets",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "{\"ErrStatus\":{\"metadata\":{},\"status\":\"Failure\",\"message\":\"The operation against could not be completed at this time, please try again.\",\"reason\":\"ServerTimeout\",\"details\":{},\"code\":500}}",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 504,
},
},
},
]
Scaling the resource failed with: the server cannot complete the requested operation at this time, try again later (put replicasets.extensions test-new-deployment-3469597609); Current resource version 12883
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:189
Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Jan 9 15:18:11.668: Couldn't delete ns: "e2e-tests-init-container-dhcdq": the server has asked for the client to provide credentials (get secrets) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server has asked for the client to provide credentials (get secrets)", Reason:"Unauthorized", Details:(*v1.StatusDetails)(0xc4206bb7c0), Code:401}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354
Issues about this test specifically: #31936
Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:371
Expected error:
<exec.CodeExitError>: {
Err: {
s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.128.145 --kubeconfig=/workspace/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zct6f] [] 0xc420f30100 warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: error when stopping \"STDIN\": Scaling the resource failed with: the server cannot complete the requested operation at this time, try again later (put replicasets.extensions redis-slave-3611072960); Current resource version 5183\n [] <nil> 0xc420ee66f0 exit status 1 <nil> <nil> true [0xc420036038 0xc420036148 0xc420036158] [0xc420036038 0xc420036148 0xc420036158] [0xc420036080 0xc420036140 0xc420036150] [0xbf4fd0 0xbf50d0 0xbf50d0] 0xc420d58420 <nil>}:\nCommand stdout:\n\nstderr:\nwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\nerror: error when stopping \"STDIN\": Scaling the resource failed with: the server cannot complete the requested operation at this time, try again later (put replicasets.extensions redis-slave-3611072960); Current resource version 5183\n\nerror:\nexit status 1\n",
},
Code: 1,
}
error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.128.145 --kubeconfig=/workspace/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zct6f] [] 0xc420f30100 warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
error: error when stopping "STDIN": Scaling the resource failed with: the server cannot complete the requested operation at this time, try again later (put replicasets.extensions redis-slave-3611072960); Current resource version 5183
[] <nil> 0xc420ee66f0 exit status 1 <nil> <nil> true [0xc420036038 0xc420036148 0xc420036158] [0xc420036038 0xc420036148 0xc420036158] [0xc420036080 0xc420036140 0xc420036150] [0xbf4fd0 0xbf50d0 0xbf50d0] 0xc420d58420 <nil>}:
Command stdout:
stderr:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
error: error when stopping "STDIN": Scaling the resource failed with: the server cannot complete the requested operation at this time, try again later (put replicasets.extensions redis-slave-3611072960); Current resource version 5183
error:
exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2044
Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774
Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:54
Expected error:
<*errors.errorString | 0xc420feaac0>: {
s: "failed to get logs from client-containers-a5cbe50c-d6bf-11e6-a311-0242ac110007 for test-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods client-containers-a5cbe50c-d6bf-11e6-a311-0242ac110007)",
}
failed to get logs from client-containers-a5cbe50c-d6bf-11e6-a311-0242ac110007 for test-container: an error on the server ("unknown") has prevented the request from succeeding (get pods client-containers-a5cbe50c-d6bf-11e6-a311-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2144
Issues about this test specifically: #29994
Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should handle in-cluster config {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:619
Expected error:
<exec.CodeExitError>: {
Err: {
s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.128.145 --kubeconfig=/workspace/.kube/config cp kubectl e2e-tests-kubectl-7lthk/nginx:/] [] <nil> Unable to connect to the server: unexpected EOF\n [] <nil> 0xc4215d88a0 exit status 1 <nil> <nil> true [0xc420488228 0xc420488240 0xc420488258] [0xc420488228 0xc420488240 0xc420488258] [0xc420488238 0xc420488250] [0xbf50d0 0xbf50d0] 0xc420dcaf00 <nil>}:\nCommand stdout:\n\nstderr:\nUnable to connect to the server: unexpected EOF\n\nerror:\nexit status 1\n",
},
Code: 1,
}
error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.128.145 --kubeconfig=/workspace/.kube/config cp kubectl e2e-tests-kubectl-7lthk/nginx:/] [] <nil> Unable to connect to the server: unexpected EOF
[] <nil> 0xc4215d88a0 exit status 1 <nil> <nil> true [0xc420488228 0xc420488240 0xc420488258] [0xc420488228 0xc420488240 0xc420488258] [0xc420488238 0xc420488250] [0xbf50d0 0xbf50d0] 0xc420dcaf00 <nil>}:
Command stdout:
stderr:
Unable to connect to the server: unexpected EOF
error:
exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2044
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/2956/
Multiple broken tests:
Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:62
Expected error:
<*errors.StatusError | 0xc420cf4400>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-2f8a02b0-jjld:10250/logs/'\") has prevented the request from succeeding",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Error: 'EOF'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-2f8a02b0-jjld:10250/logs/'",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 503,
},
}
an error on the server ("Error: 'EOF'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-2f8a02b0-jjld:10250/logs/'") has prevented the request from succeeding
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:326
Issues about this test specifically: #36242
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Expected error:
<*errors.StatusError | 0xc420e28180>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Error: 'EOF'\\nTrying to reach: 'http://10.72.5.23:8080/BumpMetric?delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10'\") has prevented the request from succeeding (post services rc-light-ctrl)",
Reason: "InternalError",
Details: {
Name: "rc-light-ctrl",
Group: "",
Kind: "services",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Error: 'EOF'\nTrying to reach: 'http://10.72.5.23:8080/BumpMetric?delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10'",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 503,
},
}
an error on the server ("Error: 'EOF'\nTrying to reach: 'http://10.72.5.23:8080/BumpMetric?delta=0&durationSec=30&metric=QPS&requestSizeMetrics=10'") has prevented the request from succeeding (post services rc-light-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:245
Issues about this test specifically: #27196 #28998 #32403 #33341
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Jan 11 00:23:48.863: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.72.8.27 8081
retrieved map[]
expected map[netserver-3:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:269
Issues about this test specifically: #35283 #36867
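The node-pod UDP failures above all follow the same pattern: the test sends the literal string `hostName` to the target pod's UDP port, accumulates whatever hostnames come back into a "retrieved" map, and compares it against the expected endpoint set (here `map[netserver-3:{}]`, retrieved empty). A minimal self-contained sketch of that probe logic, with a local UDP responder standing in for the netserver pod — names and the retry count are illustrative, not the e2e framework's code:

```python
import socket
import threading

def run_responder(host="127.0.0.1", port=0, hostname="netserver-3"):
    # Stand-in for the netserver pod: answers "hostName" with its pod name.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))

    def serve():
        while True:
            data, addr = sock.recvfrom(1024)
            if data.strip() == b"hostName":
                sock.sendto(hostname.encode(), addr)

    threading.Thread(target=serve, daemon=True).start()
    return sock.getsockname()

def probe(addr, tries=3, timeout=1.0):
    # Like the test's check: send "hostName", collect replies into a set
    # (the "retrieved map"); a silent try is simply skipped.
    retrieved = set()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    for _ in range(tries):
        try:
            sock.sendto(b"hostName", addr)
            data, _ = sock.recvfrom(1024)
            retrieved.add(data.decode())
        except OSError:  # timeout or ICMP port-unreachable: no answer this try
            continue
    return retrieved

if __name__ == "__main__":
    addr = run_responder()
    print(probe(addr))  # -> {'netserver-3'}
```

A failure like the one logged above corresponds to `probe()` returning an empty set — no datagram ever makes it back from the pod — which is why the "retrieved map[]" vs "expected map[netserver-3:{}]" diff shows up whenever node-to-pod UDP connectivity is broken.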
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Jan 11 00:27:50.506: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.72.8.53:8080/hostName
retrieved map[]
expected map[netserver-7:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:269
Issues about this test specifically: #33631 #33995 #34970
Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
<*errors.errorString | 0xc420bdb6e0>: {
s: "failed to wait for pods responding: timed out waiting for the condition",
}
failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1003
Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458
Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
<*errors.errorString | 0xc42095e1f0>: {
s: "failed to wait for pods responding: timed out waiting for the condition",
}
failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:889
Issues about this test specifically: #29629 #36270 #37462
Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
<*errors.errorString | 0xc421330410>: {
s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-11 00:26:02 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-11 00:26:33 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-11 00:26:02 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.3 PodIP:10.72.8.67 StartTime:2017-01-11 00:26:02 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2017-01-11 00:26:03 -0800 PST,FinishedAt:2017-01-11 00:26:33 -0800 PST,ContainerID:docker://3dd21449ddfe0b404114f07b763abf259b6ce065f3d5565ce089bf076c19580a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://3dd21449ddfe0b404114f07b763abf259b6ce065f3d5565ce089bf076c19580a}] QOSClass:BestEffort}",
}
pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-11 00:26:02 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-11 00:26:33 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-11 00:26:02 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.3 PodIP:10.72.8.67 StartTime:2017-01-11 00:26:02 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2017-01-11 00:26:03 -0800 PST,FinishedAt:2017-01-11 00:26:33 -0800 PST,ContainerID:docker://3dd21449ddfe0b404114f07b763abf259b6ce065f3d5565ce089bf076c19580a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://3dd21449ddfe0b404114f07b763abf259b6ce065f3d5565ce089bf076c19580a}] QOSClass:BestEffort}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48
Issues about this test specifically: #26171 #28188
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Jan 11 00:31:55.395: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.72.5.34:8080/dial?request=hostName&protocol=udp&host=10.72.8.33&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:216
Issues about this test specifically: #32830
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/2973/
Multiple broken tests:
Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:245
Jan 11 08:57:09.710: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:179
Issues about this test specifically: #26955
Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Jan 11 08:57:09.705: Couldn't delete ns: "e2e-tests-kubelet-w0p6n": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubelet-w0p6n/configmaps\": Post https://test-container.sandbox.googleapis.com/v1/masterProjects/365122390874/zones/us-central1-f/438315026343/bootstrap-e2e/authorize: read tcp 10.240.0.26:39414->74.125.69.81:443: read: connection reset by peer") has prevented the request from succeeding (get configmaps) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubelet-w0p6n/configmaps\\\": Post https://test-container.sandbox.googleapis.com/v1/masterProjects/365122390874/zones/us-central1-f/438315026343/bootstrap-e2e/authorize: read tcp 10.240.0.26:39414->74.125.69.81:443: read: connection reset by peer\") has prevented the request from succeeding (get configmaps)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc420fd5c20), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354
Issues about this test specifically: #28106 #35197 #37482
Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:498
Expected error:
<exec.CodeExitError>: {
Err: {
s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.157.34 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-fvl42 run -i --image=gcr.io/google_containers/busybox:1.24 --restart=Never failure-4 --leave-stdin-open -- /bin/sh -c exit 42] [] 0xc4209923e0 Error from server (InternalError): an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-fvl42/pods/failure-4/log?container=failure-4\\\": Post https://test-container.sandbox.googleapis.com/v1/masterProjects/365122390874/zones/us-central1-f/438315026343/bootstrap-e2e/authorize: read tcp 10.240.0.26:39414->74.125.69.81:443: read: connection reset by peer\") has prevented the request from succeeding (get pods failure-4)\n [] <nil> 0xc42075b320 exit status 1 <nil> <nil> true [0xc420036aa0 0xc420036ac8 0xc420036ad8] [0xc420036aa0 0xc420036ac8 0xc420036ad8] [0xc420036aa8 0xc420036ac0 0xc420036ad0] [0xbf7940 0xbf7a40 0xbf7a40] 0xc420c31f20 <nil>}:\nCommand stdout:\n\nstderr:\nError from server (InternalError): an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-fvl42/pods/failure-4/log?container=failure-4\\\": Post https://test-container.sandbox.googleapis.com/v1/masterProjects/365122390874/zones/us-central1-f/438315026343/bootstrap-e2e/authorize: read tcp 10.240.0.26:39414->74.125.69.81:443: read: connection reset by peer\") has prevented the request from succeeding (get pods failure-4)\n\nerror:\nexit status 1\n",
},
Code: 1,
}
error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.157.34 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-fvl42 run -i --image=gcr.io/google_containers/busybox:1.24 --restart=Never failure-4 --leave-stdin-open -- /bin/sh -c exit 42] [] 0xc4209923e0 Error from server (InternalError): an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-fvl42/pods/failure-4/log?container=failure-4\": Post https://test-container.sandbox.googleapis.com/v1/masterProjects/365122390874/zones/us-central1-f/438315026343/bootstrap-e2e/authorize: read tcp 10.240.0.26:39414->74.125.69.81:443: read: connection reset by peer") has prevented the request from succeeding (get pods failure-4)
[] <nil> 0xc42075b320 exit status 1 <nil> <nil> true [0xc420036aa0 0xc420036ac8 0xc420036ad8] [0xc420036aa0 0xc420036ac8 0xc420036ad8] [0xc420036aa8 0xc420036ac0 0xc420036ad0] [0xbf7940 0xbf7a40 0xbf7a40] 0xc420c31f20 <nil>}:
Command stdout:
stderr:
Error from server (InternalError): an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-fvl42/pods/failure-4/log?container=failure-4\": Post https://test-container.sandbox.googleapis.com/v1/masterProjects/365122390874/zones/us-central1-f/438315026343/bootstrap-e2e/authorize: read tcp 10.240.0.26:39414->74.125.69.81:443: read: connection reset by peer") has prevented the request from succeeding (get pods failure-4)
error:
exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:497
Issues about this test specifically: #31151 #35586
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663
Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:143
Jan 11 08:57:09.705: Couldn't delete ns: "e2e-tests-secrets-wj55b": an error on the server ("Internal Server Error: \"/apis/apps/v1beta1/namespaces/e2e-tests-secrets-wj55b/statefulsets\": Post https://test-container.sandbox.googleapis.com/v1/masterProjects/365122390874/zones/us-central1-f/438315026343/bootstrap-e2e/authorize: read tcp 10.240.0.26:39414->74.125.69.81:443: read: connection reset by peer") has prevented the request from succeeding (get statefulsets.apps) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1beta1/namespaces/e2e-tests-secrets-wj55b/statefulsets\\\": Post https://test-container.sandbox.googleapis.com/v1/masterProjects/365122390874/zones/us-central1-f/438315026343/bootstrap-e2e/authorize: read tcp 10.240.0.26:39414->74.125.69.81:443: read: connection reset by peer\") has prevented the request from succeeding (get statefulsets.apps)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc420e2eeb0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:354
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/3229/
Multiple broken tests:
Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1009
Expected error:
<*errors.errorString | 0xc42033f520>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3827
Issues about this test specifically: #37274
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
<*errors.errorString | 0xc42044bf30>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550
Issues about this test specifically: #33631 #33995 #34970
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
<*errors.errorString | 0xc420383510>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550
Issues about this test specifically: #32830
Failed: [k8s.io] EmptyDir wrapper volumes should not conflict [Volume] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:135
Expected error:
<*errors.errorString | 0xc420375af0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:68
Failed: [k8s.io] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:187
Waiting for pods in namespace "e2e-tests-disruption-1rf9n" to be ready
Expected error:
<*errors.errorString | 0xc42035bfc0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/disruption.go:257
Issues about this test specifically: #32644
Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
<*errors.errorString | 0xc420383dc0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550
Issues about this test specifically: #32436 #37267
Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 16 09:31:18.238: Couldn't delete ns: "e2e-tests-disruption-x8ggr": namespace e2e-tests-disruption-x8ggr was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0 (&errors.errorString{s:"namespace e2e-tests-disruption-x8ggr was not deleted with limit: timed out waiting for the condition, pods remaining: 1, pods missing deletion timestamp: 0"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:353
Issues about this test specifically: #32668 #35405
Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] [Volume] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:59
wait for pod "downwardapi-volume-aa39b731-dc10-11e6-8f67-0242ac110009" to disappear
Expected success, but got an error:
<*errors.errorString | 0xc420385030>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:122
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663 #39788 #39877
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/3330/
Multiple broken tests:
Failed: [k8s.io] Multi-AZ Clusters should spread the pods of a service across zones {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ubernetes_lite.go:54
Pods were not evenly spread across zones. 1 in one zone and 3 in another zone
Expected
<int>: 1
to be ~
<int>: 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ubernetes_lite.go:186
Issues about this test specifically: #34122
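The Multi-AZ spread failure above ("1 in one zone and 3 in another zone", `<int>: 1` expected to be `~` `<int>: 3`) comes from a check that the per-zone pod counts of a service are approximately equal (the actual test uses Gomega's `BeNumerically("~", ...)` matcher). An illustrative Python equivalent of that check — the tolerance value here is an assumption for the sketch, not the framework's constant:

```python
from collections import Counter

def zones_evenly_spread(pod_zones, tolerance=2):
    # pod_zones: the zone each pod landed in, e.g. ["us-central1-a", ...].
    # Spread counts per zone and require the max/min gap to stay under tolerance.
    counts = Counter(pod_zones)
    return max(counts.values()) - min(counts.values()) < tolerance

print(zones_evenly_spread(["a", "a", "a", "b"]))  # 3 vs 1 -> False, as in the failure above
print(zones_evenly_spread(["a", "a", "b", "b"]))  # 2 vs 2 -> True
```

The logged run is exactly the first case: four pods split 3/1 across two zones fails the approximate-equality check.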
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
<*errors.errorString | 0xc420369780>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550
Issues about this test specifically: #32830
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663 #39788 #39877
Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:363
Expected error:
<*errors.errorString | 0xc4203772a0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:247
Issues about this test specifically: #26194 #26338 #30345 #34571
Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
<*errors.errorString | 0xc42033ebe0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550
Issues about this test specifically: #32436 #37267
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
<*errors.errorString | 0xc420332e20>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550
Issues about this test specifically: #32375
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/3603/
Multiple broken tests:
Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1154
Jan 24 05:53:47.837: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout:
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1092
Issues about this test specifically: #26172
Failed: [k8s.io] Cluster level logging using GCL should check that logs from containers are ingested in GCL {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:72
Some log lines are still missing
Expected
<int>: 100
to equal
<int>: 0
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_logging_gcl.go:71
Issues about this test specifically: #34623 #34713 #36890 #37012 #37241
Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:372
Jan 24 05:56:02.736: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1644
Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774
Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:512
Expected error:
<*errors.errorString | 0xc420310d40>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:230
Issues about this test specifically: #32584
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663 #39788 #39877
Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:411
Expected error:
<*errors.errorString | 0xc42038fb70>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:230
Issues about this test specifically: #26168 #27450
Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:448
Expected error:
<*errors.errorString | 0xc4203e70c0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:230
Issues about this test specifically: #28337
Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
<*errors.errorString | 0xc420ccf170>: {
s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-24 05:47:27 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-24 05:47:58 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-24 05:47:27 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP:10.72.1.31 StartTime:2017-01-24 05:47:27 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2017-01-24 05:47:28 -0800 PST,FinishedAt:2017-01-24 05:47:58 -0800 PST,ContainerID:docker://7e29b437101c5318db64db9459b69a1d049ab3a124f1fd0863df01a114d9825b,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://7e29b437101c5318db64db9459b69a1d049ab3a124f1fd0863df01a114d9825b}] QOSClass:BestEffort}",
}
pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-24 05:47:27 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-24 05:47:58 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-24 05:47:27 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP:10.72.1.31 StartTime:2017-01-24 05:47:27 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2017-01-24 05:47:28 -0800 PST,FinishedAt:2017-01-24 05:47:58 -0800 PST,ContainerID:docker://7e29b437101c5318db64db9459b69a1d049ab3a124f1fd0863df01a114d9825b,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://7e29b437101c5318db64db9459b69a1d049ab3a124f1fd0863df01a114d9825b}] QOSClass:BestEffort}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48
Issues about this test specifically: #26171 #28188
Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:363
Expected error:
<*errors.errorString | 0xc42036d2d0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:230
Issues about this test specifically: #26194 #26338 #30345 #34571
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:693
Jan 24 05:48:14.656: Missing KubeDNS in kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:690
Issues about this test specifically: #28420 #36122
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Jan 24 06:02:25.151: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:303
Issues about this test specifically: #27443 #27835 #28900 #32512 #38549
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/3606/
Multiple broken tests:
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
<*url.Error | 0xc420f459b0>: {
Op: "Post",
URL: "https://104.154.205.217/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-x04dq/services/rc-light-ctrl/proxy/ConsumeCPU?durationSec=30&millicores=150&requestSizeMillicores=20",
Err: {},
}
Post https://104.154.205.217/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-x04dq/services/rc-light-ctrl/proxy/ConsumeCPU?durationSec=30&millicores=150&requestSizeMillicores=20: context deadline exceeded
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:220
Issues about this test specifically: #27443 #27835 #28900 #32512 #38549
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Jan 24 08:01:24.994: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.72.0.62:8080/dial?request=hostName&protocol=udp&host=10.72.5.40&port=8081&tries=1'
retrieved map[]
expected map[netserver-2:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:216
Issues about this test specifically: #32830
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Jan 24 08:09:47.284: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.72.1.80:8080/dial?request=hostName&protocol=http&host=10.72.5.45&port=8080&tries=1'
retrieved map[]
expected map[netserver-3:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:216
Issues about this test specifically: #32375
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663 #39788 #39877
Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:318
Expected error:
<*errors.errorString | 0xc420b680a0>: {
s: "service verification failed for: 10.75.253.255\nexpected [service1-r8z54 service1-rwlnt service1-tzvbt]\nreceived [wget: download timed out]",
}
service verification failed for: 10.75.253.255
expected [service1-r8z54 service1-rwlnt service1-tzvbt]
received [wget: download timed out]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:290
Issues about this test specifically: #26128 #26685 #33408 #36298
Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
<*errors.errorString | 0xc420c7a850>: {
s: "failed to wait for pods responding: timed out waiting for the condition",
}
failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:889
Issues about this test specifically: #29629 #36270 #37462
Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Expected error:
<*errors.errorString | 0xc420ce6290>: {
s: "failed to wait for pods responding: timed out waiting for the condition",
}
failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:408
Issues about this test specifically: #28339 #36379
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Jan 24 07:55:12.393: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.72.5.42:8080/hostName
retrieved map[]
expected map[netserver-2:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:269
Issues about this test specifically: #33631 #33995 #34970
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Jan 24 08:00:01.043: timeout waiting 15m0s for pods size to be 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:303
Issues about this test specifically: #27196 #28998 #32403 #33341
Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
<*errors.errorString | 0xc420d80c30>: {
s: "failed to wait for pods responding: timed out waiting for the condition",
}
failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1003
Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/3712/
Multiple broken tests:
Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:262
Failed to observe pod deletion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:251
Issues about this test specifically: #26224 #34354
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483
Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Jan 26 17:04:46.329: Couldn't delete ns: "e2e-tests-job-zg4bz": namespace e2e-tests-job-zg4bz was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-job-zg4bz was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:276
Issues about this test specifically: #29066 #30592 #31065 #33171
Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
<*errors.errorString | 0xc42036bbb0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154
Issues about this test specifically: #32023
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/3718/
Multiple broken tests:
Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:48
Expected error:
<*errors.errorString | 0xc420333040>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:141
Issues about this test specifically: #32087
Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:262
Failed to observe pod deletion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:251
Issues about this test specifically: #26224 #34354
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483
Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Jan 26 20:19:01.150: Couldn't delete ns: "e2e-tests-job-nhv61": namespace e2e-tests-job-nhv61 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed (&errors.errorString{s:"namespace e2e-tests-job-nhv61 was not deleted with limit: timed out waiting for the condition, namespace is empty but is not yet removed"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:276
Issues about this test specifically: #29511 #29987 #30238 #38364
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/3723/
Multiple broken tests:
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483
Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:262
Failed to observe pod deletion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:251
Issues about this test specifically: #26224 #34354
Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
<*errors.errorString | 0xc420409b00>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154
Issues about this test specifically: #32023
Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:48
Expected error:
<*errors.errorString | 0xc4203c2360>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:141
Issues about this test specifically: #32087
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/3729/
Multiple broken tests:
Failed: [k8s.io] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:192
Failed to observe pod deletion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:180
Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:262
Failed to observe pod deletion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:251
Issues about this test specifically: #26224 #34354
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483
Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:48
Expected error:
<*errors.errorString | 0xc42031c010>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:141
Issues about this test specifically: #32087
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/3750/
Multiple broken tests:
Failed: [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:71
Expected error:
<*errors.errorString | 0xc420d812c0>: {
s: "expected pod \"var-expansion-8752ecb1-e4e9-11e6-9564-0242ac110005\" success: gave up waiting for pod 'var-expansion-8752ecb1-e4e9-11e6-9564-0242ac110005' to be 'success or failure' after 5m0s",
}
expected pod "var-expansion-8752ecb1-e4e9-11e6-9564-0242ac110005" success: gave up waiting for pod 'var-expansion-8752ecb1-e4e9-11e6-9564-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2103
Issues about this test specifically: #29461
Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:103
Expected error:
<*errors.errorString | 0xc420c5e0d0>: {
s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:20, Replicas:11, UpdatedReplicas:11, ReadyReplicas:10, AvailableReplicas:10, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{sec:63621157031, nsec:0, loc:(*time.Location)(0x3bb1940)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63621157031, nsec:0, loc:(*time.Location)(0x3bb1940)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}, v1beta1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{sec:63621157042, nsec:0, loc:(*time.Location)(0x3bb1940)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63621156983, nsec:0, loc:(*time.Location)(0x3bb1940)}}, Reason:\"NewReplicaSetAvailable\", Message:\"ReplicaSet \\\"nginx-767029305\\\" has successfully progressed.\"}}}",
}
error waiting for deployment "nginx" status to match expectation: deployment status: v1beta1.DeploymentStatus{ObservedGeneration:20, Replicas:11, UpdatedReplicas:11, ReadyReplicas:10, AvailableReplicas:10, UnavailableReplicas:1, Conditions:[]v1beta1.DeploymentCondition{v1beta1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63621157031, nsec:0, loc:(*time.Location)(0x3bb1940)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63621157031, nsec:0, loc:(*time.Location)(0x3bb1940)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1beta1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{sec:63621157042, nsec:0, loc:(*time.Location)(0x3bb1940)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63621156983, nsec:0, loc:(*time.Location)(0x3bb1940)}}, Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"nginx-767029305\" has successfully progressed."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1401
Issues about this test specifically: #36265 #36353 #36628
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483
Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:63
Expected error:
<*errors.errorString | 0xc4208cf0e0>: {
s: "expected pod \"downward-api-87d30fcf-e4e9-11e6-b53f-0242ac110005\" success: gave up waiting for pod 'downward-api-87d30fcf-e4e9-11e6-b53f-0242ac110005' to be 'success or failure' after 5m0s",
}
expected pod "downward-api-87d30fcf-e4e9-11e6-b53f-0242ac110005" success: gave up waiting for pod 'downward-api-87d30fcf-e4e9-11e6-b53f-0242ac110005' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2103
Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
<*errors.errorString | 0xc420380f90>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154
Issues about this test specifically: #32023
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/3797/
Multiple broken tests:
Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Jan 28 20:20:12.305: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-e223e9d1-gcm9"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341
Issues about this test specifically: #38516
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:126
Jan 28 20:20:04.876: All nodes should be ready after test, Not ready nodes: ", gke-bootstrap-e2e-default-pool-e223e9d1-gcm9"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:341
Issues about this test specifically: #27507 #28275 #38583
Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
<*errors.errorString | 0xc4203b7290>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154
Issues about this test specifically: #32023
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/3916/
Multiple broken tests:
Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:100
Expected error:
<*errors.errorString | 0xc4208d8d50>: {
s: "deployment \"nginx\" never updated with the desired condition and reason: [{Available False 2017-01-31 08:41:18.117701263 -0800 PST 2017-01-31 08:41:18.11770151 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-01-31 08:41:18.545069262 -0800 PST 2017-01-31 08:41:18.108968657 -0800 PST ReplicaSetUpdated ReplicaSet \"nginx-1638191467\" is progressing.}]",
}
deployment "nginx" never updated with the desired condition and reason: [{Available False 2017-01-31 08:41:18.117701263 -0800 PST 2017-01-31 08:41:18.11770151 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-01-31 08:41:18.545069262 -0800 PST 2017-01-31 08:41:18.108968657 -0800 PST ReplicaSetUpdated ReplicaSet "nginx-1638191467" is progressing.}]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1259
Issues about this test specifically: #31697 #36574 #39785
Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
<*errors.errorString | 0xc4203b8200>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154
Issues about this test specifically: #32023
Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:48
Expected error:
<*errors.errorString | 0xc420381930>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:141
Issues about this test specifically: #32087
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/3951/
Multiple broken tests:
Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
<*errors.errorString | 0xc4203100e0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154
Issues about this test specifically: #32023
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668
Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:100
Expected error:
<*errors.errorString | 0xc4207890d0>: {
s: "deployment \"nginx\" never updated with the desired condition and reason: [{Available False 2017-02-01 03:17:20.631810984 -0800 PST 2017-02-01 03:17:20.631811461 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-02-01 03:17:21.51026651 -0800 PST 2017-02-01 03:17:20.618072613 -0800 PST ReplicaSetUpdated ReplicaSet \"nginx-1638191467\" is progressing.}]",
}
deployment "nginx" never updated with the desired condition and reason: [{Available False 2017-02-01 03:17:20.631810984 -0800 PST 2017-02-01 03:17:20.631811461 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-02-01 03:17:21.51026651 -0800 PST 2017-02-01 03:17:20.618072613 -0800 PST ReplicaSetUpdated ReplicaSet "nginx-1638191467" is progressing.}]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1259
Issues about this test specifically: #31697 #36574 #39785
Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:48
Expected error:
<*errors.errorString | 0xc420437090>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:141
Issues about this test specifically: #32087
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/3957/
Multiple broken tests:
Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:48
Expected error:
<*errors.errorString | 0xc4203e7440>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:141
Issues about this test specifically: #32087
Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:100
Expected error:
<*errors.errorString | 0xc420f42640>: {
s: "deployment \"nginx\" never updated with the desired condition and reason: [{Available False 2017-02-01 06:25:20.025907616 -0800 PST 2017-02-01 06:25:20.025908016 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-02-01 06:25:20.0912865 -0800 PST 2017-02-01 06:25:20.015502731 -0800 PST ReplicaSetUpdated ReplicaSet \"nginx-1638191467\" is progressing.}]",
}
deployment "nginx" never updated with the desired condition and reason: [{Available False 2017-02-01 06:25:20.025907616 -0800 PST 2017-02-01 06:25:20.025908016 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-02-01 06:25:20.0912865 -0800 PST 2017-02-01 06:25:20.015502731 -0800 PST ReplicaSetUpdated ReplicaSet "nginx-1638191467" is progressing.}]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1259
Issues about this test specifically: #31697 #36574 #39785
Failed: [k8s.io] Multi-AZ Clusters should spread the pods of a replication controller across zones {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ubernetes_lite.go:58
Pods were not evenly spread across zones. 1 in one zone and 3 in another zone
Expected
<int>: 1
to be ~
<int>: 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/ubernetes_lite.go:186
Issues about this test specifically: #34247
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/3990/
Multiple broken tests:
Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] [Volume] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:197
wait for pod "downwardapi-volume-3c6a6f3f-e91b-11e6-9462-0242ac11000a" to disappear
Expected success, but got an error:
<*errors.errorString | 0xc420322290>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:122
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:797
Feb 1 23:49:16.004: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:299
Issues about this test specifically: #28774 #31429
Failed: [k8s.io] Networking should check kube-proxy urls {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:84
Expected error:
<*errors.errorString | 0xc420340a20>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:550
Issues about this test specifically: #32436 #37267
Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:194
waiting for tester pod to start
Expected error:
<*errors.errorString | 0xc4203c3f50>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:115
Issues about this test specifically: #30287 #35953
Failed: [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:130
wait for pod "downward-api-3cef9b51-e91b-11e6-8df3-0242ac11000a" to disappear
Expected success, but got an error:
<*errors.errorString | 0xc4204599c0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:122
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:876
Feb 1 23:49:07.397: Verified 0 of 1 pods , error : timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:299
Issues about this test specifically: #26209 #29227 #32132 #37516
Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:122
starting pod liveness-exec in namespace e2e-tests-container-probe-6bqh3
Expected error:
<*errors.errorString | 0xc42036f080>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:365
Issues about this test specifically: #30264
Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:76
Expected error:
<*errors.errorString | 0xc4209d4040>: {
s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
}
failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:478
Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879
Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
<*errors.errorString | 0xc4203759e0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154
Issues about this test specifically: #32023
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/3994/
Multiple broken tests:
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668
Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:48
Expected error:
<*errors.errorString | 0xc420375fb0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:141
Issues about this test specifically: #32087
Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
<*errors.errorString | 0xc4203b8610>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154
Issues about this test specifically: #32023
Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:100
Expected error:
<*errors.errorString | 0xc420a0ae00>: {
s: "deployment \"nginx\" never updated with the desired condition and reason: [{Available False 2017-02-02 02:07:29.267709191 -0800 PST 2017-02-02 02:07:29.267709483 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-02-02 02:07:29.293285753 -0800 PST 2017-02-02 02:07:29.260890088 -0800 PST ReplicaSetUpdated ReplicaSet \"nginx-1638191467\" is progressing.}]",
}
deployment "nginx" never updated with the desired condition and reason: [{Available False 2017-02-02 02:07:29.267709191 -0800 PST 2017-02-02 02:07:29.267709483 -0800 PST MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2017-02-02 02:07:29.293285753 -0800 PST 2017-02-02 02:07:29.260890088 -0800 PST ReplicaSetUpdated ReplicaSet "nginx-1638191467" is progressing.}]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1259
Issues about this test specifically: #31697 #36574 #39785
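The failure above prints the deployment's condition tuples: the test polled for a condition the deployment never reported (it stayed at Progressing/ReplicaSetUpdated rather than flipping to a progress-deadline reason). A minimal sketch of that kind of condition check, using a hypothetical simplified `condition` type in place of the real DeploymentCondition:

```go
package main

import "fmt"

// condition mirrors the {Type Status ... Reason Message} tuples printed
// in the failure above; a simplified, hypothetical stand-in for the
// Kubernetes DeploymentCondition type.
type condition struct {
	Type, Status, Reason string
}

// hasCondition reports whether the status contains a condition with the
// given type and reason -- the shape of check such a test polls on.
func hasCondition(conds []condition, condType, reason string) bool {
	for _, c := range conds {
		if c.Type == condType && c.Reason == reason {
			return true
		}
	}
	return false
}

func main() {
	// The conditions the failing deployment actually reported, per the
	// error message above.
	observed := []condition{
		{"Available", "False", "MinimumReplicasUnavailable"},
		{"Progressing", "True", "ReplicaSetUpdated"},
	}
	// Polling for a different Progressing reason never matches, so the
	// poll times out with the "never updated with the desired condition"
	// error shown above.
	fmt.Println(hasCondition(observed, "Progressing", "ProgressDeadlineExceeded"))
}
```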
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/4025/ Multiple broken tests:
Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
<*errors.errorString | 0xc4203b9c60>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:154
Issues about this test specifically: #32023
Failed: Test {e2e.go}
exit status 1
Issues about this test specifically: #33361 #38663 #39788 #39877 #40371 #40469 #40478 #40483 #40668
Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/statefulset.go:274
Test Panicked
/usr/local/go/src/runtime/panic.go:458
Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:48
Expected error:
<*errors.errorString | 0xc42034df40>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rc.go:141
Issues about this test specifically: #32087
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-multizone/978/ Run so broken it didn't make JUnit output!