Closed: k8s-github-robot closed this issue 7 years ago
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/2/
Multiple broken tests:
Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:97
Timed out after 45.000s.
Expected
<string>: content of file "/etc/labels": key1="value1"
key2="value2"
[... 22 identical repeats of the two lines above elided from the poll output ...]
to contain substring
<string>: key3="value3"
Issues about this test specifically: #28416 #31055
Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
<*errors.errorString | 0xc821cd84a0>: {
s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-09-22 17:30:43 -0700 PDT} FinishedAt:{Time:2016-09-22 17:30:53 -0700 PDT} ContainerID:docker://5b4560f07a3e5f5a077876175cf3d75fbb43c0aaac179a23ddbf9b9fc87773a3}",
}
pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-09-22 17:30:43 -0700 PDT} FinishedAt:{Time:2016-09-22 17:30:53 -0700 PDT} ContainerID:docker://5b4560f07a3e5f5a077876175cf3d75fbb43c0aaac179a23ddbf9b9fc87773a3}
not to have occurred
Issues about this test specifically: #30131 #31402
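The error string above embeds a dump of the pod's terminated container state; the test treats any nonzero exit code from the wget pod as a cross-node connectivity failure. A small sketch of that decision (the struct fields mirror the ones printed in the log; everything else here is hypothetical, not the e2e helper):

```go
package main

import "fmt"

// terminated mirrors the fields dumped in the error message above
// (a subset of the Kubernetes ContainerStateTerminated type).
type terminated struct {
	ExitCode int32
	Signal   int32
	Reason   string
}

// failed reports whether a finished test pod counts as a failure,
// the way the cross-node wget check treats a nonzero exit code.
func failed(t terminated) bool {
	return t.ExitCode != 0
}

func main() {
	t := terminated{ExitCode: 1, Signal: 0, Reason: "Error"}
	fmt.Printf("pod 'different-node-wget' failed=%v (exit %d, reason %s)\n",
		failed(t), t.ExitCode, t.Reason)
}
```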
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 22 18:02:20.552: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-24ef4159-bsj4:
container "runtime": expected RSS memory (MB) < 157286400; got 250593280
node gke-jenkins-e2e-default-pool-24ef4159-fx6v:
container "runtime": expected RSS memory (MB) < 157286400; got 249638912
node gke-jenkins-e2e-default-pool-24ef4159-zkh8:
container "runtime": expected RSS memory (MB) < 157286400; got 239046656
Issues about this test specifically: #28220 #32942
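Note the "(MB)" label in these lines actually carries byte values: 157286400 is exactly 150 * 1024 * 1024. A tiny conversion sketch to make the overshoot readable (the node value is copied from the ...-bsj4 line above; nothing here is e2e code):

```go
package main

import "fmt"

func main() {
	// The log says "expected RSS memory (MB)" but the numbers are bytes:
	// 157286400 B = 150 MiB. Converting the 35-pods-per-node case shows
	// the runtime container at roughly 1.6x its budget.
	const mib = 1024 * 1024
	limit := int64(157286400) // per-node "runtime" container budget from the log
	got := int64(250593280)   // node ...-bsj4 measured RSS from the log
	fmt.Printf("limit %d MiB, got %d MiB\n", limit/mib, got/mib)
}
```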
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 22 18:57:25.488: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-24ef4159-fx6v:
container "runtime": expected RSS memory (MB) < 89128960; got 107053056
node gke-jenkins-e2e-default-pool-24ef4159-zkh8:
container "runtime": expected RSS memory (MB) < 89128960; got 89636864
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 22 15:29:03.600: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-24ef4159-zkh8:
container "runtime": expected RSS memory (MB) < 314572800; got 531472384
node gke-jenkins-e2e-default-pool-24ef4159-bsj4:
container "runtime": expected RSS memory (MB) < 314572800; got 525840384
node gke-jenkins-e2e-default-pool-24ef4159-fx6v:
container "runtime": expected RSS memory (MB) < 314572800; got 520368128
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
<*errors.errorString | 0xc82391a090>: {
s: "service verification failed for: 10.183.245.49\nexpected [service3-aip1m service3-g848x service3-lb6rw]\nreceived [service3-aip1m service3-g848x]",
}
service verification failed for: 10.183.245.49
expected [service3-aip1m service3-g848x service3-lb6rw]
received [service3-aip1m service3-g848x]
not to have occurred
Issues about this test specifically: #26128 #26685
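The service verification failure above is a set comparison: the pods expected behind the ClusterIP versus the pods that actually answered. A sketch of that difference report, using hypothetical helper names and the pod names from this log (not the actual e2e helper in service.go):

```go
package main

import (
	"fmt"
	"sort"
)

// missing returns the expected endpoints that never responded, which is
// the gap the "up and down services" verification reports.
func missing(expected, received []string) []string {
	seen := map[string]bool{}
	for _, r := range received {
		seen[r] = true
	}
	var out []string
	for _, e := range expected {
		if !seen[e] {
			out = append(out, e)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	expected := []string{"service3-aip1m", "service3-g848x", "service3-lb6rw"}
	received := []string{"service3-aip1m", "service3-g848x"}
	// service3-lb6rw is the endpoint that never answered in the run above.
	fmt.Println(missing(expected, received))
}
```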
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/4/
Multiple broken tests:
Failed: Test {e2e.go}
error running Ginkgo tests: exit status 1
Issues about this test specifically: #33361
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 23 04:00:04.743: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-72c426c7-xq46:
container "runtime": expected RSS memory (MB) < 314572800; got 515686400
node gke-jenkins-e2e-default-pool-72c426c7-30og:
container "runtime": expected RSS memory (MB) < 314572800; got 527376384
node gke-jenkins-e2e-default-pool-72c426c7-e9tv:
container "runtime": expected RSS memory (MB) < 314572800; got 527110144
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 23 04:23:27.481: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-72c426c7-e9tv:
container "runtime": expected RSS memory (MB) < 157286400; got 244809728
node gke-jenkins-e2e-default-pool-72c426c7-xq46:
container "runtime": expected RSS memory (MB) < 157286400; got 232890368
node gke-jenkins-e2e-default-pool-72c426c7-30og:
container "runtime": expected RSS memory (MB) < 157286400; got 245755904
Issues about this test specifically: #28220 #32942
Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
<*errors.errorString | 0xc8218f7c60>: {
s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-09-23 06:16:10 -0700 PDT} FinishedAt:{Time:2016-09-23 06:16:20 -0700 PDT} ContainerID:docker://75f48a2cd47e8e5f6dd840f92a3927599bc181427749820e6ecff47656490961}",
}
pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-09-23 06:16:10 -0700 PDT} FinishedAt:{Time:2016-09-23 06:16:20 -0700 PDT} ContainerID:docker://75f48a2cd47e8e5f6dd840f92a3927599bc181427749820e6ecff47656490961}
not to have occurred
Issues about this test specifically: #30131 #31402
Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:134
Timed out after 45.001s.
Expected
<string>: content of file "/etc/annotations": builder="bar"
kubernetes.io/config.seen="2016-09-23T14:09:07.838449792Z"
kubernetes.io/config.source="api"
[... 22 identical repeats of the three lines above elided from the poll output ...]
to contain substring
<string>: builder="foo"
Issues about this test specifically: #28462
Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
<*errors.errorString | 0xc8219594c0>: {
s: "service verification failed for: 10.183.247.2\nexpected [service3-f30t9 service3-ng4n5 service3-x9iuk]\nreceived [service3-f30t9 service3-x9iuk]",
}
service verification failed for: 10.183.247.2
expected [service3-f30t9 service3-ng4n5 service3-x9iuk]
received [service3-f30t9 service3-x9iuk]
not to have occurred
Issues about this test specifically: #26128 #26685
Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:97
Timed out after 45.000s.
Expected
<string>: content of file "/etc/labels": key1="value1"
key2="value2"
[... 22 identical repeats of the two lines above elided from the poll output ...]
to contain substring
<string>: key3="value3"
Issues about this test specifically: #28416 #31055
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/5/
Multiple broken tests:
Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
<*errors.errorString | 0xc8200e20c0>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
Issues about this test specifically: #26490
Failed: Test {e2e.go}
error running Ginkgo tests: exit status 1
Issues about this test specifically: #33361
Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:97
Timed out after 45.001s.
Expected
<string>: content of file "/etc/labels": key1="value1"
key2="value2"
[... 22 identical repeats of the two lines above elided from the poll output ...]
to contain substring
<string>: key3="value3"
Issues about this test specifically: #28416 #31055
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 23 08:40:45.648: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-f6570b3f-a40e:
container "runtime": expected RSS memory (MB) < 314572800; got 524308480
node gke-jenkins-e2e-default-pool-f6570b3f-p7e9:
container "runtime": expected RSS memory (MB) < 314572800; got 523563008
node gke-jenkins-e2e-default-pool-f6570b3f-y5d6:
container "runtime": expected RSS memory (MB) < 314572800; got 514650112
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:134
Timed out after 45.001s.
Expected
<string>: content of file "/etc/annotations": builder="bar"
kubernetes.io/config.seen="2016-09-23T15:57:57.719548881Z"
kubernetes.io/config.source="api"
[... 22 identical repeats of the three lines above elided from the poll output ...]
to contain substring
<string>: builder="foo"
Issues about this test specifically: #28462
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 23 10:10:31.173: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-f6570b3f-a40e:
container "runtime": expected RSS memory (MB) < 157286400; got 242982912
node gke-jenkins-e2e-default-pool-f6570b3f-p7e9:
container "runtime": expected RSS memory (MB) < 157286400; got 242913280
node gke-jenkins-e2e-default-pool-f6570b3f-y5d6:
container "runtime": expected RSS memory (MB) < 157286400; got 236126208
Issues about this test specifically: #28220 #32942
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/6/
Multiple broken tests:
Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
<*errors.errorString | 0xc82007df80>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
Issues about this test specifically: #26490
Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:134
Timed out after 45.001s.
Expected
<string>: content of file "/etc/annotations": builder="bar"
kubernetes.io/config.seen="2016-09-23T23:53:09.418330182Z"
kubernetes.io/config.source="api"
[... 22 identical repeats of the three lines above elided from the poll output ...]
to contain substring
<string>: builder="foo"
Issues about this test specifically: #28462
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 23 17:59:07.708: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-d6cfe3d2-aaud:
container "runtime": expected RSS memory (MB) < 157286400; got 247504896
node gke-jenkins-e2e-default-pool-d6cfe3d2-j7sg:
container "runtime": expected RSS memory (MB) < 157286400; got 239083520
node gke-jenkins-e2e-default-pool-d6cfe3d2-6zyv:
container "runtime": expected RSS memory (MB) < 157286400; got 250183680
Issues about this test specifically: #28220 #32942
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 23 19:01:28.366: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-d6cfe3d2-aaud:
container "runtime": expected RSS memory (MB) < 89128960; got 101822464
node gke-jenkins-e2e-default-pool-d6cfe3d2-j7sg:
container "runtime": expected RSS memory (MB) < 89128960; got 91484160
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: Test {e2e.go}
error running Ginkgo tests: exit status 1
Issues about this test specifically: #33361
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 23 16:10:52.302: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-d6cfe3d2-6zyv:
container "runtime": expected RSS memory (MB) < 314572800; got 516489216
node gke-jenkins-e2e-default-pool-d6cfe3d2-aaud:
container "runtime": expected RSS memory (MB) < 314572800; got 543268864
node gke-jenkins-e2e-default-pool-d6cfe3d2-j7sg:
container "runtime": expected RSS memory (MB) < 314572800; got 533966848
Issues about this test specifically: #26982 #32214
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/7/
Multiple broken tests:
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 24 01:04:58.275: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-97993c0b-vmf6:
container "runtime": expected RSS memory (MB) < 157286400; got 245202944
node gke-jenkins-e2e-default-pool-97993c0b-w8tn:
container "runtime": expected RSS memory (MB) < 157286400; got 253321216
node gke-jenkins-e2e-default-pool-97993c0b-jxhj:
container "runtime": expected RSS memory (MB) < 157286400; got 233312256
Issues about this test specifically: #28220 #32942
Failed: Test {e2e.go}
error running Ginkgo tests: exit status 1
Issues about this test specifically: #33361
Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:97
Timed out after 45.000s.
Expected
<string>: content of file "/etc/labels": key1="value1"
key2="value2"
[... 22 identical repeats of the two lines above elided from the poll output ...]
to contain substring
<string>: key3="value3"
Issues about this test specifically: #28416 #31055
Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:279
Expected error:
<*errors.errorString | 0xc821c73c00>: {
s: "service verification failed for: 10.183.247.106\nexpected [service3-00kot service3-7czd2 service3-ogfsd]\nreceived [service3-7czd2 service3-ogfsd]",
}
service verification failed for: 10.183.247.106
expected [service3-00kot service3-7czd2 service3-ogfsd]
received [service3-7czd2 service3-ogfsd]
not to have occurred
Issues about this test specifically: #26128 #26685 #33408
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 24 00:40:42.367: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-97993c0b-jxhj:
container "runtime": expected RSS memory (MB) < 314572800; got 514723840
node gke-jenkins-e2e-default-pool-97993c0b-vmf6:
container "runtime": expected RSS memory (MB) < 314572800; got 537960448
node gke-jenkins-e2e-default-pool-97993c0b-w8tn:
container "runtime": expected RSS memory (MB) < 314572800; got 535568384
Issues about this test specifically: #26982 #32214
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/8/
Multiple broken tests:
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 24 02:26:05.551: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-578746dc-n1c4:
container "runtime": expected RSS memory (MB) < 314572800; got 525373440
node gke-jenkins-e2e-default-pool-578746dc-wjrh:
container "runtime": expected RSS memory (MB) < 314572800; got 524828672
node gke-jenkins-e2e-default-pool-578746dc-3dxp:
container "runtime": expected RSS memory (MB) < 314572800; got 523399168
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
<*errors.errorString | 0xc8200c3060>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
Issues about this test specifically: #26490
Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:97
Timed out after 45.000s.
Expected
<string>: content of file "/etc/labels": key1="value1"
key2="value2"
[… the same two lines repeated 22 more times, once per poll attempt …]
to contain substring
<string>: key3="value3"
Issues about this test specifically: #28416 #31055
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 24 06:46:21.510: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-578746dc-to9a:
container "runtime": expected RSS memory (MB) < 157286400; got 229335040
node gke-jenkins-e2e-default-pool-578746dc-wjrh:
container "runtime": expected RSS memory (MB) < 157286400; got 249688064
node gke-jenkins-e2e-default-pool-578746dc-857o:
container "runtime": expected RSS memory (MB) < 157286400; got 221421568
Issues about this test specifically: #28220 #32942
Failed: Test {e2e.go}
error running Ginkgo tests: exit status 1
Issues about this test specifically: #33361
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/9/
Multiple broken tests:
Failed: Test {e2e.go}
error running Ginkgo tests: exit status 1
Issues about this test specifically: #33361
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
<*errors.errorString | 0xc82175a3d0>: {
s: "service verification failed for: 10.183.253.53\nexpected [service1-7sz4m service1-lcv6j service1-o8rvi]\nreceived [service1-7sz4m service1-o8rvi]",
}
service verification failed for: 10.183.253.53
expected [service1-7sz4m service1-lcv6j service1-o8rvi]
received [service1-7sz4m service1-o8rvi]
not to have occurred
Issues about this test specifically: #28257 #29159 #29449 #32447
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 24 12:11:02.187: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-7c1beda5-epzv:
container "runtime": expected RSS memory (MB) < 89128960; got 96817152
node gke-jenkins-e2e-default-pool-7c1beda5-pdu7:
container "runtime": expected RSS memory (MB) < 89128960; got 91652096
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 24 12:43:02.508: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-7c1beda5-1r7n:
container "runtime": expected RSS memory (MB) < 314572800; got 534065152
node gke-jenkins-e2e-default-pool-7c1beda5-epzv:
container "runtime": expected RSS memory (MB) < 314572800; got 531849216
node gke-jenkins-e2e-default-pool-7c1beda5-pdu7:
container "runtime": expected RSS memory (MB) < 314572800; got 535797760
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 24 13:44:36.646: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-7c1beda5-epzv:
container "runtime": expected RSS memory (MB) < 157286400; got 245071872
node gke-jenkins-e2e-default-pool-7c1beda5-pdu7:
container "runtime": expected RSS memory (MB) < 157286400; got 247697408
node gke-jenkins-e2e-default-pool-7c1beda5-1r7n:
container "runtime": expected RSS memory (MB) < 157286400; got 244490240
Issues about this test specifically: #28220 #32942
[FLAKE-PING] @rmmh
This flaky-test issue would love to have more attention.
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/10/
Multiple broken tests:
Failed: Test {e2e.go}
error running Ginkgo tests: exit status 1
Issues about this test specifically: #33361
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 25 13:42:28.973: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-a02ab5ef-0wo8:
container "runtime": expected RSS memory (MB) < 157286400; got 224370688
node gke-jenkins-e2e-default-pool-a02ab5ef-cpzl:
container "runtime": expected RSS memory (MB) < 157286400; got 223756288
node gke-jenkins-e2e-default-pool-a02ab5ef-yty5:
container "runtime": expected RSS memory (MB) < 157286400; got 228679680
Issues about this test specifically: #28220 #32942
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 25 14:58:12.629: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-a02ab5ef-cpzl:
container "runtime": expected RSS memory (MB) < 314572800; got 527503360
node gke-jenkins-e2e-default-pool-a02ab5ef-yty5:
container "runtime": expected RSS memory (MB) < 314572800; got 528470016
node gke-jenkins-e2e-default-pool-a02ab5ef-0wo8:
container "runtime": expected RSS memory (MB) < 314572800; got 527609856
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:97
Timed out after 45.000s.
Expected
<string>: content of file "/etc/labels": key1="value1"
key2="value2"
[… the same two lines repeated 22 more times, once per poll attempt …]
to contain substring
<string>: key3="value3"
Issues about this test specifically: #28416 #31055
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/11/
Multiple broken tests:
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 25 23:39:03.230: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-2af353df-vsid:
container "runtime": expected RSS memory (MB) < 89128960; got 96886784
node gke-jenkins-e2e-default-pool-2af353df-ys21:
container "runtime": expected RSS memory (MB) < 89128960; got 98189312
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:134
Timed out after 45.001s.
Expected
<string>: content of file "/etc/annotations": builder="bar"
kubernetes.io/config.seen="2016-09-26T07:22:36.148579725Z"
kubernetes.io/config.source="api"
[… the same three lines repeated 22 more times, once per poll attempt …]
to contain substring
<string>: builder="foo"
Issues about this test specifically: #28462
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 25 19:06:11.577: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-2af353df-ys21:
container "runtime": expected RSS memory (MB) < 157286400; got 232620032
node gke-jenkins-e2e-default-pool-2af353df-hatz:
container "runtime": expected RSS memory (MB) < 157286400; got 223203328
node gke-jenkins-e2e-default-pool-2af353df-vsid:
container "runtime": expected RSS memory (MB) < 157286400; got 218943488
Issues about this test specifically: #28220 #32942
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 25 20:28:56.426: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-2af353df-ys21:
container "runtime": expected RSS memory (MB) < 314572800; got 541769728
node gke-jenkins-e2e-default-pool-2af353df-hatz:
container "runtime": expected RSS memory (MB) < 314572800; got 530894848
node gke-jenkins-e2e-default-pool-2af353df-vsid:
container "runtime": expected RSS memory (MB) < 314572800; got 529805312
Issues about this test specifically: #26982 #32214
Failed: Test {e2e.go}
error running Ginkgo tests: exit status 1
Issues about this test specifically: #33361
Failed: [k8s.io] Downward API volume should update labels on modification [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:97
Timed out after 45.001s.
Expected
<string>: content of file "/etc/labels": key1="value1"
key2="value2"
[… the same two lines repeated 22 more times, once per poll attempt …]
to contain substring
<string>: key3="value3"
Issues about this test specifically: #28416 #31055
Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication between nodes {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:267
Expected error:
<*errors.errorString | 0xc823c8ec50>: {
s: "pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-09-25 23:18:45 -0700 PDT} FinishedAt:{Time:2016-09-25 23:18:55 -0700 PDT} ContainerID:docker://751544a4c3d798025fff5d786fd1154fb6dbe1fbc85a3471fcea3336bcc23c3b}",
}
pod 'different-node-wget' terminated with failure: &{ExitCode:1 Signal:0 Reason:Error Message: StartedAt:{Time:2016-09-25 23:18:45 -0700 PDT} FinishedAt:{Time:2016-09-25 23:18:55 -0700 PDT} ContainerID:docker://751544a4c3d798025fff5d786fd1154fb6dbe1fbc85a3471fcea3336bcc23c3b}
not to have occurred
Issues about this test specifically: #30131 #31402
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/13/
Multiple broken tests:
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 26 10:58:45.838: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-3eb5ec6d-kmt6:
container "runtime": expected RSS memory (MB) < 314572800; got 536264704
node gke-jenkins-e2e-default-pool-3eb5ec6d-q9un:
container "runtime": expected RSS memory (MB) < 314572800; got 532979712
node gke-jenkins-e2e-default-pool-3eb5ec6d-qiiz:
container "runtime": expected RSS memory (MB) < 314572800; got 519655424
Issues about this test specifically: #26982 #32214
Failed: Test {e2e.go}
error running Ginkgo tests: exit status 1
Issues about this test specifically: #33361
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 26 11:23:04.765: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-3eb5ec6d-q9un:
container "runtime": expected RSS memory (MB) < 157286400; got 244264960
node gke-jenkins-e2e-default-pool-3eb5ec6d-qiiz:
container "runtime": expected RSS memory (MB) < 157286400; got 233820160
node gke-jenkins-e2e-default-pool-3eb5ec6d-kmt6:
container "runtime": expected RSS memory (MB) < 157286400; got 251334656
Issues about this test specifically: #28220 #32942
Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:331
Expected error:
<*errors.errorString | 0xc82131eac0>: {
s: "service verification failed for: 10.183.253.61\nexpected [service1-o1kd3 service1-q1jqx service1-y8smt]\nreceived [service1-o1kd3 service1-q1jqx]",
}
service verification failed for: 10.183.253.61
expected [service1-o1kd3 service1-q1jqx service1-y8smt]
received [service1-o1kd3 service1-q1jqx]
not to have occurred
Issues about this test specifically: #29514
Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:372
Expected error:
<*errors.errorString | 0xc82250c040>: {
s: "service verification failed for: 10.183.255.91\nexpected [service1-in9q8 service1-v3z2h service1-vipqi]\nreceived [service1-v3z2h service1-vipqi]",
}
service verification failed for: 10.183.255.91
expected [service1-in9q8 service1-v3z2h service1-vipqi]
received [service1-v3z2h service1-vipqi]
not to have occurred
Issues about this test specifically: #28257 #29159 #29449 #32447
Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
<*errors.errorString | 0xc82007df80>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
Issues about this test specifically: #26490
Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:134
Timed out after 45.000s.
Expected
<string>: content of file "/etc/annotations": builder="bar"
kubernetes.io/config.seen="2016-09-26T14:29:03.178031132Z"
kubernetes.io/config.source="api"
[… the same three lines repeated 22 more times, once per poll attempt …]
to contain substring
<string>: builder="foo"
Issues about this test specifically: #28462
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/14/
Multiple broken tests:
Failed: Test {e2e.go}
error running Ginkgo tests: exit status 1
Issues about this test specifically: #33361
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Expected error:
<*errors.errorString | 0xc82117e970>: {
s: "failed to wait for pods responding: timed out waiting for the condition",
}
failed to wait for pods responding: timed out waiting for the condition
not to have occurred
Issues about this test specifically: #27324
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 26 14:15:34.562: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-61433bdc-1v38:
container "runtime": expected RSS memory (MB) < 314572800; got 522805248
node gke-jenkins-e2e-default-pool-61433bdc-eidx:
container "runtime": expected RSS memory (MB) < 314572800; got 535351296
node gke-jenkins-e2e-default-pool-61433bdc-fbx5:
container "runtime": expected RSS memory (MB) < 314572800; got 523202560
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Expected error:
<*errors.errorString | 0xc8200af060>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
Issues about this test specifically: #26490
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 35 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 26 17:35:47.184: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-61433bdc-eidx:
container "runtime": expected RSS memory (MB) < 157286400; got 244535296
node gke-jenkins-e2e-default-pool-61433bdc-fbx5:
container "runtime": expected RSS memory (MB) < 157286400; got 248078336
node gke-jenkins-e2e-default-pool-61433bdc-e66m:
container "runtime": expected RSS memory (MB) < 157286400; got 219238400
Issues about this test specifically: #28220 #32942
Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/downwardapi_volume.go:134
Timed out after 45.001s.
Expected
<string>: content of file "/etc/annotations": builder="bar"
kubernetes.io/config.seen="2016-09-27T00:38:01.938739112Z"
kubernetes.io/config.source="api"
[… the same three lines repeated 22 more times, once per poll attempt …]
to contain substring
<string>: builder="foo"
Issues about this test specifically: #28462
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 0 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:277
Sep 26 18:00:53.522: Memory usage exceeding limits:
node gke-jenkins-e2e-default-pool-61433bdc-eidx:
container "runtime": expected RSS memory (MB) < 89128960; got 100057088
node gke-jenkins-e2e-default-pool-61433bdc-fbx5:
container "runtime": expected RSS memory (MB) < 89128960; got 91066368
Issues about this test specifically: #26784 #28384 #31935 #33023
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/17/
Run so broken it didn't make JUnit output!
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/25/
Run so broken it didn't make JUnit output!
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/22/
Run so broken it didn't make JUnit output!
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/28/
Multiple broken tests:
Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
<*errors.errorString | 0xc821b9acb0>: {
s: "Namespace e2e-tests-configmap-333sd is active",
}
Namespace e2e-tests-configmap-333sd is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Issues about this test specifically: #27115 #28070 #30747 #31341
Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc82146cc80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-sched-pred-x3kxf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sched-pred-x3kxf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-sched-pred-x3kxf/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #28071
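Most failures in this run share the same `*errors.StatusError` shape: an apiserver 500 wrapped in a Status object, with the useful signal in `Reason`, `Code`, and `Details.Causes`. A stdlib-only sketch of decoding one of these bodies, using a trimmed, hypothetical stand-in struct (only the fields the dumps above actually show, not the real API type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// status is a trimmed, hypothetical stand-in for the Status object that
// *errors.StatusError wraps in the dumps above.
type status struct {
	Status  string `json:"status"`
	Message string `json:"message"`
	Reason  string `json:"reason"`
	Code    int    `json:"code"`
	Details struct {
		Kind   string `json:"kind"`
		Causes []struct {
			Type    string `json:"type"`
			Message string `json:"message"`
		} `json:"causes"`
	} `json:"details"`
}

// parseStatus decodes one apiserver error body into the trimmed struct.
func parseStatus(raw string) (status, error) {
	var s status
	err := json.Unmarshal([]byte(raw), &s)
	return s, err
}

func main() {
	// Reconstructed from the serviceaccounts failure above (wire form assumed).
	raw := `{"status":"Failure","reason":"InternalError","code":500,
	 "details":{"kind":"serviceAccounts","causes":[{"type":"UnexpectedServerResponse"}]}}`
	s, err := parseStatus(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(s.Reason, s.Code, s.Details.Kind)
}
```

Seeing `InternalError` / `500` with `UnexpectedServerResponse` causes across many unrelated tests, as here, usually points at the apiserver rather than at the individual tests.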
Failed: [k8s.io] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc821a6fe00>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-downward-api-nvkql/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-nvkql/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-downward-api-nvkql/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:882
Expected error:
<*errors.StatusError | 0xc820e39880>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-services-6wh0o/services/nodeport-collision-1\\\"\") has prevented the request from succeeding (delete services nodeport-collision-1)",
Reason: "InternalError",
Details: {
Name: "nodeport-collision-1",
Group: "",
Kind: "services",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-6wh0o/services/nodeport-collision-1\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-6wh0o/services/nodeport-collision-1\"") has prevented the request from succeeding (delete services nodeport-collision-1)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:877
Issues about this test specifically: #31575 #32756
Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:00:10.262: Couldn't delete ns: "e2e-tests-downward-api-8vyul": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-downward-api-8vyul/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-downward-api-8vyul/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc821afc960), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc82170fd00>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-2r3c9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-2r3c9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-2r3c9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #27195
Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc8228fbe00>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-containers-b48m4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-containers-b48m4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-containers-b48m4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc82272c500>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-proxy-3uiuk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-proxy-3uiuk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-proxy-3uiuk/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #32089
Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:105
Expected error:
<*errors.errorString | 0xc820a627a0>: {
s: "failed to get logs from pod-7dc54b94-8751-11e6-a3d5-0242ac11000b for test-container: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-emptydir-u2n75/pods/pod-7dc54b94-8751-11e6-a3d5-0242ac11000b/log?container=test-container&previous=false\\\"\") has prevented the request from succeeding (get pods pod-7dc54b94-8751-11e6-a3d5-0242ac11000b)",
}
failed to get logs from pod-7dc54b94-8751-11e6-a3d5-0242ac11000b for test-container: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-u2n75/pods/pod-7dc54b94-8751-11e6-a3d5-0242ac11000b/log?container=test-container&previous=false\"") has prevented the request from succeeding (get pods pod-7dc54b94-8751-11e6-a3d5-0242ac11000b)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283
Issues about this test specifically: #26780
Failed: [k8s.io] Proxy version v1 should proxy logs on node with explicit kubelet port [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:06:07.415: Couldn't delete ns: "e2e-tests-proxy-tcs90": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-proxy-tcs90/configmaps\"") has prevented the request from succeeding (get configmaps) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-proxy-tcs90/configmaps\\\"\") has prevented the request from succeeding (get configmaps)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820bb0c30), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Issues about this test specifically: #32936
Failed: [k8s.io] V1Job should run a job to completion when tasks succeed {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 13:58:30.952: Couldn't delete ns: "e2e-tests-v1job-hr81j": unable to retrieve the complete list of server APIs: storage.k8s.io/v1beta1: an error on the server ("Internal Server Error: \"/apis/storage.k8s.io/v1beta1\"") has prevented the request from succeeding (&discovery.ErrGroupDiscoveryFailed{Groups:map[unversioned.GroupVersion]error{unversioned.GroupVersion{Group:"storage.k8s.io", Version:"v1beta1"}:(*errors.StatusError)(0xc821be0e80)}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Failed: [k8s.io] Generated release_1_3 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:06:54.384: Couldn't delete ns: "e2e-tests-clientset-wpzl9": an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-clientset-wpzl9/jobs\"") has prevented the request from succeeding (get jobs.batch) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-clientset-wpzl9/jobs\\\"\") has prevented the request from succeeding (get jobs.batch)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8218862d0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Issues about this test specifically: #28415
Failed: [k8s.io] Etcd failure [Disruptive] should recover from network partition with master {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc821996580>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-etcd-failure-32fkb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-etcd-failure-32fkb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-etcd-failure-32fkb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #29512
Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc821056680>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-v1job-pvuks/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-v1job-pvuks/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-v1job-pvuks/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:286
Expected error:
<exec.CodeExitError>: {
Err: {
s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://146.148.50.169 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-ma5s7] [] 0xc8216905c0 Error from server: an error on the server (\"Internal Server Error: \\\"/apis/batch/v2alpha1\\\"\") has prevented the request from succeeding\n [] <nil> 0xc821690ca0 exit status 1 <nil> true [0xc8218dc4a0 0xc8218dc4d0 0xc8218dc4e8] [0xc8218dc4a0 0xc8218dc4d0 0xc8218dc4e8] [0xc8218dc4a8 0xc8218dc4c0 0xc8218dc4e0] [0xaf0b90 0xaf0cf0 0xaf0cf0] 0xc82143d680}:\nCommand stdout:\n\nstderr:\nError from server: an error on the server (\"Internal Server Error: \\\"/apis/batch/v2alpha1\\\"\") has prevented the request from succeeding\n\nerror:\nexit status 1\n",
},
Code: 1,
}
error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://146.148.50.169 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-ma5s7] [] 0xc8216905c0 Error from server: an error on the server ("Internal Server Error: \"/apis/batch/v2alpha1\"") has prevented the request from succeeding
[] <nil> 0xc821690ca0 exit status 1 <nil> true [0xc8218dc4a0 0xc8218dc4d0 0xc8218dc4e8] [0xc8218dc4a0 0xc8218dc4d0 0xc8218dc4e8] [0xc8218dc4a8 0xc8218dc4c0 0xc8218dc4e0] [0xaf0b90 0xaf0cf0 0xaf0cf0] 0xc82143d680}:
Command stdout:
stderr:
Error from server: an error on the server ("Internal Server Error: \"/apis/batch/v2alpha1\"") has prevented the request from succeeding
error:
exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183
Issues about this test specifically: #28426 #32168 #33756
Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:06:35.607: Couldn't delete ns: "e2e-tests-emptydir-530cl": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-emptydir-530cl/deployments\"") has prevented the request from succeeding (get deployments.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-emptydir-530cl/deployments\\\"\") has prevented the request from succeeding (get deployments.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82140e000), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Issues about this test specifically: #29224 #32008
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
<*errors.errorString | 0xc821156960>: {
s: "Namespace e2e-tests-init-container-5ht6x is active",
}
Namespace e2e-tests-init-container-5ht6x is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Issues about this test specifically: #30078 #30142
Failed: [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc821400080>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-container-probe-1pq5b/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-container-probe-1pq5b/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-container-probe-1pq5b/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #28084
Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:112
Expected error:
<*errors.StatusError | 0xc8227dec80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-job-xn6f1/jobs\\\"\") has prevented the request from succeeding (post jobs.extensions)",
Reason: "InternalError",
Details: {
Name: "",
Group: "extensions",
Kind: "jobs",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-job-xn6f1/jobs\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-job-xn6f1/jobs\"") has prevented the request from succeeding (post jobs.extensions)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:102
Issues about this test specifically: #28006 #28866 #29613
Failed: [k8s.io] EmptyDir wrapper volumes should becomes running {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:180
Sep 30 14:05:05.916: unable to delete git server pod git-server-8e356e43-8751-11e6-a3d5-0242ac11000b: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-wrapper-smopd/pods/git-server-8e356e43-8751-11e6-a3d5-0242ac11000b\"") has prevented the request from succeeding (delete pods git-server-8e356e43-8751-11e6-a3d5-0242ac11000b)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:167
Issues about this test specifically: #28450
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:544
Expected error:
<*errors.errorString | 0xc820be6b70>: {
s: "failed to wait for pods responding: timed out waiting for the condition",
}
failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:339
Issues about this test specifically: #27324
Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:198
Error creating Pod
Expected error:
<*errors.StatusError | 0xc821949900>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-downward-api-q203h/pods\\\"\") has prevented the request from succeeding (post pods)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "pods",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-downward-api-q203h/pods\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-downward-api-q203h/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:50
Failed: [k8s.io] Secrets should be consumable from pods in volume with Mode set in the item [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc821b6c400>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-secrets-jw6h9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-secrets-jw6h9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-secrets-jw6h9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #31969
Failed: [k8s.io] InitContainer should not start app containers and fail the pod if init containers fail on a RestartNever pod {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 13:58:45.306: Couldn't delete ns: "e2e-tests-init-container-5ht6x": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-init-container-5ht6x\"") has prevented the request from succeeding (delete namespaces e2e-tests-init-container-5ht6x) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-init-container-5ht6x\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-init-container-5ht6x)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820fc1950), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Issues about this test specifically: #32054
Failed: [k8s.io] Downward API volume should provide container's memory limit {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:18:47.983: Couldn't delete ns: "e2e-tests-downward-api-ta23e": an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-downward-api-ta23e/jobs\"") has prevented the request from succeeding (get jobs.batch) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-downward-api-ta23e/jobs\\\"\") has prevented the request from succeeding (get jobs.batch)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820b182d0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc82146c180>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-proxy-s97fi/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-proxy-s97fi/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-proxy-s97fi/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:360
Failed to create host0Pod: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pod-disks-dhp79/pods\"") has prevented the request from succeeding (post pods)
Expected error:
<*errors.StatusError | 0xc82140c680>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pod-disks-dhp79/pods\\\"\") has prevented the request from succeeding (post pods)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "pods",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-pod-disks-dhp79/pods\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pod-disks-dhp79/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:334
Issues about this test specifically: #28010 #28427
Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc821400200>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-restart-g7bxo/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-restart-g7bxo/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-restart-g7bxo/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #26744 #26929
Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:233
Expected error:
<exec.CodeExitError>: {
Err: {
s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://146.148.50.169 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-5j4ft] [] 0xc82236c9a0 error: error when stopping \"STDIN\": error getting replication controllers: error getting replication controllers: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-5j4ft/replicationcontrollers\\\"\") has prevented the request from succeeding (get replicationControllers)\n [] <nil> 0xc82236d280 exit status 1 <nil> true [0xc820120820 0xc8201208a8 0xc8201208c0] [0xc820120820 0xc8201208a8 0xc8201208c0] [0xc820120838 0xc820120890 0xc8201208b8] [0xaf0b90 0xaf0cf0 0xaf0cf0] 0xc821292360}:\nCommand stdout:\n\nstderr:\nerror: error when stopping \"STDIN\": error getting replication controllers: error getting replication controllers: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-5j4ft/replicationcontrollers\\\"\") has prevented the request from succeeding (get replicationControllers)\n\nerror:\nexit status 1\n",
},
Code: 1,
}
error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://146.148.50.169 --kubeconfig=/workspace/.kube/config delete --grace-period=0 -f - --namespace=e2e-tests-kubectl-5j4ft] [] 0xc82236c9a0 error: error when stopping "STDIN": error getting replication controllers: error getting replication controllers: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-5j4ft/replicationcontrollers\"") has prevented the request from succeeding (get replicationControllers)
[] <nil> 0xc82236d280 exit status 1 <nil> true [0xc820120820 0xc8201208a8 0xc8201208c0] [0xc820120820 0xc8201208a8 0xc8201208c0] [0xc820120838 0xc820120890 0xc8201208b8] [0xaf0b90 0xaf0cf0 0xaf0cf0] 0xc821292360}:
Command stdout:
stderr:
error: error when stopping "STDIN": error getting replication controllers: error getting replication controllers: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-5j4ft/replicationcontrollers\"") has prevented the request from succeeding (get replicationControllers)
error:
exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183
Issues about this test specifically: #28437 #29084 #29256 #29397
Failed: [k8s.io] kubelet [k8s.io] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:163
Expected error:
<*errors.StatusError | 0xc8212b0b80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/nodes/gke-jenkins-e2e-default-pool-61e0c845-gvli\\\"\") has prevented the request from succeeding (put nodes gke-jenkins-e2e-default-pool-61e0c845-gvli)",
Reason: "InternalError",
Details: {
Name: "gke-jenkins-e2e-default-pool-61e0c845-gvli",
Group: "",
Kind: "nodes",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/nodes/gke-jenkins-e2e-default-pool-61e0c845-gvli\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/nodes/gke-jenkins-e2e-default-pool-61e0c845-gvli\"") has prevented the request from succeeding (put nodes gke-jenkins-e2e-default-pool-61e0c845-gvli)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet.go:125
Issues about this test specifically: #28106
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc821400080>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-resize-nodes-7bby7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-resize-nodes-7bby7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-resize-nodes-7bby7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #27233
Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc821c0a100>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-5q8za/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-5q8za/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-5q8za/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Failed: [k8s.io] ScheduledJob should not schedule new jobs when ForbidConcurrent [Slow] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduledjob.go:50
Sep 30 14:04:57.212: Unexpected error getting {batch v2alpha1 scheduledjobs}: an error on the server ("Internal Server Error: \"/apis/batch/v2alpha1/namespaces/e2e-tests-scheduledjob-fezxm/scheduledjobs\"") has prevented the request from succeeding (get scheduledjobs.batch)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:449
Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:06:42.950: Couldn't delete ns: "e2e-tests-emptydir-mcei2": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-emptydir-mcei2/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-emptydir-mcei2/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8218c2000), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:65
Expected error:
<*errors.StatusError | 0xc8220a5b00>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-deployment-dfrla/pods?labelSelector=name%3Dsample-pod-3\\\"\") has prevented the request from succeeding (get pods)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "pods",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-deployment-dfrla/pods?labelSelector=name%3Dsample-pod-3\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-deployment-dfrla/pods?labelSelector=name%3Dsample-pod-3\"") has prevented the request from succeeding (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:420
Issues about this test specifically: #29197
Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:289
Expected error:
<exec.CodeExitError>: {
Err: {
s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://146.148.50.169 --kubeconfig=/workspace/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-w46ia] [] <nil> an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-w46ia/services?labelSelector=name%3Dnginx\\\"\") has prevented the request from succeeding (get services)\n [] <nil> 0xc820a2b240 exit status 1 <nil> true [0xc820037900 0xc820037918 0xc820037930] [0xc820037900 0xc820037918 0xc820037930] [0xc820037910 0xc820037928] [0xaf0cf0 0xaf0cf0] 0xc820acc000}:\nCommand stdout:\n\nstderr:\nan error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-w46ia/services?labelSelector=name%3Dnginx\\\"\") has prevented the request from succeeding (get services)\n\nerror:\nexit status 1\n",
},
Code: 1,
}
error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://146.148.50.169 --kubeconfig=/workspace/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-w46ia] [] <nil> an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-w46ia/services?labelSelector=name%3Dnginx\"") has prevented the request from succeeding (get services)
[] <nil> 0xc820a2b240 exit status 1 <nil> true [0xc820037900 0xc820037918 0xc820037930] [0xc820037900 0xc820037918 0xc820037930] [0xc820037910 0xc820037928] [0xaf0cf0 0xaf0cf0] 0xc820acc000}:
Command stdout:
stderr:
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-w46ia/services?labelSelector=name%3Dnginx\"") has prevented the request from succeeding (get services)
error:
exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183
Issues about this test specifically: #27156 #28979 #30489 #33649
Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 13:58:36.401: Couldn't delete ns: "e2e-tests-kubectl-6nw2k": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-6nw2k/ingresses\"") has prevented the request from succeeding (get ingresses.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-6nw2k/ingresses\\\"\") has prevented the request from succeeding (get ingresses.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc821a1fa40), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:420
Expected error:
<*exec.ExitError | 0xc821918840>: {
ProcessState: {
pid: 7498,
status: 256,
rusage: {
Utime: {Sec: 0, Usec: 96000},
Stime: {Sec: 0, Usec: 36000},
Maxrss: 39200,
Ixrss: 0,
Idrss: 0,
Isrss: 0,
Minflt: 2061,
Majflt: 0,
Nswap: 0,
Inblock: 0,
Oublock: 0,
Msgsnd: 0,
Msgrcv: 0,
Nsignals: 0,
Nvcsw: 1819,
Nivcsw: 28,
},
},
Stderr: nil,
}
exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:405
Issues about this test specifically: #26127 #28081
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
<*errors.errorString | 0xc8211bd8f0>: {
s: "Namespace e2e-tests-configmap-333sd is active",
}
Namespace e2e-tests-configmap-333sd is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
<*errors.errorString | 0xc821129a00>: {
s: "Namespace e2e-tests-configmap-333sd is active",
}
Namespace e2e-tests-configmap-333sd is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Issues about this test specifically: #28091
Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:08:23.914: Couldn't delete ns: "e2e-tests-container-probe-2f9hr": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-container-probe-2f9hr/jobs\"") has prevented the request from succeeding (get jobs.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-container-probe-2f9hr/jobs\\\"\") has prevented the request from succeeding (get jobs.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820a7c410), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Issues about this test specifically: #30264
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc821be1580>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-hlka2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-hlka2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-horizontal-pod-autoscaling-hlka2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #27479 #27675 #28097 #32950
Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc821aa6a00>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-services-zwfnb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-zwfnb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-zwfnb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #29831
Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:139
Failed to create host0Pod: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pod-disks-b73hy/pods\"") has prevented the request from succeeding (post pods)
Expected error:
<*errors.StatusError | 0xc822752800>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pod-disks-b73hy/pods\\\"\") has prevented the request from succeeding (post pods)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "pods",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-pod-disks-b73hy/pods\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pod-disks-b73hy/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:99
Issues about this test specifically: #28984 #33827
Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:268
Expected error:
<*errors.StatusError | 0xc822a0eb80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-resourcequota-ple7o/resourcequotas\\\"\") has prevented the request from succeeding (post resourceQuotas)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "resourceQuotas",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-ple7o/resourcequotas\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-ple7o/resourcequotas\"") has prevented the request from succeeding (post resourceQuotas)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:240
Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 13:58:43.391: Couldn't delete ns: "e2e-tests-pods-tagku": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-pods-tagku/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-pods-tagku/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc821143c20), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Issues about this test specifically: #33008
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:01:42.615: Couldn't delete ns: "e2e-tests-kubectl-8fakt": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-8fakt/ingresses\"") has prevented the request from succeeding (get ingresses.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-8fakt/ingresses\\\"\") has prevented the request from succeeding (get ingresses.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82190a230), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Issues about this test specifically: #26209 #29227 #32132
Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:162
Expected error:
<*errors.StatusError | 0xc821324300>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-job-h8ahc/pods?labelSelector=job%3Dscale-down\\\"\") has prevented the request from succeeding (get pods)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "pods",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-job-h8ahc/pods?labelSelector=job%3Dscale-down\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-job-h8ahc/pods?labelSelector=job%3Dscale-down\"") has prevented the request from succeeding (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:149
Issues about this test specifically: #29066 #30592 #31065 #33171
Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:19:32.059: Couldn't delete ns: "e2e-tests-metrics-grabber-vzype": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-metrics-grabber-vzype\"") has prevented the request from succeeding (delete namespaces e2e-tests-metrics-grabber-vzype) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-metrics-grabber-vzype\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-metrics-grabber-vzype)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc821a784b0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Issues about this test specifically: #29513
Failed: [k8s.io] ConfigMap should be consumable via environment variable [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:209
Error creating Pod
Expected error:
<*errors.StatusError | 0xc82264cf00>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-configmap-94fs9/pods\\\"\") has prevented the request from succeeding (post pods)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "pods",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-configmap-94fs9/pods\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-configmap-94fs9/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:50
Issues about this test specifically: #27079
Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:08:42.938: All nodes should be ready after test, an error on the server ("Internal Server Error: \"/api/v1/nodes\"") has prevented the request from succeeding (get nodes)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:418
Issues about this test specifically: #28337
Failed: [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:307
Failed to create host0ROPod
Expected error:
<*errors.StatusError | 0xc82170ee00>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server has asked for the client to provide credentials (post pods)",
Reason: "Unauthorized",
Details: {
Name: "",
Group: "",
Kind: "pods",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Unauthorized",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 401,
},
}
the server has asked for the client to provide credentials (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:288
Issues about this test specifically: #28297
Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:04:35.306: Couldn't delete ns: "e2e-tests-nettest-uld8j": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-nettest-uld8j/daemonsets\"") has prevented the request from succeeding (get daemonsets.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-nettest-uld8j/daemonsets\\\"\") has prevented the request from succeeding (get daemonsets.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820fdf180), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Issues about this test specifically: #26171 #28188
Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:243
Expected error:
<exec.CodeExitError>: {
Err: {
s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://146.148.50.169 --kubeconfig=/workspace/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-6xd0r] [] 0xc8205e98a0 Error from server: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-6xd0r/replicationcontrollers/update-demo-nautilus\\\"\") has prevented the request from succeeding (get replicationControllers update-demo-nautilus)\n [] <nil> 0xc820ad4180 exit status 1 <nil> true [0xc820b72c28 0xc820b72c50 0xc820b72c68] [0xc820b72c28 0xc820b72c50 0xc820b72c68] [0xc820b72c30 0xc820b72c48 0xc820b72c58] [0xaf0b90 0xaf0cf0 0xaf0cf0] 0xc820b88060}:\nCommand stdout:\n\nstderr:\nError from server: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-6xd0r/replicationcontrollers/update-demo-nautilus\\\"\") has prevented the request from succeeding (get replicationControllers update-demo-nautilus)\n\nerror:\nexit status 1\n",
},
Code: 1,
}
error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://146.148.50.169 --kubeconfig=/workspace/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-6xd0r] [] 0xc8205e98a0 Error from server: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-6xd0r/replicationcontrollers/update-demo-nautilus\"") has prevented the request from succeeding (get replicationControllers update-demo-nautilus)
[] <nil> 0xc820ad4180 exit status 1 <nil> true [0xc820b72c28 0xc820b72c50 0xc820b72c68] [0xc820b72c28 0xc820b72c50 0xc820b72c68] [0xc820b72c30 0xc820b72c48 0xc820b72c58] [0xaf0b90 0xaf0cf0 0xaf0cf0] 0xc820b88060}:
Command stdout:
stderr:
Error from server: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-6xd0r/replicationcontrollers/update-demo-nautilus\"") has prevented the request from succeeding (get replicationControllers update-demo-nautilus)
error:
exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183
Issues about this test specifically: #26425 #26715 #28825 #28880 #32854
Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:62
Expected error:
<*errors.errorString | 0xc821826fb0>: {
s: "failed to get logs from downward-api-27f2b5fb-8752-11e6-a3d5-0242ac11000b for dapi-container: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-downward-api-zrqj4/pods/downward-api-27f2b5fb-8752-11e6-a3d5-0242ac11000b/log?container=dapi-container&previous=false\\\"\") has prevented the request from succeeding (get pods downward-api-27f2b5fb-8752-11e6-a3d5-0242ac11000b)",
}
failed to get logs from downward-api-27f2b5fb-8752-11e6-a3d5-0242ac11000b for dapi-container: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-downward-api-zrqj4/pods/downward-api-27f2b5fb-8752-11e6-a3d5-0242ac11000b/log?container=dapi-container&previous=false\"") has prevented the request from succeeding (get pods downward-api-27f2b5fb-8752-11e6-a3d5-0242ac11000b)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283
Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc82275db00>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-replication-controller-b1evu/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replication-controller-b1evu/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replication-controller-b1evu/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #32087
Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:216
Expected error:
<*errors.StatusError | 0xc8211d7580>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-nsdeletetest-b9vxe\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-nsdeletetest-b9vxe)",
Reason: "InternalError",
Details: {
Name: "e2e-tests-nsdeletetest-b9vxe",
Group: "",
Kind: "namespaces",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-nsdeletetest-b9vxe\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-nsdeletetest-b9vxe\"") has prevented the request from succeeding (delete namespaces e2e-tests-nsdeletetest-b9vxe)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:113
Failed: [k8s.io] EmptyDir volumes should support (root,0644,tmpfs) [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc821400200>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-emptydir-m4ado/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-m4ado/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-emptydir-m4ado/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc821949080>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-6pvwi/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-6pvwi/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-6pvwi/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #26138 #28429 #28737
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Sep 30 10:08:46.875: CPU usage exceeding limits:
node gke-jenkins-e2e-default-pool-61e0c845-gvli:
container "kubelet": expected 95th% usage < 0.220; got 0.236
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:187
Issues about this test specifically: #26982 #32214
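The failure above is a quantitative check: the 95th-percentile CPU usage of the `kubelet` container (0.236 cores) exceeded the limit (0.220). As a minimal sketch of how such a percentile-vs-limit check works — a hypothetical nearest-rank implementation, not the e2e framework's actual code — the comparison looks like:

```go
package main

import (
	"fmt"
	"sort"
)

// percentile returns the p-th percentile (0-100) of samples using the
// nearest-rank method. (Assumption for illustration; the e2e framework
// may compute percentiles differently.)
func percentile(samples []float64, p float64) float64 {
	s := append([]float64(nil), samples...)
	sort.Float64s(s)
	rank := int(p/100*float64(len(s))+0.5) - 1
	if rank < 0 {
		rank = 0
	}
	if rank >= len(s) {
		rank = len(s) - 1
	}
	return s[rank]
}

func main() {
	// Hypothetical per-interval CPU usage samples (cores) for "kubelet".
	usage := []float64{0.18, 0.20, 0.21, 0.22, 0.236, 0.19, 0.17, 0.21, 0.20, 0.23}
	limit := 0.220 // the 95th-percentile limit from the failure above

	if got := percentile(usage, 95); got >= limit {
		// Mirrors the message format seen in the log.
		fmt.Printf("container %q: expected 95th%% usage < %.3f; got %.3f\n",
			"kubelet", limit, got)
	}
}
```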
Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:56
Expected error:
<*errors.errorString | 0xc8214a19e0>: {
s: "failed to get logs from pod-host-path-test for test-container-1: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-hostpath-1l23d/pods/pod-host-path-test/log?container=test-container-1&previous=false\\\"\") has prevented the request from succeeding (get pods pod-host-path-test)",
}
failed to get logs from pod-host-path-test for test-container-1: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-hostpath-1l23d/pods/pod-host-path-test/log?container=test-container-1&previous=false\"") has prevented the request from succeeding (get pods pod-host-path-test)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283
Issues about this test specifically: #32122
Failed: [k8s.io] Downward API volume should provide container's memory request {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:02:36.795: Couldn't delete ns: "e2e-tests-downward-api-svtx6": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-downward-api-svtx6/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-downward-api-svtx6/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82092c000), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Issues about this test specifically: #29707
Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc82272d900>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-e2e-privilegedpod-ne9j7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-e2e-privilegedpod-ne9j7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-e2e-privilegedpod-ne9j7/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #29519 #32451
Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed creating the first deployment
Expected error:
<*errors.StatusError | 0xc820ffb000>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-m75kv/deployments\\\"\") has prevented the request from succeeding (post deployments.extensions)",
Reason: "InternalError",
Details: {
Name: "",
Group: "extensions",
Kind: "deployments",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-m75kv/deployments\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-deployment-m75kv/deployments\"") has prevented the request from succeeding (post deployments.extensions)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1227
Issues about this test specifically: #31502 #32947
Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:319
Expected error:
<*errors.StatusError | 0xc821862300>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "the server has asked for the client to provide credentials (get pods)",
Reason: "Unauthorized",
Details: {
Name: "",
Group: "",
Kind: "pods",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Unauthorized",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 401,
},
}
the server has asked for the client to provide credentials (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/daemon_restart.go:175
Issues about this test specifically: #27502 #28722 #32037
Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:195
Expected error:
<*errors.StatusError | 0xc82217b880>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-node-problem-detector-fang2/configmaps\\\"\") has prevented the request from succeeding (post configmaps)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "configmaps",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-node-problem-detector-fang2/configmaps\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-node-problem-detector-fang2/configmaps\"") has prevented the request from succeeding (post configmaps)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:148
Issues about this test specifically: #28069 #28168 #28343 #29656 #33183
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:06:44.532: Couldn't delete ns: "e2e-tests-kubectl-tkyv6": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-tkyv6\"") has prevented the request from succeeding (delete namespaces e2e-tests-kubectl-tkyv6) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-tkyv6\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-kubectl-tkyv6)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc822938b90), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Issues about this test specifically: #26126 #30653
Failed: [k8s.io] ScheduledJob should replace jobs when ReplaceConcurrent {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc8220f8f80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-scheduledjob-55vv0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-55vv0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-55vv0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #30542 #31460 #31479 #31552 #32032
Failed: [k8s.io] Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc8228fad00>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pod-disks-eh03e/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pod-disks-eh03e/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pod-disks-eh03e/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #28283
Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc820ffb600>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-secrets-o7zd4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-secrets-o7zd4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-secrets-o7zd4/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #32025
Failed: [k8s.io] Variable Expansion should allow composing env vars into new env vars [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc821a6f600>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-var-expansion-g2qz0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-var-expansion-g2qz0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-var-expansion-g2qz0/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #29461
Failed: [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:80
Sep 30 14:14:18.866: Failed: an error on the server ("Internal Server Error: \"/healthz\"") has prevented the request from succeeding
Body: Internal Server Error: "/healthz"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:77
Issues about this test specifically: #26838
Failed: [k8s.io] Generated release_1_2 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:193
Sep 30 14:19:53.134: Failed to query for pods: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-clientset-5h4qn/pods?labelSelector=time%3D89604179\"") has prevented the request from succeeding (get pods)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/generated_clientset.go:136
Issues about this test specifically: #32043
Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc822188400>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-configmap-333sd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-333sd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-333sd/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #30352
Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
Error creating Pod
Expected error:
<*errors.StatusError | 0xc82264cc00>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-emptydir-qt8g8/pods\\\"\") has prevented the request from succeeding (post pods)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "pods",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-qt8g8/pods\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-emptydir-qt8g8/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:50
Issues about this test specifically: #30851
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 13:58:21.484: Couldn't delete ns: "e2e-tests-resize-nodes-tn0oo": an error on the server ("Internal Server Error: \"/apis\"") has prevented the request from succeeding (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis\\\"\") has prevented the request from succeeding", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc822786eb0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Issues about this test specifically: #27470 #30156
Failed: [k8s.io] V1Job should keep restarting failed pods {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:16:59.437: Couldn't delete ns: "e2e-tests-v1job-27l2e": the server has asked for the client to provide credentials (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"the server has asked for the client to provide credentials", Reason:"Unauthorized", Details:(*unversioned.StatusDetails)(0xc821aacff0), Code:401}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Issues about this test specifically: #29657
Failed: [k8s.io] ServiceAccounts should ensure a single API token exists {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:02:29.439: Couldn't delete ns: "e2e-tests-svcaccounts-8bpyr": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-svcaccounts-8bpyr/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-svcaccounts-8bpyr/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc821a1fea0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Issues about this test specifically: #31889
Failed: [k8s.io] Networking [k8s.io] Granular Checks should function for pod communication on a single node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:248
Expected error:
<*errors.StatusError | 0xc8216a8600>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-nettest-ua702/pods?fieldSelector=metadata.name%3Dsame-node-webserver\\\"\") has prevented the request from succeeding (get pods)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "pods",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-ua702/pods?fieldSelector=metadata.name%3Dsame-node-webserver\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nettest-ua702/pods?fieldSelector=metadata.name%3Dsame-node-webserver\"") has prevented the request from succeeding (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:4972
Issues about this test specifically: #28827 #31867
Failed: [k8s.io] Downward API should provide pod IP as an env var {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:83
Expected error:
<*errors.errorString | 0xc8218270e0>: {
s: "failed to get logs from downward-api-1b3b9838-8752-11e6-a3d5-0242ac11000b for dapi-container: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-downward-api-fz6mh/pods/downward-api-1b3b9838-8752-11e6-a3d5-0242ac11000b/log?container=dapi-container&previous=false\\\"\") has prevented the request from succeeding (get pods downward-api-1b3b9838-8752-11e6-a3d5-0242ac11000b)",
}
failed to get logs from downward-api-1b3b9838-8752-11e6-a3d5-0242ac11000b for dapi-container: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-downward-api-fz6mh/pods/downward-api-1b3b9838-8752-11e6-a3d5-0242ac11000b/log?container=dapi-container&previous=false\"") has prevented the request from succeeding (get pods downward-api-1b3b9838-8752-11e6-a3d5-0242ac11000b)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:662
Expected error:
<exec.CodeExitError>: {
Err: {
s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://146.148.50.169 --kubeconfig=/workspace/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-g972d] [] <nil> Error from server: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-g972d/services/redis-master\\\"\") has prevented the request from succeeding (get services redis-master)\n [] <nil> 0xc822969760 exit status 1 <nil> true [0xc82053b1a8 0xc82053b1c0 0xc82053b1d8] [0xc82053b1a8 0xc82053b1c0 0xc82053b1d8] [0xc82053b1b8 0xc82053b1d0] [0xaf0cf0 0xaf0cf0] 0xc82278af60}:\nCommand stdout:\n\nstderr:\nError from server: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-g972d/services/redis-master\\\"\") has prevented the request from succeeding (get services redis-master)\n\nerror:\nexit status 1\n",
},
Code: 1,
}
error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://146.148.50.169 --kubeconfig=/workspace/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-g972d] [] <nil> Error from server: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-g972d/services/redis-master\"") has prevented the request from succeeding (get services redis-master)
[] <nil> 0xc822969760 exit status 1 <nil> true [0xc82053b1a8 0xc82053b1c0 0xc82053b1d8] [0xc82053b1a8 0xc82053b1c0 0xc82053b1d8] [0xc82053b1b8 0xc82053b1d0] [0xaf0cf0 0xaf0cf0] 0xc82278af60}:
Command stdout:
stderr:
Error from server: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-g972d/services/redis-master\"") has prevented the request from succeeding (get services redis-master)
error:
exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183
Issues about this test specifically: #28774 #31429
Failed: [k8s.io] Downward API volume should provide container's cpu limit {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:09:30.830: Couldn't delete ns: "e2e-tests-downward-api-duic8": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-downward-api-duic8\"") has prevented the request from succeeding (delete namespaces e2e-tests-downward-api-duic8) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-downward-api-duic8\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-downward-api-duic8)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820fc11d0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:319
Expected error:
<*errors.StatusError | 0xc821be1980>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-pods-x5cpb/pods?fieldSelector=metadata.name%3Dpod-update-d9350e0f-8752-11e6-a3d5-0242ac11000b\\\"\") has prevented the request from succeeding (get pods)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "pods",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-x5cpb/pods?fieldSelector=metadata.name%3Dpod-update-d9350e0f-8752-11e6-a3d5-0242ac11000b\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-pods-x5cpb/pods?fieldSelector=metadata.name%3Dpod-update-d9350e0f-8752-11e6-a3d5-0242ac11000b\"") has prevented the request from succeeding (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:57
Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:484
Expected error:
<*errors.StatusError | 0xc82272d680>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-resourcequota-cyu00/resourcequotas\\\"\") has prevented the request from succeeding (post resourceQuotas)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "resourceQuotas",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-cyu00/resourcequotas\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-cyu00/resourcequotas\"") has prevented the request from succeeding (post resourceQuotas)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:415
Issues about this test specifically: #31635
Failed: [k8s.io] ResourceQuota should verify ResourceQuota with terminating scopes. {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:410
Expected error:
<*errors.StatusError | 0xc82260db80>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-resourcequota-d1li2/pods/test-pod\\\"\") has prevented the request from succeeding (delete pods test-pod)",
Reason: "InternalError",
Details: {
Name: "test-pod",
Group: "",
Kind: "pods",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-d1li2/pods/test-pod\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-d1li2/pods/test-pod\"") has prevented the request from succeeding (delete pods test-pod)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:361
Issues about this test specifically: #31158
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota without scopes {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc82124a780>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-18f31/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-18f31/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-18f31/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:58
Error creating Pod
Expected error:
<*errors.StatusError | 0xc822752380>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-downward-api-pps2y/pods\\\"\") has prevented the request from succeeding (post pods)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "pods",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-downward-api-pps2y/pods\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-downward-api-pps2y/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:50
Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:40
Expected error:
<*errors.StatusError | 0xc82146d100>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-replicaset-uaol8/pods?labelSelector=name%3Dmy-hostname-basic-53016028-8752-11e6-a3d5-0242ac11000b\\\"\") has prevented the request from succeeding (get pods)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "pods",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-replicaset-uaol8/pods?labelSelector=name%3Dmy-hostname-basic-53016028-8752-11e6-a3d5-0242ac11000b\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-replicaset-uaol8/pods?labelSelector=name%3Dmy-hostname-basic-53016028-8752-11e6-a3d5-0242ac11000b\"") has prevented the request from succeeding (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:98
Issues about this test specifically: #30981
Failed: [k8s.io] Deployment paused deployment should be able to scale {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc821056700>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-deployment-td716/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-td716/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-deployment-td716/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #29828
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc821465e00>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-0k28x/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-0k28x/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-0k28x/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #31066 #31967 #32219 #32535
Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:21:02.976: Couldn't delete ns: "e2e-tests-init-container-0hmqp": an error on the server ("Internal Server Error: \"/apis\"") has prevented the request from succeeding (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis\\\"\") has prevented the request from succeeding", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8221e13b0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Issues about this test specifically: #31408
Failed: [k8s.io] Variable Expansion should allow substituting values in a container's command [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:06:56.622: Couldn't delete ns: "e2e-tests-var-expansion-ajr7o": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-var-expansion-ajr7o\"") has prevented the request from succeeding (delete namespaces e2e-tests-var-expansion-ajr7o) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-var-expansion-ajr7o\\\"\") has prevented the request from succeeding (delete namespaces e2e-tests-var-expansion-ajr7o)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82140e0a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Failed: [k8s.io] Services should use same NodePort with same port but different protocols {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc821997600>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-services-p3x9x/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-p3x9x/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-services-p3x9x/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Failed: [k8s.io] MetricsGrabber should grab all metrics from a ControllerManager. {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc8216a9900>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-qeen9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-qeen9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-metrics-grabber-qeen9/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:06:27.975: All nodes should be ready after test, the server has asked for the client to provide credentials (get nodes)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:418
Issues about this test specifically: #26129 #32341
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Expected error:
<*errors.errorString | 0xc82090c4b0>: {
s: "Error creating replication controller: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-qjxsv/replicationcontrollers\\\"\") has prevented the request from succeeding (post replicationControllers)",
}
Error creating replication controller: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-qjxsv/replicationcontrollers\"") has prevented the request from succeeding (post replicationControllers)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:345
Issues about this test specifically: #28657 #30519
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
<*errors.errorString | 0xc820f01660>: {
s: "Namespace e2e-tests-init-container-5ht6x is active",
}
Namespace e2e-tests-init-container-5ht6x is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Issues about this test specifically: #29516
Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:287
Error creating Pod
Expected error:
<*errors.StatusError | 0xc8212b1680>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-configmap-lpjlh/pods\\\"\") has prevented the request from succeeding (post pods)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "pods",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-configmap-lpjlh/pods\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-configmap-lpjlh/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:50
Issues about this test specifically: #29751 #30430
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc821056780>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-kubectl-gvkk2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-gvkk2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-kubectl-gvkk2/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #28493 #29964
Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:198
Expected error:
<*errors.StatusError | 0xc8227df200>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-resourcequota-m96uf/resourcequotas\\\"\") has prevented the request from succeeding (post resourceQuotas)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "resourceQuotas",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-m96uf/resourcequotas\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-resourcequota-m96uf/resourcequotas\"") has prevented the request from succeeding (post resourceQuotas)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resource_quota.go:140
Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Expected error:
<*errors.StatusError | 0xc8220f9400>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-fli37/services\\\"\") has prevented the request from succeeding (post services)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "services",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-fli37/services\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-fli37/services\"") has prevented the request from succeeding (post services)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:328
Issues about this test specifically: #27443 #27835 #28900 #32512
Failed: [k8s.io] ScheduledJob should schedule multiple jobs concurrently {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc820f80180>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-scheduledjob-e03bs/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-e03bs/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-e03bs/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #31657
Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:402
Expected error:
<exec.CodeExitError>: {
Err: {
s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://146.148.50.169 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-3eoou run -i --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure failure-2 -- /bin/sh -c cat && exit 42] [] 0xc821628060 Error from server: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-3eoou/pods?labelSelector=controller-uid%3D1e798f05-8751-11e6-93c4-42010af00045\\\"\") has prevented the request from succeeding (get pods)\n [] <nil> 0xc821628680 exit status 1 <nil> true [0xc820f58740 0xc820f58768 0xc820f58778] [0xc820f58740 0xc820f58768 0xc820f58778] [0xc820f58748 0xc820f58760 0xc820f58770] [0xaf0b90 0xaf0cf0 0xaf0cf0] 0xc821bea060}:\nCommand stdout:\n\nstderr:\nError from server: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-3eoou/pods?labelSelector=controller-uid%3D1e798f05-8751-11e6-93c4-42010af00045\\\"\") has prevented the request from succeeding (get pods)\n\nerror:\nexit status 1\n",
},
Code: 1,
}
error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://146.148.50.169 --kubeconfig=/workspace/.kube/config --namespace=e2e-tests-kubectl-3eoou run -i --image=gcr.io/google_containers/busybox:1.24 --restart=OnFailure failure-2 -- /bin/sh -c cat && exit 42] [] 0xc821628060 Error from server: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-3eoou/pods?labelSelector=controller-uid%3D1e798f05-8751-11e6-93c4-42010af00045\"") has prevented the request from succeeding (get pods)
[] <nil> 0xc821628680 exit status 1 <nil> true [0xc820f58740 0xc820f58768 0xc820f58778] [0xc820f58740 0xc820f58768 0xc820f58778] [0xc820f58748 0xc820f58760 0xc820f58770] [0xaf0b90 0xaf0cf0 0xaf0cf0] 0xc821bea060}:
Command stdout:
stderr:
Error from server: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-3eoou/pods?labelSelector=controller-uid%3D1e798f05-8751-11e6-93c4-42010af00045\"") has prevented the request from succeeding (get pods)
error:
exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:661
Issues about this test specifically: #31151
Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:04:51.475: Couldn't delete ns: "e2e-tests-configmap-7f1q6": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-configmap-7f1q6/replicationcontrollers\"") has prevented the request from succeeding (get replicationcontrollers) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-configmap-7f1q6/replicationcontrollers\\\"\") has prevented the request from succeeding (get replicationcontrollers)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc8212f05a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Issues about this test specifically: #32949
Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 14:11:22.432: Couldn't delete ns: "e2e-tests-var-expansion-z0b8u": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/e2e-tests-var-expansion-z0b8u/horizontalpodautoscalers\"") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/e2e-tests-var-expansion-z0b8u/horizontalpodautoscalers\\\"\") has prevented the request from succeeding (get horizontalpodautoscalers.autoscaling)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82140f900), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Issues about this test specifically: #28503
Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc8220c4580>: {
ErrStatus: {
TypeMeta: {Kind: "", APIVersion: ""},
ListMeta: {SelfLink: "", ResourceVersion: ""},
Status: "Failure",
Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-replication-controller-t7vh5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
Reason: "InternalError",
Details: {
Name: "",
Group: "",
Kind: "serviceAccounts",
Causes: [
{
Type: "UnexpectedServerResponse",
Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replication-controller-t7vh5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
Field: "",
},
],
RetryAfterSeconds: 0,
},
Code: 500,
},
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-replication-controller-t7vh5/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #26870
Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:100
error watching a pod
Expected error:
<*errors.StatusError | 0xc822a0ef00>: {
    ErrStatus: {
        TypeMeta: {Kind: "", APIVersion: ""},
        ListMeta: {SelfLink: "", ResourceVersion: ""},
        Status: "Failure",
        Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-init-container-rncvj/pods?fieldSelector=metadata.name%3Dpod-init-dd231ec6-8752-11e6-a3d5-0242ac11000b&resourceVersion=45040\\\"\") has prevented the request from succeeding (get pods)",
        Reason: "InternalError",
        Details: {
            Name: "",
            Group: "",
            Kind: "pods",
            Causes: [
                {
                    Type: "UnexpectedServerResponse",
                    Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-init-container-rncvj/pods?fieldSelector=metadata.name%3Dpod-init-dd231ec6-8752-11e6-a3d5-0242ac11000b&resourceVersion=45040\"",
                    Field: "",
                },
            ],
            RetryAfterSeconds: 0,
        },
        Code: 500,
    },
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-init-container-rncvj/pods?fieldSelector=metadata.name%3Dpod-init-dd231ec6-8752-11e6-a3d5-0242ac11000b&resourceVersion=45040\"") has prevented the request from succeeding (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:82
Issues about this test specifically: #31936
Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
<*errors.StatusError | 0xc8228fa100>: {
    ErrStatus: {
        TypeMeta: {Kind: "", APIVersion: ""},
        ListMeta: {SelfLink: "", ResourceVersion: ""},
        Status: "Failure",
        Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-dns-01idl/pods?fieldSelector=metadata.name%3Ddns-test-127b4836-8753-11e6-a3d5-0242ac11000b\\\"\") has prevented the request from succeeding (get pods)",
        Reason: "InternalError",
        Details: {
            Name: "",
            Group: "",
            Kind: "pods",
            Causes: [
                {
                    Type: "UnexpectedServerResponse",
                    Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-dns-01idl/pods?fieldSelector=metadata.name%3Ddns-test-127b4836-8753-11e6-a3d5-0242ac11000b\"",
                    Field: "",
                },
            ],
            RetryAfterSeconds: 0,
        },
        Code: 500,
    },
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-dns-01idl/pods?fieldSelector=metadata.name%3Ddns-test-127b4836-8753-11e6-a3d5-0242ac11000b\"") has prevented the request from succeeding (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236
Issues about this test specifically: #26194 #26338 #30345
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/31/
Run so broken it didn't make JUnit output!
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/32/
Run so broken it didn't make JUnit output!
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/33/
Multiple broken tests:
Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
<*errors.errorString | 0xc820f5b860>: {
    s: "Namespace e2e-tests-configmap-b5e2a is active",
}
Namespace e2e-tests-configmap-b5e2a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Issues about this test specifically: #27662 #29820 #31971 #32505
Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc820cb5780>: {
    ErrStatus: {
        TypeMeta: {Kind: "", APIVersion: ""},
        ListMeta: {SelfLink: "", ResourceVersion: ""},
        Status: "Failure",
        Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-container-probe-cxjtb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
        Reason: "InternalError",
        Details: {
            Name: "",
            Group: "",
            Kind: "serviceAccounts",
            Causes: [
                {
                    Type: "UnexpectedServerResponse",
                    Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-container-probe-cxjtb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                    Field: "",
                },
            ],
            RetryAfterSeconds: 0,
        },
        Code: 500,
    },
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-container-probe-cxjtb/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1039
Expected error:
<*errors.errorString | 0xc820c438c0>: {
    s: "kubectl delete failed output: , err: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.167.168 --kubeconfig=/workspace/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-zjaz8] [] <nil> Error from server: an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-zjaz8/deployments/e2e-test-nginx-deployment\\\"\") has prevented the request from succeeding (delete deployments.extensions e2e-test-nginx-deployment)\n [] <nil> 0xc8213ff420 exit status 1 <nil> true [0xc820c66088 0xc820c660a0 0xc820c660b8] [0xc820c66088 0xc820c660a0 0xc820c660b8] [0xc820c66098 0xc820c660b0] [0xaf0cf0 0xaf0cf0] 0xc820cdb0e0}:\nCommand stdout:\n\nstderr:\nError from server: an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-zjaz8/deployments/e2e-test-nginx-deployment\\\"\") has prevented the request from succeeding (delete deployments.extensions e2e-test-nginx-deployment)\n\nerror:\nexit status 1\n",
}
kubectl delete failed output: , err: error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.167.168 --kubeconfig=/workspace/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-zjaz8] [] <nil> Error from server: an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-zjaz8/deployments/e2e-test-nginx-deployment\"") has prevented the request from succeeding (delete deployments.extensions e2e-test-nginx-deployment)
[] <nil> 0xc8213ff420 exit status 1 <nil> true [0xc820c66088 0xc820c660a0 0xc820c660b8] [0xc820c66088 0xc820c660a0 0xc820c660b8] [0xc820c66098 0xc820c660b0] [0xaf0cf0 0xaf0cf0] 0xc820cdb0e0}:
Command stdout:
stderr:
Error from server: an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-kubectl-zjaz8/deployments/e2e-test-nginx-deployment\"") has prevented the request from succeeding (delete deployments.extensions e2e-test-nginx-deployment)
error:
exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1038
Issues about this test specifically: #27532
Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc8205b0700>: {
    ErrStatus: {
        TypeMeta: {Kind: "", APIVersion: ""},
        ListMeta: {SelfLink: "", ResourceVersion: ""},
        Status: "Failure",
        Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-configmap-b5e2a/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
        Reason: "InternalError",
        Details: {
            Name: "",
            Group: "",
            Kind: "serviceAccounts",
            Causes: [
                {
                    Type: "UnexpectedServerResponse",
                    Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-b5e2a/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                    Field: "",
                },
            ],
            RetryAfterSeconds: 0,
        },
        Code: 500,
    },
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-configmap-b5e2a/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #29751 #30430
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
<*errors.errorString | 0xc820e938c0>: {
    s: "Namespace e2e-tests-configmap-b5e2a is active",
}
Namespace e2e-tests-configmap-b5e2a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
<*errors.errorString | 0xc8212fc4b0>: {
    s: "Namespace e2e-tests-configmap-b5e2a is active",
}
Namespace e2e-tests-configmap-b5e2a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Issues about this test specifically: #30078 #30142
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
<*errors.errorString | 0xc820575fe0>: {
    s: "Namespace e2e-tests-configmap-b5e2a is active",
}
Namespace e2e-tests-configmap-b5e2a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Issues about this test specifically: #28019
Failed: [k8s.io] Services should serve multiport endpoints from pods [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:229
Expected error:
<*errors.StatusError | 0xc820a32380>: {
    ErrStatus: {
        TypeMeta: {Kind: "", APIVersion: ""},
        ListMeta: {SelfLink: "", ResourceVersion: ""},
        Status: "Failure",
        Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-services-06yj5/services\\\"\") has prevented the request from succeeding (post services)",
        Reason: "InternalError",
        Details: {
            Name: "",
            Group: "",
            Kind: "services",
            Causes: [
                {
                    Type: "UnexpectedServerResponse",
                    Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-06yj5/services\"",
                    Field: "",
                },
            ],
            RetryAfterSeconds: 0,
        },
        Code: 500,
    },
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-06yj5/services\"") has prevented the request from succeeding (post services)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:185
Issues about this test specifically: #29831
Failed: [k8s.io] HostPath should support subPath [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:113
Expected error:
<*errors.errorString | 0xc820cf2b50>: {
    s: "failed to get logs from pod-host-path-test for test-container-2: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-hostpath-fxrpz/pods/pod-host-path-test/log?container=test-container-2&previous=false\\\"\") has prevented the request from succeeding (get pods pod-host-path-test)",
}
failed to get logs from pod-host-path-test for test-container-2: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-hostpath-fxrpz/pods/pod-host-path-test/log?container=test-container-2&previous=false\"") has prevented the request from succeeding (get pods pod-host-path-test)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2283
Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:280
0 (0; 312.800232ms): path /api/v1/namespaces/e2e-tests-proxy-5btt5/services/https:proxy-service-7zk8z:tlsportname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-proxy-5btt5/services/https:proxy-service-7zk8z:tlsportname2/proxy/\"") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Internal Server Error: "/api/v1/namespaces/e2e-tests-proxy-5btt5/services/https:proxy-service-7zk8z:tlsportname2/proxy/" }],RetryAfterSeconds:0,} Code:500}
0 (0; 317.782858ms): path /api/v1/namespaces/e2e-tests-proxy-5btt5/services/http:proxy-service-7zk8z:portname2/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-proxy-5btt5/services/http:proxy-service-7zk8z:portname2/proxy/\"") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Internal Server Error: "/api/v1/namespaces/e2e-tests-proxy-5btt5/services/http:proxy-service-7zk8z:portname2/proxy/" }],RetryAfterSeconds:0,} Code:500}
0 (0; 321.16444ms): path /api/v1/namespaces/e2e-tests-proxy-5btt5/pods/proxy-service-7zk8z-i4319:162/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-proxy-5btt5/pods/proxy-service-7zk8z-i4319:162/proxy/\"") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Internal Server Error: "/api/v1/namespaces/e2e-tests-proxy-5btt5/pods/proxy-service-7zk8z-i4319:162/proxy/" }],RetryAfterSeconds:0,} Code:500}
0 (0; 334.153193ms): path /api/v1/namespaces/e2e-tests-proxy-5btt5/services/proxy-service-7zk8z:portname1/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-proxy-5btt5/services/proxy-service-7zk8z:portname1/proxy/\"") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Internal Server Error: "/api/v1/namespaces/e2e-tests-proxy-5btt5/services/proxy-service-7zk8z:portname1/proxy/" }],RetryAfterSeconds:0,} Code:500}
0 (0; 337.886976ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-5btt5/services/proxy-service-7zk8z:portname1/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Internal Server Error: \"/api/v1/proxy/namespaces/e2e-tests-proxy-5btt5/services/proxy-service-7zk8z:portname1/\"") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Internal Server Error: "/api/v1/proxy/namespaces/e2e-tests-proxy-5btt5/services/proxy-service-7zk8z:portname1/" }],RetryAfterSeconds:0,} Code:500}
0 (0; 416.824768ms): path /api/v1/proxy/namespaces/e2e-tests-proxy-5btt5/services/https:proxy-service-7zk8z:tlsportname2/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Internal Server Error: \"/api/v1/proxy/namespaces/e2e-tests-proxy-5btt5/services/https:proxy-service-7zk8z:tlsportname2/\"") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Internal Server Error: "/api/v1/proxy/namespaces/e2e-tests-proxy-5btt5/services/https:proxy-service-7zk8z:tlsportname2/" }],RetryAfterSeconds:0,} Code:500}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:278
Issues about this test specifically: #26164 #26210
Failed: [k8s.io] Services should check NodePort out-of-range {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:950
Expected
<string>: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-services-30e1w/services/nodeport-range-test\"") has prevented the request from succeeding (put services nodeport-range-test)
to match regular expression
<string>: 23854.*port is not in the valid range
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:935
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
<*errors.errorString | 0xc821bb3490>: {
    s: "Namespace e2e-tests-configmap-b5e2a is active",
}
Namespace e2e-tests-configmap-b5e2a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:278
Sep 30 21:03:47.828: CPU usage exceeding limits:
node gke-jenkins-e2e-default-pool-74952393-leb7:
container "kubelet": expected 95th% usage < 0.220; got 0.238
node gke-jenkins-e2e-default-pool-74952393-vc2d:
container "kubelet": expected 50th% usage < 0.170; got 0.198, container "kubelet": expected 95th% usage < 0.220; got 0.238
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:187
Issues about this test specifically: #26982 #32214
Failed: [k8s.io] Pods should be submitted and removed [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:266
failed to GET scheduled pod
Expected error:
<*errors.StatusError | 0xc821155500>: {
    ErrStatus: {
        TypeMeta: {Kind: "", APIVersion: ""},
        ListMeta: {SelfLink: "", ResourceVersion: ""},
        Status: "Failure",
        Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-pods-dms25/pods/pod-submit-remove-3fead2a2-877c-11e6-ad93-0242ac110008\\\"\") has prevented the request from succeeding (get pods pod-submit-remove-3fead2a2-877c-11e6-ad93-0242ac110008)",
        Reason: "InternalError",
        Details: {
            Name: "pod-submit-remove-3fead2a2-877c-11e6-ad93-0242ac110008",
            Group: "",
            Kind: "pods",
            Causes: [
                {
                    Type: "UnexpectedServerResponse",
                    Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-pods-dms25/pods/pod-submit-remove-3fead2a2-877c-11e6-ad93-0242ac110008\"",
                    Field: "",
                },
            ],
            RetryAfterSeconds: 0,
        },
        Code: 500,
    },
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-pods-dms25/pods/pod-submit-remove-3fead2a2-877c-11e6-ad93-0242ac110008\"") has prevented the request from succeeding (get pods pod-submit-remove-3fead2a2-877c-11e6-ad93-0242ac110008)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:210
Issues about this test specifically: #26224
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Network when a node becomes unreachable [replication controller] recreates pods scheduled on the unreachable node AND allows scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:384
Expected error:
<*errors.StatusError | 0xc821184980>: {
    ErrStatus: {
        TypeMeta: {Kind: "", APIVersion: ""},
        ListMeta: {SelfLink: "", ResourceVersion: ""},
        Status: "Failure",
        Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-resize-nodes-esapl/pods\\\"\") has prevented the request from succeeding (get pods)",
        Reason: "InternalError",
        Details: {
            Name: "",
            Group: "",
            Kind: "pods",
            Causes: [
                {
                    Type: "UnexpectedServerResponse",
                    Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-resize-nodes-esapl/pods\"",
                    Field: "",
                },
            ],
            RetryAfterSeconds: 0,
        },
        Code: 500,
    },
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-resize-nodes-esapl/pods\"") has prevented the request from succeeding (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:377
Issues about this test specifically: #27324
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
<*errors.errorString | 0xc820e35a10>: {
    s: "Namespace e2e-tests-configmap-b5e2a is active",
}
Namespace e2e-tests-configmap-b5e2a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Issues about this test specifically: #29516
Failed: [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc820e6f180>: {
    ErrStatus: {
        TypeMeta: {Kind: "", APIVersion: ""},
        ListMeta: {SelfLink: "", ResourceVersion: ""},
        Status: "Failure",
        Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-resourcequota-ldb1p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
        Reason: "InternalError",
        Details: {
            Name: "",
            Group: "",
            Kind: "serviceAccounts",
            Causes: [
                {
                    Type: "UnexpectedServerResponse",
                    Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-resourcequota-ldb1p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                    Field: "",
                },
            ],
            RetryAfterSeconds: 0,
        },
        Code: 500,
    },
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-resourcequota-ldb1p/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
<*errors.errorString | 0xc820f85cd0>: {
    s: "Namespace e2e-tests-configmap-b5e2a is active",
}
Namespace e2e-tests-configmap-b5e2a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Issues about this test specifically: #27115 #28070 #30747 #31341
Failed: [k8s.io] ScheduledJob should schedule multiple jobs concurrently {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
<*errors.StatusError | 0xc820415a80>: {
    ErrStatus: {
        TypeMeta: {Kind: "", APIVersion: ""},
        ListMeta: {SelfLink: "", ResourceVersion: ""},
        Status: "Failure",
        Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-scheduledjob-l2s90/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
        Reason: "InternalError",
        Details: {
            Name: "",
            Group: "",
            Kind: "serviceAccounts",
            Causes: [
                {
                    Type: "UnexpectedServerResponse",
                    Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-l2s90/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                    Field: "",
                },
            ],
            RetryAfterSeconds: 0,
        },
        Code: 500,
    },
}
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-scheduledjob-l2s90/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #31657
Failed: [k8s.io] KubeProxy should test kube-proxy [Slow] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:107
Sep 30 19:10:28.218: Failed to create netserver-0 pod: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-e2e-kubeproxy-yqvx3/pods\"") has prevented the request from succeeding (post pods)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubeproxy.go:551
Issues about this test specifically: #26490 #33669
Failed: [k8s.io] V1Job should scale a job down {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:166
Expected error:
<kubectl.ScaleError>: {
    FailureType: 0,
    ResourceVersion: "Unknown",
    ActualError: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-v1job-k6x4q/jobs/scale-down\\\"\") has prevented the request from succeeding (get jobs.batch scale-down)",
            Reason: "InternalError",
            Details: {
                Name: "scale-down",
                Group: "batch",
                Kind: "jobs",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-v1job-k6x4q/jobs/scale-down\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    },
}
Scaling the resource failed with: an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-v1job-k6x4q/jobs/scale-down\"") has prevented the request from succeeding (get jobs.batch scale-down); Current resource version Unknown
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:161
Issues about this test specifically: #30216 #31031 #32086
Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends data, and disconnects [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:287
Sep 30 19:10:00.914: Failed to read from kubectl port-forward stdout: EOF
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/portforward.go:154
Issues about this test specifically: #27680
Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:54
Error creating Pod
Expected error:
<*errors.StatusError | 0xc82119be00>: {
    ErrStatus: {
        TypeMeta: {Kind: "", APIVersion: ""},
        ListMeta: {SelfLink: "", ResourceVersion: ""},
        Status: "Failure",
        Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-configmap-w2jje/pods\\\"\") has prevented the request from succeeding (post pods)",
        Reason: "InternalError",
        Details: {
            Name: "",
            Group: "",
            Kind: "pods",
            Causes: [
                {
                    Type: "UnexpectedServerResponse",
                    Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-configmap-w2jje/pods\"",
                    Field: "",
                },
            ],
            RetryAfterSeconds: 0,
        },
        Code: 500,
    },
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-configmap-w2jje/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:50
Issues about this test specifically: #32949
Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
<*errors.errorString | 0xc821422e20>: {
    s: "Namespace e2e-tests-configmap-b5e2a is active",
}
Namespace e2e-tests-configmap-b5e2a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:55
Expected error:
<*errors.StatusError | 0xc820186780>: {
    ErrStatus: {
        TypeMeta: {Kind: "", APIVersion: ""},
        ListMeta: {SelfLink: "", ResourceVersion: ""},
        Status: "Failure",
        Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-nettest-rv0jb/pods\\\"\") has prevented the request from succeeding (post pods)",
        Reason: "InternalError",
        Details: {
            Name: "",
            Group: "",
            Kind: "pods",
            Causes: [
                {
                    Type: "UnexpectedServerResponse",
                    Message: "Internal Server Error: \"/api/v1/namespaces/e2e-tests-nettest-rv0jb/pods\"",
                    Field: "",
                },
            ],
            RetryAfterSeconds: 0,
        },
        Code: 500,
    },
}
an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-nettest-rv0jb/pods\"") has prevented the request from succeeding (post pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:54
Issues about this test specifically: #26171 #28188
Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 19:07:52.566: Couldn't delete ns: "e2e-tests-secrets-89qxc": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-secrets-89qxc/daemonsets\"") has prevented the request from succeeding (get daemonsets.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-secrets-89qxc/daemonsets\\\"\") has prevented the request from succeeding (get daemonsets.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82097b2c0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820d31fd0>: {
        s: "Namespace e2e-tests-configmap-b5e2a is active",
    }
Namespace e2e-tests-configmap-b5e2a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 19:08:55.791: Couldn't delete ns: "e2e-tests-services-c3kr5": an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/e2e-tests-services-c3kr5/jobs\"") has prevented the request from succeeding (get jobs.batch) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/e2e-tests-services-c3kr5/jobs\\\"\") has prevented the request from succeeding (get jobs.batch)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820aca370), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 19:09:47.650: Couldn't delete ns: "e2e-tests-configmap-ip09q": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-configmap-ip09q/deployments\"") has prevented the request from succeeding (get deployments.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-configmap-ip09q/deployments\\\"\") has prevented the request from succeeding (get deployments.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc820953180), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Issues about this test specifically: #29052
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820cafc90>: {
        s: "Namespace e2e-tests-configmap-b5e2a is active",
    }
Namespace e2e-tests-configmap-b5e2a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:662
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.167.168 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-bc6d8] [] 0xc8209340c0 Error from server: error when creating \"STDIN\": an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-bc6d8/services\\\"\") has prevented the request from succeeding (post services)\n [] <nil> 0xc8209347c0 exit status 1 <nil> true [0xc82088c008 0xc82088c060 0xc82088c070] [0xc82088c008 0xc82088c060 0xc82088c070] [0xc82088c028 0xc82088c058 0xc82088c068] [0xaf0b90 0xaf0cf0 0xaf0cf0] 0xc820558240}:\nCommand stdout:\n\nstderr:\nError from server: error when creating \"STDIN\": an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-kubectl-bc6d8/services\\\"\") has prevented the request from succeeding (post services)\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.155.167.168 --kubeconfig=/workspace/.kube/config create -f - --namespace=e2e-tests-kubectl-bc6d8] [] 0xc8209340c0 Error from server: error when creating "STDIN": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-bc6d8/services\"") has prevented the request from succeeding (post services)
[] <nil> 0xc8209347c0 exit status 1 <nil> true [0xc82088c008 0xc82088c060 0xc82088c070] [0xc82088c008 0xc82088c060 0xc82088c070] [0xc82088c028 0xc82088c058 0xc82088c068] [0xaf0b90 0xaf0cf0 0xaf0cf0] 0xc820558240}:
Command stdout:
stderr:
Error from server: error when creating "STDIN": an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-kubectl-bc6d8/services\"") has prevented the request from succeeding (post services)
error:
exit status 1
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2183
Issues about this test specifically: #28774 #31429
Failed: [k8s.io] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds) {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:222
Expected error:
    <*errors.StatusError | 0xc820efea80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-nslifetest-52-nxbc6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nslifetest-52-nxbc6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-nslifetest-52-nxbc6/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/namespace.go:46
Issues about this test specifically: #27957
Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc820c8d210>: {
        s: "Namespace e2e-tests-configmap-b5e2a is active",
    }
Namespace e2e-tests-configmap-b5e2a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Issues about this test specifically: #31918
Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8218f5770>: {
        s: "Namespace e2e-tests-configmap-b5e2a is active",
    }
Namespace e2e-tests-configmap-b5e2a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Issues about this test specifically: #28071
Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 19:10:53.839: Couldn't delete ns: "e2e-tests-downward-api-6pb17": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-downward-api-6pb17/networkpolicies\"") has prevented the request from succeeding (get networkpolicies.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-downward-api-6pb17/networkpolicies\\\"\") has prevented the request from succeeding (get networkpolicies.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82088a550), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:56
Expected error:
    <*errors.errorString | 0xc820563e80>: {
        s: "error waiting for deployment \"test-new-deployment\" status to match expectation: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-tests-deployment-sodf1/pods?labelSelector=name%3Dnginx\\\"\") has prevented the request from succeeding (get pods)",
    }
error waiting for deployment "test-new-deployment" status to match expectation: an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-tests-deployment-sodf1/pods?labelSelector=name%3Dnginx\"") has prevented the request from succeeding (get pods)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:274
Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821379140>: {
        s: "Namespace e2e-tests-configmap-b5e2a is active",
    }
Namespace e2e-tests-configmap-b5e2a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Issues about this test specifically: #28091
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:133
Expected error:
    <*errors.StatusError | 0xc8216ad300>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/watch/namespaces/e2e-tests-resize-nodes-a4rig/serviceaccounts?fieldSelector=metadata.name%3Ddefault\\\"\") has prevented the request from succeeding (get serviceAccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceAccounts",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-resize-nodes-a4rig/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
an error on the server ("Internal Server Error: \"/api/v1/watch/namespaces/e2e-tests-resize-nodes-a4rig/serviceaccounts?fieldSelector=metadata.name%3Ddefault\"") has prevented the request from succeeding (get serviceAccounts)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:223
Issues about this test specifically: #27233
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc821266ee0>: {
        s: "Namespace e2e-tests-configmap-b5e2a is active",
    }
Namespace e2e-tests-configmap-b5e2a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Issues about this test specifically: #27655
Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82184cd90>: {
        s: "Namespace e2e-tests-configmap-b5e2a is active",
    }
Namespace e2e-tests-configmap-b5e2a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Issues about this test specifically: #28853 #31585
Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc8210ee2e0>: {
        s: "Namespace e2e-tests-configmap-b5e2a is active",
    }
Namespace e2e-tests-configmap-b5e2a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Issues about this test specifically: #29816 #30018
Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:233
Expected error:
    <*errors.errorString | 0xc82114f8a0>: {
        s: "Namespace e2e-tests-configmap-b5e2a is active",
    }
Namespace e2e-tests-configmap-b5e2a is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:211
Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:134
Sep 30 19:09:40.006: Couldn't delete ns: "e2e-tests-container-probe-6p9dj": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/e2e-tests-container-probe-6p9dj/replicationcontrollers\"") has prevented the request from succeeding (get replicationcontrollers.extensions) (&errors.StatusError{ErrStatus:unversioned.Status{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""}, ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/e2e-tests-container-probe-6p9dj/replicationcontrollers\\\"\") has prevented the request from succeeding (get replicationcontrollers.extensions)", Reason:"InternalError", Details:(*unversioned.StatusDetails)(0xc82032acd0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:338
[FLAKE-PING] @rmmh
This flaky-test issue would love to have more attention.
Failed: https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gci-gke-test/1/
Run so broken it didn't make JUnit output!