kubernetes / kubernetes

Production-Grade Container Scheduling and Management
https://kubernetes.io
Apache License 2.0

ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new: broken test run #38513

Closed: k8s-github-robot closed this issue 7 years ago

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/48/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422277ca0>: {
        s: "Namespace e2e-tests-services-3qqmw is active",
    }
    Namespace e2e-tests-services-3qqmw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883
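
Note: every `Namespace e2e-tests-services-... is active` failure in this run has the same cause. SchedulerPredicates tests run [Serial] and refuse to start while a namespace left over from an earlier test is still terminating, so one stuck services namespace fails the whole batch. A minimal sketch of that kind of pre-test guard, assuming a modern client-go clientset (`checkNoLeftoverNamespaces` is an illustrative name, not the framework's actual helper):

```go
// Sketch only: fail fast if any e2e-tests-* namespace from a previous
// test is still active, mirroring the guard at scheduler_predicates.go:78.
package e2eutil

import (
	"context"
	"fmt"
	"strings"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func checkNoLeftoverNamespaces(ctx context.Context, cs kubernetes.Interface) error {
	nsList, err := cs.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, ns := range nsList.Items {
		if strings.HasPrefix(ns.Name, "e2e-tests-") && ns.Status.Phase == v1.NamespaceActive {
			// Mirrors the message seen in every scheduler failure above.
			return fmt.Errorf("Namespace %s is active", ns.Name)
		}
	}
	return nil
}
```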

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422277a20>: {
        s: "Namespace e2e-tests-services-3qqmw is active",
    }
    Namespace e2e-tests-services-3qqmw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4224c7670>: {
        s: "Namespace e2e-tests-services-3qqmw is active",
    }
    Namespace e2e-tests-services-3qqmw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212b7320>: {
        s: "Namespace e2e-tests-services-3qqmw is active",
    }
    Namespace e2e-tests-services-3qqmw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422959910>: {
        s: "Namespace e2e-tests-services-3qqmw is active",
    }
    Namespace e2e-tests-services-3qqmw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc42001a190>: {s: "unexpected EOF"}
    unexpected EOF
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:194
Expected error:
    <*errors.errorString | 0xc421d05280>: {
        s: "failed to get logs from pod-secrets-08e8841a-be27-11e6-8fd6-0242ac110002 for secret-env-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-secrets-08e8841a-be27-11e6-8fd6-0242ac110002)",
    }
    failed to get logs from pod-secrets-08e8841a-be27-11e6-8fd6-0242ac110002 for secret-env-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-secrets-08e8841a-be27-11e6-8fd6-0242ac110002)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #32025 #36823
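
Note: `failed to get logs from pod ...` means the log request to the apiserver errored, not the pod itself. The fetch the test performs looks roughly like this sketch, assuming a modern client-go clientset:

```go
// Sketch only: fetching container logs through the apiserver; a server-side
// failure here produces the `an error on the server ("unknown")` shape above.
package e2eutil

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

func getPodLogs(ctx context.Context, cs kubernetes.Interface, ns, pod, container string) (string, error) {
	raw, err := cs.CoreV1().Pods(ns).
		GetLogs(pod, &v1.PodLogOptions{Container: container}).
		DoRaw(ctx)
	if err != nil {
		return "", fmt.Errorf("failed to get logs from %s for %s: %v", pod, container, err)
	}
	return string(raw), nil
}
```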

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e0b360>: {
        s: "Namespace e2e-tests-services-3qqmw is active",
    }
    Namespace e2e-tests-services-3qqmw is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/53/

Multiple broken tests:

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:59
Expected error:
    <*errors.errorString | 0xc421ad6260>: {
        s: "expected pod \"pod-configmaps-44bcb27a-bf55-11e6-a7b6-0242ac11000a\" success: gave up waiting for pod 'pod-configmaps-44bcb27a-bf55-11e6-a7b6-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-44bcb27a-bf55-11e6-a7b6-0242ac11000a" success: gave up waiting for pod 'pod-configmaps-44bcb27a-bf55-11e6-a7b6-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #32949
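
Note: each `gave up waiting for pod ... to be 'success or failure' after 5m0s` message is a five-minute poll on the pod phase timing out. Schematically (a sketch with an illustrative name, not the framework's exact helper):

```go
// Sketch only: poll the pod until its phase is terminal, giving up after 5m,
// which is exactly the timeout reported in the failures above.
package e2eutil

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForPodSuccessOrFailure(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case v1.PodSucceeded:
			return true, nil // done
		case v1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		default:
			return false, nil // keep polling until the 5m timeout
		}
	})
}
```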

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:48
Expected error:
    <*errors.errorString | 0xc420cbee90>: {
        s: "expected pod \"downwardapi-volume-8c891cb1-bf56-11e6-a7b6-0242ac11000a\" success: gave up waiting for pod 'downwardapi-volume-8c891cb1-bf56-11e6-a7b6-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-8c891cb1-bf56-11e6-a7b6-0242ac11000a" success: gave up waiting for pod 'downwardapi-volume-8c891cb1-bf56-11e6-a7b6-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #31836

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:65
Expected error:
    <*errors.errorString | 0xc420b518d0>: {
        s: "expected pod \"pod-439593eb-bf46-11e6-a7b6-0242ac11000a\" success: gave up waiting for pod 'pod-439593eb-bf46-11e6-a7b6-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-439593eb-bf46-11e6-a7b6-0242ac11000a" success: gave up waiting for pod 'pod-439593eb-bf46-11e6-a7b6-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #33987

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:113
Expected error:
    <*errors.errorString | 0xc42130b1c0>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:189
Expected error:
    <*errors.errorString | 0xc42127f210>: {
        s: "expected pod \"downwardapi-volume-aa77bc3d-bf4d-11e6-a7b6-0242ac11000a\" success: gave up waiting for pod 'downwardapi-volume-aa77bc3d-bf4d-11e6-a7b6-0242ac11000a' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-aa77bc3d-bf4d-11e6-a7b6-0242ac11000a" success: gave up waiting for pod 'downwardapi-volume-aa77bc3d-bf4d-11e6-a7b6-0242ac11000a' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:56
Expected error:
    <*errors.errorString | 0xc421060de0>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #32122 #38040

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/55/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b521b0>: {
        s: "Namespace e2e-tests-services-jrtz9 is active",
    }
    Namespace e2e-tests-services-jrtz9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422794f90>: {
        s: "Namespace e2e-tests-services-jrtz9 is active",
    }
    Namespace e2e-tests-services-jrtz9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e24cd0>: {
        s: "Namespace e2e-tests-services-jrtz9 is active",
    }
    Namespace e2e-tests-services-jrtz9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421726660>: {
        s: "Namespace e2e-tests-services-jrtz9 is active",
    }
    Namespace e2e-tests-services-jrtz9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc420926ff0>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 48, 55],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.48.55:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
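
Note: in the `*net.OpError` above, `Err: 0x6f` is errno 111, ECONNREFUSED on Linux, and the 16-byte `Addr.IP` is just the IPv4-mapped form of 35.184.48.55. This is what a TCP dial returns while the apiserver is down mid-restart; a self-contained reproduction sketch:

```go
// Sketch only: reproduce the dial error shape outside the test by dialing a
// port with no listener.
package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

func main() {
	_, err := net.DialTimeout("tcp", "127.0.0.1:443", 2*time.Second)
	var opErr *net.OpError
	if errors.As(err, &opErr) {
		// Modern Go prints "connect: connection refused"; the Go toolchain
		// behind these 1.5-era builds rendered it "getsockopt: connection refused".
		fmt.Println(opErr.Op, opErr.Net, opErr.Addr, opErr.Err)
	}
}
```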

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4209a8020>: {
        s: "Namespace e2e-tests-services-jrtz9 is active",
    }
    Namespace e2e-tests-services-jrtz9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421039570>: {
        s: "Namespace e2e-tests-services-jrtz9 is active",
    }
    Namespace e2e-tests-services-jrtz9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421fcfdc0>: {
        s: "Namespace e2e-tests-services-jrtz9 is active",
    }
    Namespace e2e-tests-services-jrtz9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421722430>: {
        s: "Namespace e2e-tests-services-jrtz9 is active",
    }
    Namespace e2e-tests-services-jrtz9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e17560>: {
        s: "Namespace e2e-tests-services-jrtz9 is active",
    }
    Namespace e2e-tests-services-jrtz9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421b0b570>: {
        s: "Namespace e2e-tests-services-jrtz9 is active",
    }
    Namespace e2e-tests-services-jrtz9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422448cd0>: {
        s: "Namespace e2e-tests-services-jrtz9 is active",
    }
    Namespace e2e-tests-services-jrtz9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:68
Expected error:
    <*errors.errorString | 0xc42189f250>: {
        s: "failed to get logs from pod-configmaps-a6824d5e-bfd7-11e6-9bb1-0242ac110007 for configmap-volume-test: an error on the server (\"unknown\") has prevented the request from succeeding (get pods pod-configmaps-a6824d5e-bfd7-11e6-9bb1-0242ac110007)",
    }
    failed to get logs from pod-configmaps-a6824d5e-bfd7-11e6-9bb1-0242ac110007 for configmap-volume-test: an error on the server ("unknown") has prevented the request from succeeding (get pods pod-configmaps-a6824d5e-bfd7-11e6-9bb1-0242ac110007)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a7ee10>: {
        s: "Namespace e2e-tests-services-jrtz9 is active",
    }
    Namespace e2e-tests-services-jrtz9 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/59/

Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc4203d0f30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584
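
Note: `timed out waiting for the condition` is not DNS-specific; it is the fixed message of the wait utility's timeout error (`wait.ErrWaitTimeout` in k8s.io/apimachinery/pkg/util/wait). The DNS tests poll a probe until lookups inside the cluster succeed; a minimal demo, where `dnsRecordsReady` is a hypothetical stand-in for the test's real in-pod lookup and the short window is only for the demo:

```go
// Sketch only: where "timed out waiting for the condition" comes from.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	dnsRecordsReady := func() (bool, error) { return false, nil } // never succeeds

	err := wait.Poll(500*time.Millisecond, 2*time.Second, dnsRecordsReady)
	fmt.Println(err) // prints: timed out waiting for the condition
}
```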

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Dec 13 01:30:19.070: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1580

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Dec 12 18:46:19.913: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27397 #27917 #31592
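
Note: the HPA failures in this run all share one shape: the test drives CPU load, then waits up to 15 minutes for the scale target's replica count to settle at the expected size. A sketch of that wait, assuming a modern client-go clientset (`waitForReplicas` is an illustrative name):

```go
// Sketch only: the 15m0s wait behind "timeout waiting 15m0s for pods size to be N".
package e2eutil

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForReplicas(ctx context.Context, cs kubernetes.Interface, ns, name string, want int32) error {
	err := wait.Poll(10*time.Second, 15*time.Minute, func() (bool, error) {
		d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return d.Status.ReadyReplicas == want, nil
	})
	if err != nil {
		return fmt.Errorf("timeout waiting for pods size to be %d: %v", want, err)
	}
	return nil
}
```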

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Dec 12 21:34:04.666: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:629
Dec 13 00:28:47.790: Missing KubeDNS in kubectl cluster-info
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:626

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc421327fb0>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-13 00:33:01 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-13 00:33:33 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-13 00:33:01 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.5 PodIP:10.96.0.42 StartTime:2016-12-13 00:33:01 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc4215502a0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://adb3d1a5367c262535ce2174eab0596f8497721d5ffd5f6d36271ba90534075d}]}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-13 00:33:01 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-13 00:33:33 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-13 00:33:01 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.5 PodIP:10.96.0.42 StartTime:2016-12-13 00:33:01 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc4215502a0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://adb3d1a5367c262535ce2174eab0596f8497721d5ffd5f6d36271ba90534075d}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188
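
Note: the `wget-test` pod is a one-shot busybox container whose exit code is the verdict; `Phase:Failed` with the container Terminated means the in-container wget could not reach the outside world during the upgrade window. A sketch of an equivalent pod spec, using the image from the status dump above (the target URL is a placeholder, not necessarily the one networking.go uses):

```go
// Sketch only: a one-shot connectivity-check pod in the style of wget-test.
package e2eutil

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func wgetTestPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wget-test"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever, // one shot: ends Succeeded or Failed
			Containers: []v1.Container{{
				Name:    "wget-test-container",
				Image:   "gcr.io/google_containers/busybox:1.24",
				Command: []string{"wget", "-T", "30", "-qO-", "http://example.com"},
			}},
		},
	}
}
```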

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Dec 12 21:50:20.980: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4203d0f30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:437
Expected error:
    <*errors.errorString | 0xc4203d0f30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #28337

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1098
Dec 13 01:18:01.089: expected un-ready endpoint for Service webserver within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1096

Issues about this test specifically: #26172

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc4203d0f30>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Dec 12 23:20:13.826: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Dec 12 21:16:07.384: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #28657 #30519 #33878

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Dec 12 22:50:58.064: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Dec 12 18:23:42.606: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/60/

Multiple broken tests:

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc42001a190>: {s: "unexpected EOF"}
    unexpected EOF
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4224089f0>: {
        s: "Namespace e2e-tests-services-6wdxf is active",
    }
    Namespace e2e-tests-services-6wdxf is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: list nodes {e2e.go}

exit status 1

Issues about this test specifically: #38667

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421729540>: {
        s: "Namespace e2e-tests-services-6wdxf is active",
    }
    Namespace e2e-tests-services-6wdxf is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422294a20>: {
        s: "Namespace e2e-tests-services-6wdxf is active",
    }
    Namespace e2e-tests-services-6wdxf is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421b2d310>: {
        s: "Namespace e2e-tests-services-6wdxf is active",
    }
    Namespace e2e-tests-services-6wdxf is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4225073c0>: {
        s: "Namespace e2e-tests-services-6wdxf is active",
    }
    Namespace e2e-tests-services-6wdxf is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4224398f0>: {
        s: "Namespace e2e-tests-services-6wdxf is active",
    }
    Namespace e2e-tests-services-6wdxf is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422068860>: {
        s: "Namespace e2e-tests-services-6wdxf is active",
    }
    Namespace e2e-tests-services-6wdxf is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42242c6b0>: {
        s: "Namespace e2e-tests-services-6wdxf is active",
    }
    Namespace e2e-tests-services-6wdxf is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421fbcc10>: {
        s: "Namespace e2e-tests-services-6wdxf is active",
    }
    Namespace e2e-tests-services-6wdxf is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/67/

Multiple broken tests:

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: DiffResources {e2e.go}

Error: 3 leaked resources
[ firewall-rules ]
+k8s-fw-a912d5e59c2f211e682ff42010af0002  bootstrap-e2e  0.0.0.0/0     tcp:80                                  gke-bootstrap-e2e-65b64a53-node
[ forwarding-rules ]
+a912d5e59c2f211e682ff42010af0002  us-central1  35.184.0.81      TCP          us-central1/targetPools/a912d5e59c2f211e682ff42010af0002
[ target-pools ]
+a912d5e59c2f211e682ff42010af0002  us-central1

Issues about this test specifically: #33373 #33416 #34060
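
Note: DiffResources compares listings of cloud resources taken before and after the run; the three `+` lines are a firewall rule, a forwarding rule, and a target pool left behind (the trio a LoadBalancer Service provisions on GCE) that were never cleaned up. The check is essentially a set difference over resource names; a minimal sketch, with `before`/`after` standing in for parsed `gcloud` listings:

```go
// Sketch only: report resources present after the run but not before.
package main

import "fmt"

func diff(before, after []string) []string {
	seen := make(map[string]bool, len(before))
	for _, r := range before {
		seen[r] = true
	}
	var leaked []string
	for _, r := range after {
		if !seen[r] {
			leaked = append(leaked, r)
		}
	}
	return leaked
}

func main() {
	before := []string{"default-allow-ssh"}
	after := []string{"default-allow-ssh", "k8s-fw-a912d5e59c2f211e682ff42010af0002"}
	fmt.Println(diff(before, after)) // -> the leaked firewall rule
}
```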

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Expected error:
    <*errors.errorString | 0xc420efe190>: {
        s: "error running gcloud [container clusters --project=gke-up-g1-3-g1-5-up-clu-n --zone=us-central1-a upgrade bootstrap-e2e --cluster-version=1.5.2-beta.0.13+09cb1a9df8455c --quiet --image-type=gci]; got error exit status 1, stdout \"\", stderr \"Upgrading bootstrap-e2e...\\n....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................done.\\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\\n name: u'operation-1481826059733-52f9b172'\\n operationType: OperationTypeValueValuesEnum(UPGRADE_NODES, 4)\\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/61807208001/zones/us-central1-a/operations/operation-1481826059733-52f9b172'\\n status: StatusValueValuesEnum(DONE, 3)\\n statusMessage: u'cloud-kubernetes::UNKNOWN: Get https://130.211.227.212/api/v1/nodes/gke-bootstrap-e2e-default-pool-45590978-3en8: harpoon unreachable error UNREACHABLE_CONNECT_REFUSED\\\\ngoroutine 1220178 [running]:\\\\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc437617e60, 0x8c, 0x1, 0x10)\\\\n\\\\tcloud/kubernetes/common/errors.go:627 +0x22f\\\\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2c3bc60, 0xc4311038f0, 0xc422ee87b0)\\\\n\\\\tcloud/kubernetes/common/errors.go:681 +0x1ac\\\\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42e17a2f0, 0x1, 0x1, 0x0, 0x1)\\\\n\\\\tcloud/kubernetes/common/errors.go:852 +0x12b\\\\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42da6ea00, 0xc42ad31980, 0xc428fe41e0, 0x3, 0x4, 0x2, 0x4)\\\\n\\\\tcloud/kubernetes/common/call.go:130 +0x608\\\\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc429262660, 0x7f99789533e8, 0xc42f673fb0, 0xc42ec82820, 0xc4371e1700, 0xc436fef2d0, 0xc42ad31980, 0xc4305a0d00, 0xc5, 0xc424151100, ...)\\\\n\\\\tcloud/kubernetes/server/updater/updater.go:70 +0x693\\\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc42ca37d40, 0x7f99789533e8, 0xc42f673fb0, 0xc42ad31980, 0x7f9978662560, 0xc436fef260, 0xc42ec82820, 0xc4371e1700, 0xc436fef2d0, 0xc4305a0d00, ...)\\\\n\\\\tcloud/kubernetes/server/deploy.go:1844 +0xdc\\\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc42ca37d40, 0x7f99789533e8, 0xc42f673fb0, 0xc42ad31980, 0x7f9978662560, 0xc436fef260, 0xc42a8f5050, 0xc42ec82820, 0xc4371e1580, 0xc424151100, ...)\\\\n\\\\tcloud/kubernetes/server/deploy.go:1781 +0xb5e\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc42cb8ecb0, 0x7f99789533e8, 0xc42f652f90, 0xc42ad31980, 0xc42ec82820, 0xc4371e1580, 0xc4320e7f20, 0x2c80e00, 0xc4294e6b60, 0x3, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1206 +0x3e5\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc42cb8ecb0, 0x7f99789533e8, 0xc42f652f90, 0xc42ad31980, 0xc42ec82820, 0xc4371e1580, 0xc4320e7f20, 0x2c80e00, 0xc4294e6b60, 0x0, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1077 
+0x108\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc42cb8ecb0, 0x7f99789533e8, 0xc42f652f90, 0xc42ad31980, 0xc400000002, 0xc42ec82820, 0xc4371e1580, 0xc4320e7f20, 0x2c80e00, 0xc4294e6b60, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:951 +0x3d4\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc42cb8ecb0, 0x7f99789533e8, 0xc42f652f90, 0xc42ad31980, 0x2c80e00, 0xc4294e6b60, 0xc42ec82820, 0xc4371e1580, 0xc4320e7f20, 0x2, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1973 +0xca\\\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc42ad318c0, 0xc4294e6c40, 0x2c80e00, 0xc4294e6b60, 0xc42e0fec04, 0xc, 0xc400000002, 0xc42e0eb830, 0xc42cb8ecb0, 0x7f99789533e8, ...)\\\\n\\\\tcloud/kubernetes/server/server.go:1965 +0x2fd\\\\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\\\\n\\\\tcloud/kubernetes/server/server.go:1967 +0xc44\\\\n'\\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/61807208001/zones/us-central1-a/clusters/bootstrap-e2e/nodePools/default-pool'\\n zone: u'us-central1-a'>] finished with error: cloud-kubernetes::UNKNOWN: Get https://130.211.227.212/api/v1/nodes/gke-bootstrap-e2e-default-pool-45590978-3en8: harpoon unreachable error UNREACHABLE_CONNECT_REFUSED\\ngoroutine 1220178 [running]:\\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc437617e60, 0x8c, 0x1, 0x10)\\n\\tcloud/kubernetes/common/errors.go:627 +0x22f\\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2c3bc60, 0xc4311038f0, 0xc422ee87b0)\\n\\tcloud/kubernetes/common/errors.go:681 +0x1ac\\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42e17a2f0, 0x1, 0x1, 0x0, 0x1)\\n\\tcloud/kubernetes/common/errors.go:852 +0x12b\\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42da6ea00, 0xc42ad31980, 0xc428fe41e0, 0x3, 0x4, 0x2, 0x4)\\n\\tcloud/kubernetes/common/call.go:130 +0x608\\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc429262660, 0x7f99789533e8, 0xc42f673fb0, 0xc42ec82820, 0xc4371e1700, 0xc436fef2d0, 0xc42ad31980, 0xc4305a0d00, 0xc5, 0xc424151100, ...)\\n\\tcloud/kubernetes/server/updater/updater.go:70 +0x693\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc42ca37d40, 0x7f99789533e8, 0xc42f673fb0, 0xc42ad31980, 0x7f9978662560, 0xc436fef260, 0xc42ec82820, 0xc4371e1700, 0xc436fef2d0, 0xc4305a0d00, ...)\\n\\tcloud/kubernetes/server/deploy.go:1844 +0xdc\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc42ca37d40, 0x7f99789533e8, 0xc42f673fb0, 0xc42ad31980, 0x7f9978662560, 0xc436fef260, 0xc42a8f5050, 0xc42ec82820, 0xc4371e1580, 0xc424151100, ...)\\n\\tcloud/kubernetes/server/deploy.go:1781 +0xb5e\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc42cb8ecb0, 0x7f99789533e8, 0xc42f652f90, 0xc42ad31980, 0xc42ec82820, 0xc4371e1580, 0xc4320e7f20, 0x2c80e00, 0xc4294e6b60, 0x3, ...)\\n\\tcloud/kubernetes/server/server.go:1206 +0x3e5\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc42cb8ecb0, 0x7f99789533e8, 0xc42f652f90, 0xc42ad31980, 0xc42ec82820, 0xc4371e1580, 0xc4320e7f20, 0x2c80e00, 0xc4294e6b60, 0x0, ...)\\n\\tcloud/kubernetes/server/server.go:1077 +0x108\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc42cb8ecb0, 0x7f99789533e8, 0xc42f652f90, 0xc42ad31980, 0xc400000002, 0xc42ec82820, 0xc4371e1580, 0xc4320e7f20, 0x2c80e00, 0xc4294e6b60, ...)\\n\\tcloud/kubernetes/server/server.go:951 
+0x3d4\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc42cb8ecb0, 0x7f99789533e8, 0xc42f652f90, 0xc42ad31980, 0x2c80e00, 0xc4294e6b60, 0xc42ec82820, 0xc4371e1580, 0xc4320e7f20, 0x2, ...)\\n\\tcloud/kubernetes/server/server.go:1973 +0xca\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc42ad318c0, 0xc4294e6c40, 0x2c80e00, 0xc4294e6b60, 0xc42e0fec04, 0xc, 0xc400000002, 0xc42e0eb830, 0xc42cb8ecb0, 0x7f99789533e8, ...)\\n\\tcloud/kubernetes/server/server.go:1965 +0x2fd\\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\\n\\tcloud/kubernetes/server/server.go:1967 +0xc44\\n\\n\"",
    }
    error running gcloud [container clusters --project=gke-up-g1-3-g1-5-up-clu-n --zone=us-central1-a upgrade bootstrap-e2e --cluster-version=1.5.2-beta.0.13+09cb1a9df8455c --quiet --image-type=gci]; got error exit status 1, stdout "", stderr "Upgrading bootstrap-e2e...\n....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................done.\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\n name: u'operation-1481826059733-52f9b172'\n operationType: OperationTypeValueValuesEnum(UPGRADE_NODES, 4)\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/61807208001/zones/us-central1-a/operations/operation-1481826059733-52f9b172'\n status: StatusValueValuesEnum(DONE, 3)\n statusMessage: u'cloud-kubernetes::UNKNOWN: Get https://130.211.227.212/api/v1/nodes/gke-bootstrap-e2e-default-pool-45590978-3en8: harpoon unreachable error UNREACHABLE_CONNECT_REFUSED\\ngoroutine 1220178 [running]:\\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc437617e60, 0x8c, 0x1, 0x10)\\n\\tcloud/kubernetes/common/errors.go:627 +0x22f\\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2c3bc60, 0xc4311038f0, 0xc422ee87b0)\\n\\tcloud/kubernetes/common/errors.go:681 +0x1ac\\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42e17a2f0, 0x1, 0x1, 0x0, 0x1)\\n\\tcloud/kubernetes/common/errors.go:852 +0x12b\\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42da6ea00, 0xc42ad31980, 0xc428fe41e0, 0x3, 0x4, 0x2, 0x4)\\n\\tcloud/kubernetes/common/call.go:130 +0x608\\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc429262660, 0x7f99789533e8, 0xc42f673fb0, 0xc42ec82820, 0xc4371e1700, 0xc436fef2d0, 0xc42ad31980, 0xc4305a0d00, 0xc5, 0xc424151100, ...)\\n\\tcloud/kubernetes/server/updater/updater.go:70 +0x693\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc42ca37d40, 0x7f99789533e8, 0xc42f673fb0, 0xc42ad31980, 0x7f9978662560, 0xc436fef260, 0xc42ec82820, 0xc4371e1700, 0xc436fef2d0, 0xc4305a0d00, ...)\\n\\tcloud/kubernetes/server/deploy.go:1844 +0xdc\\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc42ca37d40, 0x7f99789533e8, 0xc42f673fb0, 0xc42ad31980, 0x7f9978662560, 0xc436fef260, 0xc42a8f5050, 0xc42ec82820, 0xc4371e1580, 0xc424151100, ...)\\n\\tcloud/kubernetes/server/deploy.go:1781 +0xb5e\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc42cb8ecb0, 0x7f99789533e8, 0xc42f652f90, 0xc42ad31980, 0xc42ec82820, 0xc4371e1580, 0xc4320e7f20, 0x2c80e00, 0xc4294e6b60, 0x3, ...)\\n\\tcloud/kubernetes/server/server.go:1206 +0x3e5\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc42cb8ecb0, 0x7f99789533e8, 0xc42f652f90, 0xc42ad31980, 0xc42ec82820, 0xc4371e1580, 0xc4320e7f20, 0x2c80e00, 0xc4294e6b60, 0x0, ...)\\n\\tcloud/kubernetes/server/server.go:1077 +0x108\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc42cb8ecb0, 
0x7f99789533e8, 0xc42f652f90, 0xc42ad31980, 0xc400000002, 0xc42ec82820, 0xc4371e1580, 0xc4320e7f20, 0x2c80e00, 0xc4294e6b60, ...)\\n\\tcloud/kubernetes/server/server.go:951 +0x3d4\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc42cb8ecb0, 0x7f99789533e8, 0xc42f652f90, 0xc42ad31980, 0x2c80e00, 0xc4294e6b60, 0xc42ec82820, 0xc4371e1580, 0xc4320e7f20, 0x2, ...)\\n\\tcloud/kubernetes/server/server.go:1973 +0xca\\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc42ad318c0, 0xc4294e6c40, 0x2c80e00, 0xc4294e6b60, 0xc42e0fec04, 0xc, 0xc400000002, 0xc42e0eb830, 0xc42cb8ecb0, 0x7f99789533e8, ...)\\n\\tcloud/kubernetes/server/server.go:1965 +0x2fd\\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\\n\\tcloud/kubernetes/server/server.go:1967 +0xc44\\n'\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/61807208001/zones/us-central1-a/clusters/bootstrap-e2e/nodePools/default-pool'\n zone: u'us-central1-a'>] finished with error: cloud-kubernetes::UNKNOWN: Get https://130.211.227.212/api/v1/nodes/gke-bootstrap-e2e-default-pool-45590978-3en8: harpoon unreachable error UNREACHABLE_CONNECT_REFUSED\ngoroutine 1220178 [running]:\ngoogle3/cloud/kubernetes/common/errors.newStatus(0xc4000003e7, 0xc437617e60, 0x8c, 0x1, 0x10)\n\tcloud/kubernetes/common/errors.go:627 +0x22f\ngoogle3/cloud/kubernetes/common/errors.ToStatus(0x2c3bc60, 0xc4311038f0, 0xc422ee87b0)\n\tcloud/kubernetes/common/errors.go:681 +0x1ac\ngoogle3/cloud/kubernetes/common/errors.Combine(0xc42e17a2f0, 0x1, 0x1, 0x0, 0x1)\n\tcloud/kubernetes/common/errors.go:852 +0x12b\ngoogle3/cloud/kubernetes/common/call.InParallel(0x1, 0xc42da6ea00, 0xc42ad31980, 0xc428fe41e0, 0x3, 0x4, 0x2, 0x4)\n\tcloud/kubernetes/common/call.go:130 +0x608\ngoogle3/cloud/kubernetes/server/updater/updater.(*Updater).UpdateNPI(0xc429262660, 0x7f99789533e8, 0xc42f673fb0, 0xc42ec82820, 0xc4371e1700, 0xc436fef2d0, 0xc42ad31980, 0xc4305a0d00, 0xc5, 0xc424151100, ...)\n\tcloud/kubernetes/server/updater/updater.go:70 +0x693\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).updateNPI(0xc42ca37d40, 0x7f99789533e8, 0xc42f673fb0, 0xc42ad31980, 0x7f9978662560, 0xc436fef260, 0xc42ec82820, 0xc4371e1700, 0xc436fef2d0, 0xc4305a0d00, ...)\n\tcloud/kubernetes/server/deploy.go:1844 +0xdc\ngoogle3/cloud/kubernetes/server/deploy.(*Deployer).UpdateNodes(0xc42ca37d40, 0x7f99789533e8, 0xc42f673fb0, 0xc42ad31980, 0x7f9978662560, 0xc436fef260, 0xc42a8f5050, 0xc42ec82820, 0xc4371e1580, 0xc424151100, ...)\n\tcloud/kubernetes/server/deploy.go:1781 +0xb5e\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodesInternal(0xc42cb8ecb0, 0x7f99789533e8, 0xc42f652f90, 0xc42ad31980, 0xc42ec82820, 0xc4371e1580, 0xc4320e7f20, 0x2c80e00, 0xc4294e6b60, 0x3, ...)\n\tcloud/kubernetes/server/server.go:1206 +0x3e5\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpgradeNodes(0xc42cb8ecb0, 0x7f99789533e8, 0xc42f652f90, 0xc42ad31980, 0xc42ec82820, 0xc4371e1580, 0xc4320e7f20, 0x2c80e00, 0xc4294e6b60, 0x0, ...)\n\tcloud/kubernetes/server/server.go:1077 +0x108\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).clusterUpdate(0xc42cb8ecb0, 0x7f99789533e8, 0xc42f652f90, 0xc42ad31980, 0xc400000002, 0xc42ec82820, 0xc4371e1580, 0xc4320e7f20, 0x2c80e00, 0xc4294e6b60, ...)\n\tcloud/kubernetes/server/server.go:951 +0x3d4\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).updateClusterInternal(0xc42cb8ecb0, 0x7f99789533e8, 0xc42f652f90, 0xc42ad31980, 0x2c80e00, 
0xc4294e6b60, 0xc42ec82820, 0xc4371e1580, 0xc4320e7f20, 0x2, ...)\n\tcloud/kubernetes/server/server.go:1973 +0xca\ngoogle3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster.func1(0xc42ad318c0, 0xc4294e6c40, 0x2c80e00, 0xc4294e6b60, 0xc42e0fec04, 0xc, 0xc400000002, 0xc42e0eb830, 0xc42cb8ecb0, 0x7f99789533e8, ...)\n\tcloud/kubernetes/server/server.go:1965 +0x2fd\ncreated by google3/cloud/kubernetes/server/server.(*ClusterServer).UpdateCluster\n\tcloud/kubernetes/server/server.go:1967 +0xc44\n\n"
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:93

Issues about this test specifically: #38172

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745

Failed: kubectl version {e2e.go}

exit status 1

Issues about this test specifically: #34378

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/80/

Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:175
Expected error:
    <*errors.StatusError | 0xc421428980>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: ""},
            Status: "Failure",
            Message: "an error on the server (\"Error: 'ssh: rejected: connect failed (Connection timed out)'\\nTrying to reach: 'http://10.96.1.14:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'\") has prevented the request from succeeding (post services rs-ctrl)",
            Reason: "InternalError",
            Details: {
                Name: "rs-ctrl",
                Group: "",
                Kind: "services",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.1.14:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 503,
        },
    }
    an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.1.14:8080/ConsumeMem?durationSec=30&megabytes=0&requestSizeMegabytes=100'") has prevented the request from succeeding (post services rs-ctrl)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:227

Issues about this test specifically: #33730 #37417

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1098
Dec 19 12:43:07.915: expected un-ready endpoint for Service webserver within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1096

Issues about this test specifically: #26172

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Dec 19 14:08:44.364: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc421476ae0>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-19 11:57:58 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-19 11:58:30 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-19 11:57:58 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP:10.96.2.250 StartTime:2016-12-19 11:57:58 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc421d18850} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://c701c5014a0bfd61dc7845a461d2924f97f395a4fc5f2d31d97ff83c4511fa5c}]}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-19 11:57:58 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-19 11:58:30 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2016-12-19 11:57:58 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.2 PodIP:10.96.2.250 StartTime:2016-12-19 11:57:58 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc421d18850} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://c701c5014a0bfd61dc7845a461d2924f97f395a4fc5f2d31d97ff83c4511fa5c}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:152
Dec 19 13:50:47.818: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.3.233:8080/dial?request=hostName&protocol=http&host=10.99.244.58&port=80&tries=1'
retrieved map[netserver-0:{} netserver-1:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #33887
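
In the "Failed to find expected endpoints" entries here, the logged Command is the probe the test drives: it asks a netserver pod's /dial endpoint to contact the given host and port and report which hostnames answered, and the retrieved/expected maps are those hostname sets (in this entry netserver-2 is the backend that never answered). As a rough manual replay, assuming kubectl access and a pod with curl in the cluster (the pod name below is hypothetical; the IPs are copied from this run's log and will differ elsewhere):

    # Replay the test's probe by hand (pod name hypothetical, IPs from this run's log)
    kubectl exec curl-test-pod -- curl -q -s \
      'http://10.96.3.233:8080/dial?request=hostName&protocol=http&host=10.99.244.58&port=80&tries=1'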

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:59
Dec 19 14:46:24.142: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc4212aa260>: {
        s: "service verification failed for: 10.99.251.127\nexpected [service1-nw9m2 service1-v6wml service1-w84hx]\nreceived [service1-v6wml service1-w84hx]",
    }
    service verification failed for: 10.99.251.127
    expected [service1-nw9m2 service1-v6wml service1-w84hx]
    received [service1-v6wml service1-w84hx]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:428

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:304
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.72.159 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-wk3lj execpod-sourceip-gke-bootstrap-e2e-default-pool-21255773-88kj2m -- /bin/sh -c wget -T 30 -qO- 10.99.244.70:8080 | grep client_address] []  <nil>  wget: download timed out\n [] <nil> 0xc4220ea4e0 exit status 1 <nil> <nil> true [0xc420170150 0xc420170170 0xc420170198] [0xc420170150 0xc420170170 0xc420170198] [0xc420170160 0xc420170188] [0x970e80 0x970e80] 0xc422350240 <nil>}:\nCommand stdout:\n\nstderr:\nwget: download timed out\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.72.159 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-wk3lj execpod-sourceip-gke-bootstrap-e2e-default-pool-21255773-88kj2m -- /bin/sh -c wget -T 30 -qO- 10.99.244.70:8080 | grep client_address] []  <nil>  wget: download timed out
     [] <nil> 0xc4220ea4e0 exit status 1 <nil> <nil> true [0xc420170150 0xc420170170 0xc420170198] [0xc420170150 0xc420170170 0xc420170198] [0xc420170160 0xc420170188] [0x970e80 0x970e80] 0xc422350240 <nil>}:
    Command stdout:

    stderr:
    wget: download timed out

    error:
    exit status 1

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #31085 #34207 #37097
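
The timeout above comes from the wget the test runs inside its exec pod. The same probe can be replayed by hand; a sketch assuming kubectl access against this cluster, with the namespace, exec-pod name, and service IP copied verbatim from the log (all three are run-specific):

    # Re-run the test's source-IP probe from the same exec pod
    kubectl exec --namespace=e2e-tests-services-wk3lj \
      execpod-sourceip-gke-bootstrap-e2e-default-pool-21255773-88kj2m \
      -- /bin/sh -c 'wget -T 30 -qO- 10.99.244.70:8080 | grep client_address'

If this also times out, the service VIP 10.99.244.70 is unreachable from that node, which would point at kube-proxy programming on the node rather than at the test itself.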

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:88
Dec 19 13:30:21.564: timeout waiting 15m0s for pods size to be 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Dec 19 13:02:25.899: Frontend service did not start serving content in 600 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1580

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:70
Dec 19 11:19:56.218: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Dec 19 11:25:44.603: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.2.191:8080/dial?request=hostName&protocol=http&host=10.96.1.130&port=8080&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32375

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc4203acc70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Dec 19 13:36:19.776: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 104.198.240.14 30616
retrieved map[netserver-0:{} netserver-2:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #33285

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Dec 19 13:41:24.311: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.99.251.6:80/hostName
retrieved map[netserver-1:{} netserver-0:{}]
expected map[netserver-1:{} netserver-2:{} netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #32684 #36278 #37948

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Dec 19 14:15:15.329: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.96.1.16:8080/hostName
retrieved map[]
expected map[netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4203acc70>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc421b86790>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:423

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Dec 19 12:03:03.025: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.2.8:8080/dial?request=hostName&protocol=udp&host=10.99.243.109&port=90&tries=1'
retrieved map[netserver-2:{} netserver-0:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #34250

k8s-github-robot commented 7 years ago
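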

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/88/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422398280>: {
        s: "Namespace e2e-tests-services-59pj6 is active",
    }
    Namespace e2e-tests-services-59pj6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914
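
Every SchedulerPredicates failure in this run trips the same precondition at scheduler_predicates.go:78: the suite found namespace e2e-tests-services-59pj6 still active. That is the same namespace in which the "Services should work after restarting apiserver" failure below aborted while stopping its RC, which suggests a leaked Services namespace is blocking the [Serial] scheduler tests from starting cleanly. A quick triage sketch, assuming kubectl access (the namespace name is specific to this run):

    # List e2e namespaces that should have been cleaned up between tests
    kubectl get namespaces | grep e2e-tests
    # See what is keeping the leaked namespace alive
    kubectl get all --namespace=e2e-tests-services-59pj6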

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42262acf0>: {
        s: "Namespace e2e-tests-services-59pj6 is active",
    }
    Namespace e2e-tests-services-59pj6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421d1e390>: {
        s: "Namespace e2e-tests-services-59pj6 is active",
    }
    Namespace e2e-tests-services-59pj6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4225bd090>: {
        s: "Namespace e2e-tests-services-59pj6 is active",
    }
    Namespace e2e-tests-services-59pj6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420ea1930>: {
        s: "Namespace e2e-tests-services-59pj6 is active",
    }
    Namespace e2e-tests-services-59pj6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4222c8930>: {
        s: "Namespace e2e-tests-services-59pj6 is active",
    }
    Namespace e2e-tests-services-59pj6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42128bd40>: {
        s: "Namespace e2e-tests-services-59pj6 is active",
    }
    Namespace e2e-tests-services-59pj6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/expansion.go:131
Expected error:
    <*errors.errorString | 0xc42101aac0>: {
        s: "failed to get logs from var-expansion-5e3c661c-c827-11e6-8443-0242ac11000a for dapi-container: an error on the server (\"unknown\") has prevented the request from succeeding (get pods var-expansion-5e3c661c-c827-11e6-8443-0242ac11000a)",
    }
    failed to get logs from var-expansion-5e3c661c-c827-11e6-8443-0242ac11000a for dapi-container: an error on the server ("unknown") has prevented the request from succeeding (get pods var-expansion-5e3c661c-c827-11e6-8443-0242ac11000a)
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #28503

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e98df0>: {
        s: "Namespace e2e-tests-services-59pj6 is active",
    }
    Namespace e2e-tests-services-59pj6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214ef440>: {
        s: "Namespace e2e-tests-services-59pj6 is active",
    }
    Namespace e2e-tests-services-59pj6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421518280>: {
        s: "Namespace e2e-tests-services-59pj6 is active",
    }
    Namespace e2e-tests-services-59pj6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc421242770>: {
        s: "error while stopping RC: service2: Get https://35.184.48.55/api/v1/namespaces/e2e-tests-services-59pj6/replicationcontrollers/service2: read tcp 172.17.0.10:40769->35.184.48.55:443: read: connection reset by peer",
    }
    error while stopping RC: service2: Get https://35.184.48.55/api/v1/namespaces/e2e-tests-services-59pj6/replicationcontrollers/service2: read tcp 172.17.0.10:40769->35.184.48.55:443: read: connection reset by peer
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e74e00>: {
        s: "Namespace e2e-tests-services-59pj6 is active",
    }
    Namespace e2e-tests-services-59pj6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422952610>: {
        s: "Namespace e2e-tests-services-59pj6 is active",
    }
    Namespace e2e-tests-services-59pj6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42149f520>: {
        s: "Namespace e2e-tests-services-59pj6 is active",
    }
    Namespace e2e-tests-services-59pj6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421553cf0>: {
        s: "Namespace e2e-tests-services-59pj6 is active",
    }
    Namespace e2e-tests-services-59pj6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221a3780>: {
        s: "Namespace e2e-tests-services-59pj6 is active",
    }
    Namespace e2e-tests-services-59pj6 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/90/

Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:141
Expected error:
    <*errors.errorString | 0xc42038adb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #34064

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:123
Dec 22 17:37:14.942: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.99.250.101 90
retrieved map[netserver-2:{} netserver-1:{}]
expected map[netserver-1:{} netserver-2:{} netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #36271

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:485
Expected error:
    <*errors.errorString | 0xc42038adb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3602

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:358
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc42190c560>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:326

Issues about this test specifically: #37479

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Expected error:
    <*errors.errorString | 0xc42038adb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #32375

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:105
Expected error:
    <*errors.errorString | 0xc42038adb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #34317

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Expected error:
    <*errors.errorString | 0xc42038adb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #36178

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/privileged.go:65
Expected error:
    <*errors.errorString | 0xc42038adb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #29519 #32451

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Dec 22 17:40:24.741: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 104.197.29.123 32695
retrieved map[]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #33285

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:175
Expected error:
    <*errors.errorString | 0xc42038adb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #33730 #37417

Failed: [k8s.io] Services should release NodePorts on delete {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1040
Expected error:
    <*errors.errorString | 0xc42038adb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:3602

Issues about this test specifically: #37274

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:74
Expected error:
    <*errors.errorString | 0xc421a404d0>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:477

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:304
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.185.233 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-vjjmt execpod-sourceip-gke-bootstrap-e2e-default-pool-19166ee6-lcg6d6 -- /bin/sh -c wget -T 30 -qO- 10.99.249.185:8080 | grep client_address] []  <nil>  wget: download timed out\n [] <nil> 0xc421996540 exit status 1 <nil> <nil> true [0xc42064a068 0xc42064a0c8 0xc42064a0e8] [0xc42064a068 0xc42064a0c8 0xc42064a0e8] [0xc42064a0b8 0xc42064a0e0] [0x970e80 0x970e80] 0xc421360300 <nil>}:\nCommand stdout:\n\nstderr:\nwget: download timed out\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/platforms/linux/amd64/kubectl [kubectl --server=https://104.198.185.233 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-vjjmt execpod-sourceip-gke-bootstrap-e2e-default-pool-19166ee6-lcg6d6 -- /bin/sh -c wget -T 30 -qO- 10.99.249.185:8080 | grep client_address] []  <nil>  wget: download timed out
     [] <nil> 0xc421996540 exit status 1 <nil> <nil> true [0xc42064a068 0xc42064a0c8 0xc42064a0e8] [0xc42064a068 0xc42064a0c8 0xc42064a0e8] [0xc42064a0b8 0xc42064a0e0] [0x970e80 0x970e80] 0xc421360300 <nil>}:
    Command stdout:

    stderr:
    wget: download timed out

    error:
    exit status 1

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #31085 #34207 #37097

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/91/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216e7a10>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876
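
Most of the run-91 entries, this one included, share one root symptom: the precondition at scheduler_predicates.go:93 waits up to 5m0s for every kube-system pod to be Running and Ready, and the fluentd-cloud-logging pod on node gke-bootstrap-e2e-default-pool-51ee27de-wfj6 never gets there (Pending, ContainersNotReady). A triage sketch, assuming kubectl access; the pod and namespace names are taken from the log above:

    # Show kube-system pods and where they are scheduled
    kubectl get pods --namespace=kube-system -o wide
    # Inspect why the fluentd pod is stuck (events, container statuses)
    kubectl describe pod \
      fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 \
      --namespace=kube-system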

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a48da0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422016bc0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c6fb80>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:38:38 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:38:38 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42210b650>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421828990>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421fe78a0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc421f74e80>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc421b947c0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421345570>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:38:38 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:38:38 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c88170>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:38:38 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:38:38 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220a6610>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c09940>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:38:38 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:38:38 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Dec 23 02:53:12.626: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:93

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422058220>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e5b9b0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4219d1780>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421af6520>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421945120>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346
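
For reference, the predicate the two NodeSelector cases exercise is plain nodeSelector matching: the suite labels one node with a unique key/value, then schedules a pod that must land there (or, in the "not matching" case, must stay Pending). A minimal sketch of the pod-side field, with a hypothetical label rather than the generated one:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Hypothetical label; the real suite generates a unique key/value
        // per run and applies it to a single node first.
        spec := corev1.PodSpec{
            NodeSelector: map[string]string{"example.com/e2e-group": "group-a"},
        }
        fmt.Println(spec.NodeSelector)
    }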

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421828c40>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-51ee27de-wfj6 gke-bootstrap-e2e-default-pool-51ee27de-wfj6 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:36:19 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-22 23:37:10 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 02:46:49 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/92/

Multiple broken tests:

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 11:53:47.782: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215d1400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28010 #28427 #33997 #37952

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 14:11:04.566: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422318000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 13:26:58.924: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421682a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34658

Failed: [k8s.io] ResourceQuota should verify ResourceQuota with best effort scope. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 12:36:32.480: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e1f400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31635 #38387

Failed: [k8s.io] Deployment RollingUpdateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 12:56:17.810: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42232a000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31075 #36286 #38041

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 16:41:01.522: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422311400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 14:47:07.133: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42131aa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Expected error:
    <*errors.errorString | 0xc421b89130>: {
        s: "at least one node failed to be ready",
    }
    at least one node failed to be ready
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:397

Issues about this test specifically: #37373

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203d3fb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 12:39:48.150: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210b2000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26129 #32341

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 12:59:35.011: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42167e000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 13:55:27.294: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214cea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36109

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421f01920>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc421eff5c0>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-9c473aed-d94c boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-9c473aed-d94c boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552
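
The restart test decides a node has actually rebooted by waiting for its kernel boot ID, which the kubelet surfaces as node.Status.NodeInfo.BootID, to change; the timeout above means the node never reported a new one. A minimal sketch of where that value comes from, read on the node itself:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // The kernel assigns a fresh UUID here on every boot; the kubelet
        // copies it into node.Status.NodeInfo.BootID.
        data, err := os.ReadFile("/proc/sys/kernel/random/boot_id")
        if err != nil {
            panic(err)
        }
        fmt.Println("boot_id:", strings.TrimSpace(string(data)))
    }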

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a configMap. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 15:54:34.958: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421dc0000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34367

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421cc1560>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9c473aed-d94c gke-bootstrap-e2e-default-pool-9c473aed-d94c Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:34:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:42:53 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-9c473aed-d94c            gke-bootstrap-e2e-default-pool-9c473aed-d94c Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:34:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:54:46 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:42:53 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9c473aed-d94c gke-bootstrap-e2e-default-pool-9c473aed-d94c Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:34:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:42:53 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-9c473aed-d94c            gke-bootstrap-e2e-default-pool-9c473aed-d94c Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:34:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:54:46 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:42:53 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585
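
Setting aside the not-ready kube-system pods, the predicate itself pairs a node taint with a pod toleration. A minimal sketch of the pod-side half (key, value, and effect are hypothetical, not the suite's generated ones):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Matches a node tainted with:
        //   kubectl taint nodes <node> example-key=example-value:NoSchedule
        pod := corev1.PodSpec{
            Tolerations: []corev1.Toleration{{
                Key:      "example-key",
                Operator: corev1.TolerationOpEqual,
                Value:    "example-value",
                Effect:   corev1.TaintEffectNoSchedule,
            }},
        }
        fmt.Printf("%+v\n", pod.Tolerations[0])
    }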

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with mappings and Item mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 16:10:57.593: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c7d400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35790

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:36
Dec 23 12:25:47.786: Failed after retrying 0 times for cadvisor to be healthy on all nodes. Errors:
[an error on the server ("Error: 'ssh: rejected: connect failed (Connection refused)'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-9c473aed-d94c:10250/stats/?timeout=5m0s'") has prevented the request from succeeding]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:86

Issues about this test specifically: #32371
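
The cadvisor check just fetches each node's kubelet read endpoint through the apiserver proxy, so the rejected SSH tunnel above really means the node was unreachable. A rough stand-in that probes the kubelet directly (node name copied from the log; skipping TLS verification is for illustration only, and the endpoint normally also requires credentials):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            // Illustration only; a real check should verify the kubelet cert.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://gke-bootstrap-e2e-default-pool-9c473aed-d94c:10250/stats/")
        if err != nil {
            fmt.Println("kubelet unreachable:", err) // the failure mode seen above
            return
        }
        resp.Body.Close()
        fmt.Println("kubelet stats endpoint:", resp.Status)
    }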

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:240
Expected success, but got an error:
    <*errors.errorString | 0xc4203d3fb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node_problem_detector.go:232

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] V1Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 12:15:06.888: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b78a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29976 #30464 #30687

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 14:43:17.860: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211baa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 13:34:29.483: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4223fea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 11:57:08.557: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c80000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32584

Failed: [k8s.io] Downward API should provide pod IP as an env var [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 16:20:49.586: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211b4000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:343
Expected error:
    <*errors.errorString | 0xc4213f72b0>: {
        s: "timeout waiting 10m0s for cluster size to be 4",
    }
    timeout waiting 10m0s for cluster size to be 4
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:336

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 12:32:36.235: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42233d400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29066 #30592 #31065 #33171

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 13:37:57.199: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211baa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26955

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 13:03:35.654: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216f2000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 13:41:14.172: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420ff2000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42230f810>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9c473aed-d94c gke-bootstrap-e2e-default-pool-9c473aed-d94c Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:34:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:42:53 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-9c473aed-d94c            gke-bootstrap-e2e-default-pool-9c473aed-d94c Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:34:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:54:46 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:42:53 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9c473aed-d94c gke-bootstrap-e2e-default-pool-9c473aed-d94c Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:34:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:42:53 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-9c473aed-d94c            gke-bootstrap-e2e-default-pool-9c473aed-d94c Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:34:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:54:46 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:42:53 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] V1Job should scale a job down {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 13:51:47.051: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a44000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30216 #31031 #32086

Failed: [k8s.io] ReplicationController should surface a failure condition on a common issue like exceeded quota {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 16:24:02.826: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42181d400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37027

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 12:07:31.461: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a93400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27232

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 16:27:14.974: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211d3400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37423

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42233eab0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9c473aed-d94c gke-bootstrap-e2e-default-pool-9c473aed-d94c Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:34:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:42:53 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-9c473aed-d94c            gke-bootstrap-e2e-default-pool-9c473aed-d94c Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:34:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:54:46 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:42:53 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-9c473aed-d94c gke-bootstrap-e2e-default-pool-9c473aed-d94c Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:34:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:43:37 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:42:53 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-9c473aed-d94c            gke-bootstrap-e2e-default-pool-9c473aed-d94c Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:34:01 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:54:46 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-23 08:42:53 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 13:44:31.547: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211d2a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 13:58:40.006: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4223ff400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] Downward API volume should set DefaultMode on files [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 11:49:38.689: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4223ff400)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36300

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 16:17:22.410: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b8aa00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28774 #31429

Failed: [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 13:47:58.792: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42232a000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29994

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 12:04:10.236: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42119ea00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35297

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 12:00:59.969: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c7ca00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Expected error:
    <*errors.errorString | 0xc4203d3fb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34250

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 13:30:11.122: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211b4000)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 14:36:39.677: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420be8a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32023

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 16:14:09.732: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421086a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29052

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:141
Expected error:
    <*errors.errorString | 0xc4203d3fb0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34064

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Dec 23 14:40:02.097: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a92a00)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30981

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/98/

Multiple broken tests:

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:81
Dec 25 15:23:21.821: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:163

Issues about this test specifically: #30981

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Dec 25 14:25:44.852: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.4.19:8080/dial?request=hostName&protocol=udp&host=10.99.255.149&port=90&tries=1'
retrieved map[]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #34250
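
The granular networking checks work by asking one webserver pod to dial the others and report which hostnames answered; retrieved map[] means no endpoint answered over UDP. A rough equivalent of the probe above in plain net/http, with the addresses copied from the log:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "net/url"
    )

    func main() {
        probe := url.URL{
            Scheme: "http",
            Host:   "10.96.4.19:8080", // the dialing webserver pod
            Path:   "/dial",
            RawQuery: url.Values{
                "request":  {"hostName"},
                "protocol": {"udp"},
                "host":     {"10.99.255.149"}, // the service under test
                "port":     {"90"},
                "tries":    {"1"},
            }.Encode(),
        }
        resp, err := http.Get(probe.String())
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // An empty map here reproduces the failure: the prober reached no
        // backend over UDP within its try budget.
        fmt.Println(string(body))
    }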

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Dec 25 15:16:49.489: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.96.4.54:8080/hostName
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:272
0 (0; 2m7.318909738s): path /api/v1/namespaces/e2e-tests-proxy-8hw3m/pods/https:proxy-service-6gl1g-5f73z:443/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'https://10.96.4.72:443/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'https://10.96.4.72:443/' }],RetryAfterSeconds:0,} Code:503}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:270

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc4203fb8e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584
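
The ExternalName case creates a Service of type ExternalName and expects cluster DNS to answer the service name with a CNAME; the timeout above means that record never appeared. A minimal stand-in for the lookup, run from inside a pod (the FQDN is hypothetical):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Hypothetical service name in a hypothetical test namespace.
        cname, err := net.LookupCNAME("externalname-service.e2e-tests-dns.svc.cluster.local")
        if err != nil {
            panic(err)
        }
        fmt.Println("CNAME:", cname)
    }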

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Dec 25 15:26:30.991: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://104.154.98.232:32322/hostName
retrieved map[]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #32684 #36278 #37948

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Dec 25 17:36:29.091: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.4.102:8080/dial?request=hostName&protocol=http&host=10.96.4.101&port=8080&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32375

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Dec 25 15:37:03.207: Cannot add new entry in 180 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1585

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Dec 25 16:01:22.308: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc421fc4130>: {
        s: "service verification failed for: 10.99.254.241\nexpected [service1-45nlb service1-c3t9h service1-hbpsc]\nreceived [service1-c3t9h wget: download timed out]",
    }
    service verification failed for: 10.99.254.241
    expected [service1-45nlb service1-c3t9h service1-hbpsc]
    received [service1-c3t9h wget: download timed out]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:428

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
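
The verification step repeatedly fetches the service's cluster IP from a test pod and expects to collect every backend's hostname; here only service1-c3t9h ever answered before wget timed out. A rough sketch of that loop (cluster IP and expected names from the log; the port and path are assumptions):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        seen := map[string]bool{}
        client := &http.Client{Timeout: 2 * time.Second}
        for i := 0; i < 30; i++ {
            resp, err := client.Get("http://10.99.254.241/hostname") // path assumed
            if err != nil {
                continue // mirrors the wget timeouts in the failure
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            seen[string(body)] = true
        }
        // Expect all of: service1-45nlb, service1-c3t9h, service1-hbpsc
        fmt.Println(seen)
    }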

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Dec 25 15:11:22.559: Number of replicas has changed: expected 3, got 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:292

Issues about this test specifically: #28657 #30519 #33878
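
Both HPA cases drive a CPU-based autoscaler and assert that the replica count converges and then holds steady; the second failure is a flap from 3 back to 2 during the stability window. A minimal sketch of the kind of object under test, using the autoscaling/v1 types (names and thresholds are hypothetical):

    package main

    import (
        "fmt"

        autoscalingv1 "k8s.io/api/autoscaling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        min := int32(1)
        target := int32(50)
        hpa := autoscalingv1.HorizontalPodAutoscaler{
            ObjectMeta: metav1.ObjectMeta{Name: "example-hpa"},
            Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
                ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
                    Kind:       "ReplicationController",
                    Name:       "example-rc",
                    APIVersion: "v1",
                },
                MinReplicas:                    &min,
                MaxReplicas:                    5,
                TargetCPUUtilizationPercentage: &target,
            },
        }
        fmt.Printf("%+v\n", hpa.Spec)
    }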

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/102/

Multiple broken tests:

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203cf060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:52
Expected error:
    <*errors.errorString | 0xc4203cf060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #33631 #33995 #34970

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:808
Dec 27 01:01:58.452: Could not reach HTTP service through 104.198.246.169:30351 after 5m0s: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2443

Issues about this test specifically: #26134

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Expected error:
    <*errors.errorString | 0xc4203cf060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #33285

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Dec 27 00:45:54.753: Cannot add new entry in 180 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1585

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:105
Expected error:
    <*errors.errorString | 0xc4203cf060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #34317

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:152
Expected error:
    <*errors.errorString | 0xc4203cf060>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:423

Issues about this test specifically: #33887

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Dec 27 00:52:48.316: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.6.118:8080/dial?request=hostName&protocol=http&host=10.96.5.63&port=8080&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32375

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/107/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212ce0b0>: {
        s: "Namespace e2e-tests-services-vspcn is active",
    }
    Namespace e2e-tests-services-vspcn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c09720>: {
        s: "Namespace e2e-tests-services-vspcn is active",
    }
    Namespace e2e-tests-services-vspcn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421034d20>: {
        s: "Namespace e2e-tests-services-vspcn is active",
    }
    Namespace e2e-tests-services-vspcn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420e7a6a0>: {
        s: "Namespace e2e-tests-services-vspcn is active",
    }
    Namespace e2e-tests-services-vspcn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*url.Error | 0xc4220335c0>: {
        Op: "Get",
        URL: "https://35.184.3.99/api/v1/namespaces/e2e-tests-services-vspcn/services/service2",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 3, 99],
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://35.184.3.99/api/v1/namespaces/e2e-tests-services-vspcn/services/service2: dial tcp 35.184.3.99:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:444

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42254f980>: {
        s: "Namespace e2e-tests-services-vspcn is active",
    }
    Namespace e2e-tests-services-vspcn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420e7b1c0>: {
        s: "Namespace e2e-tests-services-vspcn is active",
    }
    Namespace e2e-tests-services-vspcn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28091 #38346

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/108/

Multiple broken tests:

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Dec 28 15:49:31.334: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2562

Issues about this test specifically: #38172

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc42223fd80>: {
        s: "error waiting for deployment \"test-rollover-deployment\" status to match expectation: timed out waiting for the condition",
    }
    error waiting for deployment "test-rollover-deployment" status to match expectation: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:598

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275

Failed: [k8s.io] Kubelet [Serial] [Slow] [k8s.io] regular resource usage tracking resource tracking for 100 pods per node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:270
Dec 28 22:42:12.531: CPU usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-871a41e1-v40m:
 container "kubelet": expected 95th% usage < 0.500; got 0.511
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubelet_perf.go:188

Issues about this test specifically: #26982 #32214 #33994 #34035 #35399 #38209
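
The kubelet resource-tracking failure is a genuine (if marginal) limit breach rather than a timeout: the 95th-percentile CPU usage came in at 0.511 cores against a 0.500 allowance. The check has roughly this shape (the helper and sample numbers below are illustrative, not the framework's actual code):

```go
package main

import (
	"fmt"
	"sort"
)

// percentile returns the p-quantile (0 < p <= 1) of samples using the
// nearest-rank method; an illustration, not the e2e framework's code.
func percentile(samples []float64, p float64) float64 {
	sorted := append([]float64(nil), samples...)
	sort.Float64s(sorted)
	idx := int(p*float64(len(sorted))) - 1
	if idx < 0 {
		idx = 0
	}
	return sorted[idx]
}

func main() {
	// Hypothetical per-interval CPU samples (cores) for the kubelet container.
	usage := []float64{0.31, 0.42, 0.48, 0.49, 0.50, 0.51, 0.52}
	const limit = 0.500
	if got := percentile(usage, 0.95); got >= limit {
		fmt.Printf("container %q: expected 95th%% usage < %.3f; got %.3f\n",
			"kubelet", limit, got)
	}
}
```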

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/115/

Multiple broken tests:

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:56
Expected error:
    <*errors.errorString | 0xc422171290>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #32122 #38040

Failed: [k8s.io] Downward API volume should provide container's memory limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:171
Expected error:
    <*errors.errorString | 0xc423819a60>: {
        s: "expected pod \"downwardapi-volume-34013f64-cf5c-11e6-b8c1-0242ac110002\" success: gave up waiting for pod 'downwardapi-volume-34013f64-cf5c-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-34013f64-cf5c-11e6-b8c1-0242ac110002" success: gave up waiting for pod 'downwardapi-volume-34013f64-cf5c-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] EmptyDir volumes should support (root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:97
Expected error:
    <*errors.errorString | 0xc422984c10>: {
        s: "expected pod \"pod-7808a15f-cf65-11e6-b8c1-0242ac110002\" success: gave up waiting for pod 'pod-7808a15f-cf65-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-7808a15f-cf65-11e6-b8c1-0242ac110002" success: gave up waiting for pod 'pod-7808a15f-cf65-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:113
Expected error:
    <*errors.errorString | 0xc421b53730>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:65
Expected error:
    <*errors.errorString | 0xc421af6950>: {
        s: "expected pod \"pod-195e64c9-cf62-11e6-b8c1-0242ac110002\" success: gave up waiting for pod 'pod-195e64c9-cf62-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-195e64c9-cf62-11e6-b8c1-0242ac110002" success: gave up waiting for pod 'pod-195e64c9-cf62-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #33987

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:310
Dec 31 07:04:16.000: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:82
Expected error:
    <*errors.errorString | 0xc42209df70>: {
        s: "expected pod \"pod-host-path-test\" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-host-path-test" success: gave up waiting for pod 'pod-host-path-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] EmptyDir volumes should support (root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:101
Expected error:
    <*errors.errorString | 0xc422171660>: {
        s: "expected pod \"pod-635abd9e-cf64-11e6-b8c1-0242ac110002\" success: gave up waiting for pod 'pod-635abd9e-cf64-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-635abd9e-cf64-11e6-b8c1-0242ac110002" success: gave up waiting for pod 'pod-635abd9e-cf64-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37439

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:150
Expected error:
    <*errors.errorString | 0xc421b52990>: {
        s: "expected pod \"pod-secrets-54d054b2-cf57-11e6-b8c1-0242ac110002\" success: gave up waiting for pod 'pod-secrets-54d054b2-cf57-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-54d054b2-cf57-11e6-b8c1-0242ac110002" success: gave up waiting for pod 'pod-secrets-54d054b2-cf57-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:153
Expected error:
    <*errors.errorString | 0xc4203fbd50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
Expected error:
    <*errors.errorString | 0xc422984650>: {
        s: "expected pod \"pod-secrets-faaa8525-cf66-11e6-b8c1-0242ac110002\" success: gave up waiting for pod 'pod-secrets-faaa8525-cf66-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-faaa8525-cf66-11e6-b8c1-0242ac110002" success: gave up waiting for pod 'pod-secrets-faaa8525-cf66-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29221

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc4234dfe60>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:3, Replicas:23, UpdatedReplicas:15, AvailableReplicas:18, UnavailableReplicas:5, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63618791677, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63618791677, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:3, Replicas:23, UpdatedReplicas:15, AvailableReplicas:18, UnavailableReplicas:5, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63618791677, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63618791677, nsec:0, loc:(*time.Location)(0x3cea0e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1120

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458
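
Decoding the status dump: the rollout was caught mid-convergence, 23 replicas total but only 15 on the new template and 5 unavailable, and it never settled before the wait expired. The convergence condition the wait drives toward looks roughly like this (a sketch against apps/v1; the dump itself uses the old extensions API, which has the same fields):

```go
package e2esketch

import appsv1 "k8s.io/api/apps/v1"

// deploymentConverged is an illustrative convergence check in the spirit of
// the e2e wait: every replica updated and available, none unavailable.
func deploymentConverged(desired int32, s appsv1.DeploymentStatus) bool {
	return s.Replicas == desired &&
		s.UpdatedReplicas == desired &&
		s.AvailableReplicas == desired &&
		s.UnavailableReplicas == 0
}
```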

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:51
Expected error:
    <*errors.errorString | 0xc421bc5bd0>: {
        s: "expected pod \"pod-secrets-94882011-cf4d-11e6-b8c1-0242ac110002\" success: gave up waiting for pod 'pod-secrets-94882011-cf4d-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-94882011-cf4d-11e6-b8c1-0242ac110002" success: gave up waiting for pod 'pod-secrets-94882011-cf4d-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:189
Expected error:
    <*errors.errorString | 0xc421b9d5f0>: {
        s: "expected pod \"downwardapi-volume-e38df2d6-cf68-11e6-b8c1-0242ac110002\" success: gave up waiting for pod 'downwardapi-volume-e38df2d6-cf68-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-e38df2d6-cf68-11e6-b8c1-0242ac110002" success: gave up waiting for pod 'downwardapi-volume-e38df2d6-cf68-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:51
Expected error:
    <*errors.errorString | 0xc42294b5f0>: {
        s: "expected pod \"pod-configmaps-cf477964-cf50-11e6-b8c1-0242ac110002\" success: gave up waiting for pod 'pod-configmaps-cf477964-cf50-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-cf477964-cf50-11e6-b8c1-0242ac110002" success: gave up waiting for pod 'pod-configmaps-cf477964-cf50-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #27245

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:105
Expected error:
    <*errors.errorString | 0xc422984430>: {
        s: "expected pod \"pod-989af429-cf60-11e6-b8c1-0242ac110002\" success: gave up waiting for pod 'pod-989af429-cf60-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-989af429-cf60-11e6-b8c1-0242ac110002" success: gave up waiting for pod 'pod-989af429-cf60-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #26780

Failed: [k8s.io] ConfigMap should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:37
Expected error:
    <*errors.errorString | 0xc422708a60>: {
        s: "expected pod \"pod-configmaps-05e18289-cf48-11e6-b8c1-0242ac110002\" success: gave up waiting for pod 'pod-configmaps-05e18289-cf48-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-05e18289-cf48-11e6-b8c1-0242ac110002" success: gave up waiting for pod 'pod-configmaps-05e18289-cf48-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29052

Failed: [k8s.io] ConfigMap should be consumable in multiple volumes in the same pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:269
Expected error:
    <*errors.errorString | 0xc422708400>: {
        s: "expected pod \"pod-configmaps-d6bab1ab-cf4c-11e6-b8c1-0242ac110002\" success: gave up waiting for pod 'pod-configmaps-d6bab1ab-cf4c-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-configmaps-d6bab1ab-cf4c-11e6-b8c1-0242ac110002" success: gave up waiting for pod 'pod-configmaps-d6bab1ab-cf4c-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37515

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:40
Expected error:
    <*errors.errorString | 0xc42298ce90>: {
        s: "expected pod \"pod-secrets-95477bf1-cf54-11e6-b8c1-0242ac110002\" success: gave up waiting for pod 'pod-secrets-95477bf1-cf54-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-secrets-95477bf1-cf54-11e6-b8c1-0242ac110002" success: gave up waiting for pod 'pod-secrets-95477bf1-cf54-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #35256

Failed: [k8s.io] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:203
Expected error:
    <*errors.errorString | 0xc421ceca70>: {
        s: "expected pod \"downwardapi-volume-3899fb13-cf4f-11e6-b8c1-0242ac110002\" success: gave up waiting for pod 'downwardapi-volume-3899fb13-cf4f-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-3899fb13-cf4f-11e6-b8c1-0242ac110002" success: gave up waiting for pod 'downwardapi-volume-3899fb13-cf4f-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37531

Failed: [k8s.io] Downward API volume should set mode on item file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:68
Expected error:
    <*errors.errorString | 0xc421af6070>: {
        s: "expected pod \"downwardapi-volume-949e16a4-cf5d-11e6-b8c1-0242ac110002\" success: gave up waiting for pod 'downwardapi-volume-949e16a4-cf5d-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "downwardapi-volume-949e16a4-cf5d-11e6-b8c1-0242ac110002" success: gave up waiting for pod 'downwardapi-volume-949e16a4-cf5d-11e6-b8c1-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37423
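
Nearly everything in this run shares one symptom: the test pod never reached a terminal phase within the framework's 5m0s window (the assertion at framework/util.go:2167), which points at a node or kubelet problem rather than at the individual volume/secret/configmap tests. The wait itself reduces to polling the pod phase; a minimal sketch against a current client-go, with a hypothetical helper name:

```go
package e2esketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodTerminal is a hypothetical stand-in for the framework's
// "success or failure" wait: poll the pod phase until it is terminal
// or the timeout elapses (the e2e default seen above is 5m0s).
func waitForPodTerminal(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case v1.PodSucceeded:
			return true, nil
		case v1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		default:
			return false, nil // still Pending/Running; keep polling
		}
	})
}
```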

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/122/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421705c50>: {
        s: "Namespace e2e-tests-services-wt9fv is active",
    }
    Namespace e2e-tests-services-wt9fv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420f89520>: {
        s: "Namespace e2e-tests-services-wt9fv is active",
    }
    Namespace e2e-tests-services-wt9fv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216ba770>: {
        s: "Namespace e2e-tests-services-wt9fv is active",
    }
    Namespace e2e-tests-services-wt9fv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42176d9a0>: {
        s: "Namespace e2e-tests-services-wt9fv is active",
    }
    Namespace e2e-tests-services-wt9fv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421228320>: {
        s: "Namespace e2e-tests-services-wt9fv is active",
    }
    Namespace e2e-tests-services-wt9fv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216b8890>: {
        s: "Namespace e2e-tests-services-wt9fv is active",
    }
    Namespace e2e-tests-services-wt9fv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421133730>: {
        s: "Namespace e2e-tests-services-wt9fv is active",
    }
    Namespace e2e-tests-services-wt9fv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ecf820>: {
        s: "Namespace e2e-tests-services-wt9fv is active",
    }
    Namespace e2e-tests-services-wt9fv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42134ed40>: {
        s: "Namespace e2e-tests-services-wt9fv is active",
    }
    Namespace e2e-tests-services-wt9fv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc420747130>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 54, 75],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.54.75:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4214c3190>: {
        s: "Namespace e2e-tests-services-wt9fv is active",
    }
    Namespace e2e-tests-services-wt9fv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421f229e0>: {
        s: "Namespace e2e-tests-services-wt9fv is active",
    }
    Namespace e2e-tests-services-wt9fv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4208baf10>: {
        s: "Namespace e2e-tests-services-wt9fv is active",
    }
    Namespace e2e-tests-services-wt9fv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420d13990>: {
        s: "Namespace e2e-tests-services-wt9fv is active",
    }
    Namespace e2e-tests-services-wt9fv is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/124/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42193bd10>: {
        s: "Namespace e2e-tests-services-w377v is active",
    }
    Namespace e2e-tests-services-w377v is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421823da0>: {
        s: "Namespace e2e-tests-services-w377v is active",
    }
    Namespace e2e-tests-services-w377v is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c52f40>: {
        s: "Namespace e2e-tests-services-w377v is active",
    }
    Namespace e2e-tests-services-w377v is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212b5bb0>: {
        s: "Namespace e2e-tests-services-w377v is active",
    }
    Namespace e2e-tests-services-w377v is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421f5cec0>: {
        s: "Namespace e2e-tests-services-w377v is active",
    }
    Namespace e2e-tests-services-w377v is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422c48fa0>: {
        s: "Namespace e2e-tests-services-w377v is active",
    }
    Namespace e2e-tests-services-w377v is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc421927d70>: {
        s: "error while stopping RC: service1: Get https://104.198.32.26/api/v1/namespaces/e2e-tests-services-w377v/replicationcontrollers/service1: read tcp 172.17.0.3:50100->104.198.32.26:443: read: connection reset by peer",
    }
    error while stopping RC: service1: Get https://104.198.32.26/api/v1/namespaces/e2e-tests-services-w377v/replicationcontrollers/service1: read tcp 172.17.0.3:50100->104.198.32.26:443: read: connection reset by peer
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:417

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
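
Here the Services test died on a transient "connection reset by peer" while the apiserver it had just restarted was still coming back. A caller that tolerates the restart window would poll through such errors instead of failing on the first one; a minimal sketch (helper name hypothetical, intervals arbitrary):

```go
package e2esketch

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// getServiceThroughRestart retries a Service GET until the restarted
// apiserver answers again, swallowing transient dial/reset errors.
func getServiceThroughRestart(c kubernetes.Interface, ns, name string) (*v1.Service, error) {
	var svc *v1.Service
	err := wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		s, err := c.CoreV1().Services(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// "connection refused" / "connection reset by peer" are expected
			// while the apiserver comes back; keep polling until timeout.
			return false, nil
		}
		svc = s
		return true, nil
	})
	return svc, err
}
```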

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421be1bc0>: {
        s: "Namespace e2e-tests-services-w377v is active",
    }
    Namespace e2e-tests-services-w377v is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a41d00>: {
        s: "Namespace e2e-tests-services-w377v is active",
    }
    Namespace e2e-tests-services-w377v is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42173ec40>: {
        s: "Namespace e2e-tests-services-w377v is active",
    }
    Namespace e2e-tests-services-w377v is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4217b4630>: {
        s: "Namespace e2e-tests-services-w377v is active",
    }
    Namespace e2e-tests-services-w377v is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220b7250>: {
        s: "Namespace e2e-tests-services-w377v is active",
    }
    Namespace e2e-tests-services-w377v is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c53540>: {
        s: "Namespace e2e-tests-services-w377v is active",
    }
    Namespace e2e-tests-services-w377v is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220b6620>: {
        s: "Namespace e2e-tests-services-w377v is active",
    }
    Namespace e2e-tests-services-w377v is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/133/

Multiple broken tests:

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 00:27:37.609: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215378f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31938
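
Most of run 133 fails in the AfterEach assertion at framework.go:438, not in the test bodies: one node (the single `*api.Node` in each dump) never returned to Ready, so every test that overlapped the outage gets flagged. The readiness check reduces to scanning node conditions; a sketch:

```go
package e2esketch

import v1 "k8s.io/api/core/v1"

// notReadyNodes mirrors the "All nodes should be ready after test"
// assertion: collect every node whose Ready condition is not True.
func notReadyNodes(nodes []v1.Node) []string {
	var out []string
	for _, n := range nodes {
		ready := false
		for _, cond := range n.Status.Conditions {
			if cond.Type == v1.NodeReady && cond.Status == v1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			out = append(out, n.Name)
		}
	}
	return out
}
```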

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 00:11:28.444: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42201f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37373

Failed: [k8s.io] HostPath should support subPath {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 00:57:37.286: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225f38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 01:27:02.918: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4201218f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34226

Failed: [k8s.io] Secrets should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 00:48:02.173: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4226ed8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35256

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 00:30:59.909: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4227144f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32023

Failed: [k8s.io] InitContainer should not start app containers if init containers fail on a RestartAlways pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 00:34:52.179: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42290cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31408

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 01:30:13.288: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216044f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
Expected error:
    <*errors.errorString | 0xc4203accf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34104

Failed: [k8s.io] Downward API volume should provide podname only [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 00:51:09.390: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4220038f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31836

Failed: [k8s.io] PrivilegedPod should test privileged pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 02:15:05.858: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4226264f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29519 #32451

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:105
Expected error:
    <*errors.errorString | 0xc4203accf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34317

Failed: list nodes {e2e.go}

exit status 1

Issues about this test specifically: #38667

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Expected error:
    <*errors.errorString | 0xc4203accf0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34250

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 02:08:24.995: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42201f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37479

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 01:36:36.424: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e418f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30352 #38166

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 00:38:23.441: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215898f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32053 #32758

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 02:54:47.219: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c3cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31936

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 01:23:50.711: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217f38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38254

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 01:19:10.159: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f1b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28003

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 01:39:46.705: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225f38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31575 #32756

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 03:39:45.700: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b358f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36649

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 01:00:50.772: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421f944f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26728 #28266 #30340 #32405

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 01:15:40.882: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225678f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] EmptyDir volumes volume on tmpfs should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 03:01:26.670: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213564f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33987

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 02:34:56.656: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4226ecef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 02:51:34.377: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42256b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28503

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422641180>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-66dcc62c-tjb9 gke-bootstrap-e2e-default-pool-66dcc62c-tjb9 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:36 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:36 -0800 PST  }]\nkube-dns-4101612645-r3vtj                                          gke-bootstrap-e2e-default-pool-66dcc62c-tjb9 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:39:54 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:42 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:39:54 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-66dcc62c-tjb9            gke-bootstrap-e2e-default-pool-66dcc62c-tjb9 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 17:52:29 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-05 18:07:54 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:36 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-66dcc62c-tjb9 gke-bootstrap-e2e-default-pool-66dcc62c-tjb9 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:36 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:36 -0800 PST  }]
    kube-dns-4101612645-r3vtj                                          gke-bootstrap-e2e-default-pool-66dcc62c-tjb9 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:39:54 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:42 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:39:54 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-66dcc62c-tjb9            gke-bootstrap-e2e-default-pool-66dcc62c-tjb9 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 17:52:29 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-05 18:07:54 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:36 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 02:28:05.054: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c2d8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26134

Failed: [k8s.io] DisruptionController evictions: enough pods, absolute => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 02:57:59.517: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4225deef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32753 #34676

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 00:21:11.310: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42182b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37529

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42141af60>: {
        s: "3 / 12 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-66dcc62c-tjb9 gke-bootstrap-e2e-default-pool-66dcc62c-tjb9 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:36 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:36 -0800 PST  }]\nkube-dns-4101612645-r3vtj                                          gke-bootstrap-e2e-default-pool-66dcc62c-tjb9 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:39:54 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:42 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:39:54 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-66dcc62c-tjb9            gke-bootstrap-e2e-default-pool-66dcc62c-tjb9 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 17:52:29 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-05 18:07:54 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:36 -0800 PST  }]\n",
    }
    3 / 12 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-66dcc62c-tjb9 gke-bootstrap-e2e-default-pool-66dcc62c-tjb9 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:36 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:36 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:36 -0800 PST  }]
    kube-dns-4101612645-r3vtj                                          gke-bootstrap-e2e-default-pool-66dcc62c-tjb9 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:39:54 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:42 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 19:39:54 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-66dcc62c-tjb9            gke-bootstrap-e2e-default-pool-66dcc62c-tjb9 Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 17:52:29 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-05 18:07:54 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-05 23:52:36 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 00:41:39.706: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215378f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 00:44:49.967: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216d6ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28420 #36122

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 03:27:18.968: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422002ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32087

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc421de94f0>: {
        s: "at least one node failed to be ready",
    }
    at least one node failed to be ready
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:80

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 00:17:59.040: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a4d8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38516

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 03:42:57.910: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4218fe4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29467

Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 03:48:31.233: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ab2ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37314

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 02:18:50.030: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a4d8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28337

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 02:31:32.444: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b038f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 00:54:23.011: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc422830ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 00:14:40.796: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ff38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 01:11:13.658: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214298f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26324 #27715 #28845

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 02:11:52.741: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4227ef8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan  6 01:04:01.113: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217418f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27503
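
The remaining failures in this batch (framework.go:142 / framework.go:438) are not the tests themselves but the framework's post-test node check: after every spec it asserts that all nodes are Ready, so a single node that goes NotReady fails each test that finishes while it is down. A hedged sketch of that kind of check, assuming a recent client-go; `notReadyNodes` is an illustrative name:

```go
package e2esketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// notReadyNodes returns the names of nodes whose Ready condition is not
// True, mirroring the "All nodes should be ready after test" assertion.
func notReadyNodes(ctx context.Context, cs kubernetes.Interface) ([]string, error) {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var bad []string
	for _, n := range nodes.Items {
		ready := false
		for _, c := range n.Status.Conditions {
			if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
				ready = true
				break
			}
		}
		if !ready {
			bad = append(bad, n.Name)
		}
	}
	return bad, nil
}
```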

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/134/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42166e970>: {
        s: "Namespace e2e-tests-services-rc03g is active",
    }
    Namespace e2e-tests-services-rc03g is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883
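
As in the earlier runs, the "Namespace ... is active" errors are one shared precondition failing, not a bug in each listed test: the serial scheduler cases first wait for the namespaces of previous tests to finish deleting, and one leaked e2e-tests-services namespace blocks all of them at scheduler_predicates.go:78. A minimal sketch of such a wait, assuming a recent client-go (`waitForE2ENamespacesGone` is an illustrative name; the real framework also exempts the namespace it owns):

```go
package e2esketch

import (
	"context"
	"fmt"
	"strings"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForE2ENamespacesGone polls until no e2e-tests-* namespace remains,
// returning an error that names the first offender on timeout -- the same
// shape as "Namespace e2e-tests-services-rc03g is active" above.
func waitForE2ENamespacesGone(cs kubernetes.Interface, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		nss, err := cs.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return err
		}
		active := ""
		for _, ns := range nss.Items {
			if strings.HasPrefix(ns.Name, "e2e-tests-") {
				active = ns.Name
				break
			}
		}
		if active == "" {
			return nil // all test namespaces are gone
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("Namespace %s is active", active)
		}
		time.Sleep(5 * time.Second)
	}
}
```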

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42300bf30>: {
        s: "Namespace e2e-tests-services-rc03g is active",
    }
    Namespace e2e-tests-services-rc03g is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221ac530>: {
        s: "Namespace e2e-tests-services-rc03g is active",
    }
    Namespace e2e-tests-services-rc03g is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc422359950>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 58, 18],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.58.18:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
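
A connection-refused dial is expected for a window while the apiserver restarts; the test fails only because the endpoint does not come back within its retry budget. A small standard-library probe of the kind one might use to measure that window (the address is the master endpoint from the log; the timings are placeholders):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "35.184.58.18:443" // master endpoint from the dump; substitute your own
	deadline := time.Now().Add(2 * time.Minute)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver is accepting connections again")
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("still unreachable after 2m: %v\n", err) // e.g. getsockopt: connection refused
			return
		}
		time.Sleep(time.Second)
	}
}
```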

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42229ac60>: {
        s: "Namespace e2e-tests-services-rc03g is active",
    }
    Namespace e2e-tests-services-rc03g is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422c58750>: {
        s: "Namespace e2e-tests-services-rc03g is active",
    }
    Namespace e2e-tests-services-rc03g is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422333c70>: {
        s: "Namespace e2e-tests-services-rc03g is active",
    }
    Namespace e2e-tests-services-rc03g is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422ff8430>: {
        s: "Namespace e2e-tests-services-rc03g is active",
    }
    Namespace e2e-tests-services-rc03g is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421bfc580>: {
        s: "Namespace e2e-tests-services-rc03g is active",
    }
    Namespace e2e-tests-services-rc03g is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421bb0f80>: {
        s: "Namespace e2e-tests-services-rc03g is active",
    }
    Namespace e2e-tests-services-rc03g is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4219e8a70>: {
        s: "Namespace e2e-tests-services-rc03g is active",
    }
    Namespace e2e-tests-services-rc03g is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421534cd0>: {
        s: "Namespace e2e-tests-services-rc03g is active",
    }
    Namespace e2e-tests-services-rc03g is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/140/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e37ac0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340
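
In this run the gate is the same, but the offender has narrowed to a single fluentd-cloud-logging pod stuck Pending with ContainersNotReady, which holds up every [Serial] case. The "containers with unready status: [...]" text comes from the pod's container statuses; a self-contained sketch of recovering it, assuming the k8s.io/api types (the sample pod literal is illustrative):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// unreadyContainers lists the containers behind a ContainersNotReady
// condition, matching the reason string in the dump above.
func unreadyContainers(pod *v1.Pod) []string {
	var names []string
	for _, cs := range pod.Status.ContainerStatuses {
		if !cs.Ready {
			names = append(names, cs.Name)
		}
	}
	return names
}

func main() {
	pod := &v1.Pod{Status: v1.PodStatus{ContainerStatuses: []v1.ContainerStatus{
		{Name: "fluentd-cloud-logging", Ready: false},
	}}}
	fmt.Printf("containers with unready status: %v\n", unreadyContainers(pod))
}
```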

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212522c0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:41:14 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:41:14 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421226da0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:41:14 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:41:14 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422104e80>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212524d0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421756f00>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421733000>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4218810d0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421038b40>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:41:14 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:41:14 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421639a70>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e59950>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc4213229a0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:41:14 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:41:14 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42102d1c0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a67830>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc4212ea6f0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4212d4730>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220c6130>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4223e12d0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Jan  8 03:26:19.217: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:93

Issues about this test specifically: #26744 #26929 #38552

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421f3dd80>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-16fd072f-x354 gke-bootstrap-e2e-default-pool-16fd072f-x354 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:24:44 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-08 02:39:54 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-08 04:26:43 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/144/

Multiple broken tests:

Failed: list nodes {e2e.go}

exit status 1

Issues about this test specifically: #38667

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745

Failed: kubectl version {e2e.go}

exit status 1

Issues about this test specifically: #34378

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/148/

Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Expected error:
    <*url.Error | 0xc4223f1470>: {
        Op: "Get",
        URL: "https://104.198.189.91/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-2t20v/replicationcontrollers/rc",
        Err: {StreamID: 3907, Code: 2},
    }
    Get https://104.198.189.91/api/v1/namespaces/e2e-tests-horizontal-pod-autoscaling-2t20v/replicationcontrollers/rc: stream error: stream ID 3907; INTERNAL_ERROR
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:250

Issues about this test specifically: #28657 #30519 #33878
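
An http2 stream INTERNAL_ERROR is a per-request failure, not a dead connection, so a GET like the one above can usually be retried as-is rather than failing the whole spec. A standard-library sketch of such a retry; the backoff policy is illustrative, and a real client would also check the error type and honor context cancellation:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// getWithRetry retries transient request failures (such as a reset http2
// stream) a few times before giving up.
func getWithRetry(url string, attempts int) (*http.Response, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			return resp, nil
		}
		lastErr = err
		time.Sleep(time.Duration(i+1) * time.Second) // simple linear backoff
	}
	return nil, fmt.Errorf("after %d attempts: %w", attempts, lastErr)
}

func main() {
	if _, err := getWithRetry("https://example.com/", 3); err != nil {
		fmt.Println(err)
	}
}
```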

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
Expected error:
    <*errors.errorString | 0xc4203aac90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:229

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4221adf60>: {
        s: "Namespace e2e-tests-horizontal-pod-autoscaling-2t20v is active",
    }
    Namespace e2e-tests-horizontal-pod-autoscaling-2t20v is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/149/

Multiple broken tests:

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Jan 10 17:17:46.466: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2562

Issues about this test specifically: #38172

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421382e30>: {
        s: "Namespace e2e-tests-replicaset-xckl7 is active",
    }
    Namespace e2e-tests-replicaset-xckl7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #33883

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421fdec40>: {
        s: "Namespace e2e-tests-replicaset-xckl7 is active",
    }
    Namespace e2e-tests-replicaset-xckl7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28853 #31585

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422038a70>: {
        s: "Namespace e2e-tests-replicaset-xckl7 is active",
    }
    Namespace e2e-tests-replicaset-xckl7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ec04d0>: {
        s: "Namespace e2e-tests-replicaset-xckl7 is active",
    }
    Namespace e2e-tests-replicaset-xckl7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29516

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Expected error:
    <*url.Error | 0xc421caab70>: {
        Op: "Get",
        URL: "https://104.154.207.205/api/v1/namespaces/e2e-tests-replicaset-xckl7/pods?labelSelector=name%3Dmy-hostname-private-8a10690b-d7bd-11e6-9eb8-0242ac11000a",
        Err: {
            Op: "dial",
            Net: "tcp",
            Source: nil,
            Addr: {
                IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 104, 154, 207, 205],
                Port: 443,
                Zone: "",
            },
            Err: {
                Syscall: "getsockopt",
                Err: 0x6f,
            },
        },
    }
    Get https://104.154.207.205/api/v1/namespaces/e2e-tests-replicaset-xckl7/pods?labelSelector=name%3Dmy-hostname-private-8a10690b-d7bd-11e6-9eb8-0242ac11000a: dial tcp 104.154.207.205:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:143

Issues about this test specifically: #32023

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421b93900>: {
        s: "Namespace e2e-tests-replicaset-xckl7 is active",
    }
    Namespace e2e-tests-replicaset-xckl7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422760e90>: {
        s: "Namespace e2e-tests-replicaset-xckl7 is active",
    }
    Namespace e2e-tests-replicaset-xckl7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421c4d5d0>: {
        s: "Namespace e2e-tests-replicaset-xckl7 is active",
    }
    Namespace e2e-tests-replicaset-xckl7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421e44420>: {
        s: "Namespace e2e-tests-replicaset-xckl7 is active",
    }
    Namespace e2e-tests-replicaset-xckl7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42132d0d0>: {
        s: "Namespace e2e-tests-replicaset-xckl7 is active",
    }
    Namespace e2e-tests-replicaset-xckl7 is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/152/

Multiple broken tests:

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:64
Expected error:
    <*errors.errorString | 0xc421e7cb80>: {
        s: "expected pod \"client-containers-b59d155a-d877-11e6-bf81-0242ac110004\" success: gave up waiting for pod 'client-containers-b59d155a-d877-11e6-bf81-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-b59d155a-d877-11e6-bf81-0242ac110004" success: gave up waiting for pod 'client-containers-b59d155a-d877-11e6-bf81-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29467

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:34
Expected error:
    <*errors.errorString | 0xc421e399e0>: {
        s: "expected pod \"client-containers-28dd1035-d884-11e6-bf81-0242ac110004\" success: gave up waiting for pod 'client-containers-28dd1035-d884-11e6-bf81-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-28dd1035-d884-11e6-bf81-0242ac110004" success: gave up waiting for pod 'client-containers-28dd1035-d884-11e6-bf81-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34520

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:54
Expected error:
    <*errors.errorString | 0xc421f9cfa0>: {
        s: "expected pod \"client-containers-e4f19ad9-d888-11e6-bf81-0242ac110004\" success: gave up waiting for pod 'client-containers-e4f19ad9-d888-11e6-bf81-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-e4f19ad9-d888-11e6-bf81-0242ac110004" success: gave up waiting for pod 'client-containers-e4f19ad9-d888-11e6-bf81-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29994
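
Note: all three Docker Containers cases exercise the same knob: a container's `Command` replaces the image ENTRYPOINT and `Args` replaces CMD, and the test pod must reach success or failure within five minutes. A minimal sketch of the spec shape being exercised, using today's `k8s.io/api/core/v1` import paths (the 1.5-era tree used `pkg/api/v1`); the pod and image names are placeholders, not the test's fixtures:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Command replaces the image ENTRYPOINT; Args replaces CMD.
				// Leaving both unset falls back to the image defaults,
				// which is what the "image defaults" case verifies.
				Command: []string{"/bin/echo"},
				Args:    []string{"override", "arguments"},
			}},
		},
	}
	fmt.Printf("%s runs %v %v\n", pod.Name, pod.Spec.Containers[0].Command, pod.Spec.Containers[0].Args)
}
```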

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/154/

Multiple broken tests:

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420fc8f60>: {
        s: "Namespace e2e-tests-services-swf5j is active",
    }
    Namespace e2e-tests-services-swf5j is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a55f00>: {
        s: "Namespace e2e-tests-services-swf5j is active",
    }
    Namespace e2e-tests-services-swf5j is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #28071

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Jan 12 08:38:48.536: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2562

Issues about this test specifically: #38172

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*net.OpError | 0xc42291b540>: {
        Op: "dial",
        Net: "tcp",
        Source: nil,
        Addr: {
            IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 184, 37, 140],
            Port: 443,
            Zone: "",
        },
        Err: {
            Syscall: "getsockopt",
            Err: 0x6f,
        },
    }
    dial tcp 35.184.37.140:443: getsockopt: connection refused
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:442

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
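
Note: `getsockopt: connection refused` (the raw `Err: 0x6f` is errno 111, ECONNREFUSED) just means the apiserver had not come back up when the test resumed after the restart. A small sketch of waiting for the endpoint to accept TCP connections again before proceeding; the address and timeouts are illustrative, and a real check would more likely probe `/healthz` over HTTPS:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer dials host:port until a TCP connection succeeds or the
// deadline passes. This is only a sketch of "wait until the apiserver is
// reachable again", not the e2e framework's actual recovery logic.
func waitForAPIServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(2 * time.Second) // connection refused: still restarting
	}
	return fmt.Errorf("apiserver at %s not reachable within %v", addr, timeout)
}

func main() {
	if err := waitForAPIServer("35.184.37.140:443", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```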

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42206ca80>: {
        s: "Namespace e2e-tests-services-swf5j is active",
    }
    Namespace e2e-tests-services-swf5j is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/171/

Multiple broken tests:

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:71
Expected error:
    <*errors.errorString | 0xc421f6e030>: {
        s: "failed to wait for pods running: [timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:423

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:234
starting pod liveness-http in namespace e2e-tests-container-probe-ddpn6
Expected error:
    <*errors.errorString | 0xc4203accd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:364

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Jan 18 03:17:03.555: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:310
Expected error:
    <*errors.errorString | 0xc4203accd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #35793

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:211
Jan 18 07:44:41.739: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38439

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Jan 18 00:32:02.191: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2562

Issues about this test specifically: #38172

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule pets if there is a network partition [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:432
Jan 18 05:30:31.381: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37774

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:270
Jan 18 06:38:46.982: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:351
Expected error:
    <*errors.errorString | 0xc4203accd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #36649

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc42202c010>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:547

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #31151 #35586

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:347
Jan 18 07:18:05.414: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #38083

Failed: [k8s.io] Deployment iterative rollouts should eventually progress {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:104
Expected error:
    <*errors.errorString | 0xc421de4800>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:22, Replicas:5, UpdatedReplicas:3, AvailableReplicas:3, UnavailableReplicas:2, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:\"Available\", Status:\"True\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63620348037, nsec:0, loc:(*time.Location)(0x3cec280)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63620348037, nsec:0, loc:(*time.Location)(0x3cec280)}}, Reason:\"MinimumReplicasAvailable\", Message:\"Deployment has minimum availability.\"}, extensions.DeploymentCondition{Type:\"Progressing\", Status:\"False\", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63620348081, nsec:0, loc:(*time.Location)(0x3cec280)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63620348081, nsec:0, loc:(*time.Location)(0x3cec280)}}, Reason:\"ProgressDeadlineExceeded\", Message:\"Replica set \\\"nginx-76672336\\\" has timed out progressing.\"}}}",
    }
    error waiting for deployment "nginx" status to match expectation: deployment status: extensions.DeploymentStatus{ObservedGeneration:22, Replicas:5, UpdatedReplicas:3, AvailableReplicas:3, UnavailableReplicas:2, Conditions:[]extensions.DeploymentCondition{extensions.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63620348037, nsec:0, loc:(*time.Location)(0x3cec280)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63620348037, nsec:0, loc:(*time.Location)(0x3cec280)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, extensions.DeploymentCondition{Type:"Progressing", Status:"False", LastUpdateTime:unversioned.Time{Time:time.Time{sec:63620348081, nsec:0, loc:(*time.Location)(0x3cec280)}}, LastTransitionTime:unversioned.Time{Time:time.Time{sec:63620348081, nsec:0, loc:(*time.Location)(0x3cec280)}}, Reason:"ProgressDeadlineExceeded", Message:"Replica set \"nginx-76672336\" has timed out progressing."}}}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1465

Issues about this test specifically: #36265 #36353 #36628
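
Note: the condition pair in that dump tells the story: `Available=True` (minimum replicas are serving) while `Progressing=False` with reason `ProgressDeadlineExceeded`, meaning the new replica set `nginx-76672336` stalled mid-rollout. A hedged sketch of detecting that state with today's `apps/v1` types (the run above predates them and dumped `extensions.DeploymentStatus`); client plumbing is omitted:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// stalled reports whether a Deployment has given up progressing, i.e. the
// condition seen in the failure above: Progressing=False with reason
// ProgressDeadlineExceeded.
func stalled(d *appsv1.Deployment) bool {
	for _, c := range d.Status.Conditions {
		if c.Type == appsv1.DeploymentProgressing &&
			c.Status == corev1.ConditionFalse &&
			c.Reason == "ProgressDeadlineExceeded" {
			return true
		}
	}
	return false
}

func main() {
	// Reconstruct the failing deployment's Progressing condition.
	d := &appsv1.Deployment{}
	d.Status.Conditions = []appsv1.DeploymentCondition{{
		Type:   appsv1.DeploymentProgressing,
		Status: corev1.ConditionFalse,
		Reason: "ProgressDeadlineExceeded",
	}}
	fmt.Println("stalled:", stalled(d)) // true
}
```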

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support port-forward {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #28371 #29604 #37496

Failed: [k8s.io] Deployment deployment should label adopted RSs and pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:89
Expected error:
    <*errors.errorString | 0xc421a94000>: {
        s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
    }
    failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:953

Issues about this test specifically: #29629 #36270 #37462

Failed: [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:188
Expected error:
    <*errors.errorString | 0xc4203accd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pods.go:101

Issues about this test specifically: #36564

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support inline execution and attach {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:377
Expected
    <bool>: false
to be true
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:376

Issues about this test specifically: #26324 #27715 #28845

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/172/

Multiple broken tests:

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc421efe690>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27233 #36204
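
Note: the blocker in this run is a single kube-system pod: the fluentd-cloud-logging pod on node gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 stays Pending with `Ready=False` (`ContainersNotReady`), which trips the framework's precondition that all kube-system pods be running and ready before a disruptive test starts. A sketch of that predicate with `core/v1` types, as an illustration of the check rather than the framework's exact code:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// runningAndReady mirrors the "RUNNING and READY" check from the failure
// text: phase must be Running and the Ready condition must be True.
func runningAndReady(p *corev1.Pod) bool {
	if p.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// The pod from the dump above: Pending, Ready=False (ContainersNotReady).
	p := &corev1.Pod{}
	p.Status.Phase = corev1.PodPending
	p.Status.Conditions = []corev1.PodCondition{{
		Type:   corev1.PodReady,
		Status: corev1.ConditionFalse,
		Reason: "ContainersNotReady",
	}}
	fmt.Println("running and ready:", runningAndReady(p)) // false
}
```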

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4227494a0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] SchedulerPredicates [Serial] validates that Inter-pod-Affinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc422a6a510>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #31918

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421bd6790>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4210167a0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to add nodes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:291
Expected error:
    <*errors.errorString | 0xc420ad7670>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:56:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:56:35 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:288

Issues about this test specifically: #27470 #30156 #34304 #37620

Failed: [k8s.io] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420dcea40>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:56:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:56:35 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28071

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4210f8fa0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4216c35f0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28091 #38346

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420a91870>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:56:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:56:35 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421f29570>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421afcd60>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421a61160>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #36914

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220b06b0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #29516

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4210f8210>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Jan 18 19:50:22.548: At least one pod wasn't running and ready or succeeded at test start.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:93

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] SchedulerPredicates [Serial] validates MaxPods limit number of pods that are allowed to run [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420dce290>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27662 #29820 #31971 #32505 #34221 #35106 #35110 #35121 #37509

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420972d10>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:56:35 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:56:35 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4228320b0>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28019

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4220a7760>: {
        s: "1 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]\n",
    }
    1 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 gke-bootstrap-e2e-default-pool-b8988bc1-rmx7 Pending       [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:07 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-18 10:54:49 -0800 PST ContainersNotReady containers with unready status: [fluentd-cloud-logging]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-18 12:14:26 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #28853 #31585

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/174/

Multiple broken tests:

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:352
Expected error:
    <*errors.errorString | 0xc4203cee90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26194 #26338 #30345 #34571
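
The DNS entries in this build all surface the wait package's generic "timed out waiting for the condition", which only says that the test's probe pod never reported a full set of successful lookups within the budget; it does not identify which name failed. As a rough in-cluster reproduction of the simplest case, here is a sketch that resolves kubernetes.default the way the cluster-DNS conformance check does (run it inside a pod, not on the test host):

```go
// dnsprobe.go — a minimal sketch; kubernetes.default is the canonical
// in-cluster name, so a failure here points at kube-dns itself rather
// than at any particular service under test.
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	addrs, err := net.LookupHost("kubernetes.default")
	if err != nil {
		fmt.Fprintf(os.Stderr, "lookup failed: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("resolved:", addrs)
}
```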

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc4203cee90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:265

Issues about this test specifically: #32584

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4203cee90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:236

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:135
Expected error:
    <*errors.errorString | 0xc4203cee90>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:74
Expected error:
    <*errors.errorString | 0xc422b00400>: {
        s: "want pod 'test-webserver-da3be3e5-de72-11e6-bef7-0242ac11000b' on 'gke-bootstrap-e2e-default-pool-019f2cb6-xxs7' to be 'Running' but was 'Pending'",
    }
    want pod 'test-webserver-da3be3e5-de72-11e6-bef7-0242ac11000b' on 'gke-bootstrap-e2e-default-pool-019f2cb6-xxs7' to be 'Running' but was 'Pending'
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:56

Issues about this test specifically: #29521
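
This test builds a pod whose container carries a readiness probe with an initial delay, then asserts the pod is not Ready before that delay and never restarts; here the pod never even left Pending, so the probe logic was never exercised. A minimal sketch of that fixture follows (image, port, and delay are illustrative, and in newer k8s.io/api releases the embedded Handler type is named ProbeHandler):

```go
// probe_fixture.go — a minimal sketch of a delayed-readiness pod, in the
// spirit of test/e2e/common/container_probe.go; values are illustrative.
package probe

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func delayedReadinessPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "test-webserver",
				Image: "gcr.io/google_containers/test-webserver:e2e", // assumed image
				ReadinessProbe: &v1.Probe{
					Handler: v1.Handler{
						HTTPGet: &v1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
					},
					// The assertion window: the pod must stay un-Ready at
					// least this long after the container starts.
					InitialDelaySeconds: 30,
				},
			}},
		},
	}
}
```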

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
Expected error:
    <*errors.errorString | 0xc42324c320>: {
        s: "expected pod \"pod-751b47d4-de83-11e6-bef7-0242ac11000b\" success: gave up waiting for pod 'pod-751b47d4-de83-11e6-bef7-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-751b47d4-de83-11e6-bef7-0242ac11000b" success: gave up waiting for pod 'pod-751b47d4-de83-11e6-bef7-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #30851
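
Note that the EmptyDir failures here are not permission-bit mismatches: each one is a 5m0s timeout waiting for a short-lived test pod to reach "success or failure", i.e. the pod never ran at all. For orientation, a minimal sketch of the emptyDir fixture these variants exercise; the volume name is illustrative, and the "(…,tmpfs)" variants differ only in the storage medium:

```go
// emptydir_fixture.go — a minimal sketch of the volume under test.
package emptydir

import v1 "k8s.io/api/core/v1"

func emptyDirVolume(tmpfs bool) v1.Volume {
	src := v1.EmptyDirVolumeSource{}
	if tmpfs {
		// Memory-backed variant, as in "(non-root,0777,tmpfs)".
		src.Medium = v1.StorageMediumMemory
	}
	return v1.Volume{
		Name:         "test-volume",
		VolumeSource: v1.VolumeSource{EmptyDir: &src},
	}
}
```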

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:324
Jan 19 05:26:18.189: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:334
Jan 19 11:25:15.649: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #26425 #26715 #28825 #28880 #32854

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Jan 19 03:57:17.303: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2562

Issues about this test specifically: #38172

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:109
Expected error:
    <*errors.errorString | 0xc422b00630>: {
        s: "expected pod \"pod-12936763-de6f-11e6-bef7-0242ac11000b\" success: gave up waiting for pod 'pod-12936763-de6f-11e6-bef7-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-12936763-de6f-11e6-bef7-0242ac11000b" success: gave up waiting for pod 'pod-12936763-de6f-11e6-bef7-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37071

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:113
Expected error:
    <*errors.errorString | 0xc4218f59a0>: {
        s: "expected pod \"pod-470dc4ad-de6e-11e6-bef7-0242ac11000b\" success: gave up waiting for pod 'pod-470dc4ad-de6e-11e6-bef7-0242ac11000b' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-470dc4ad-de6e-11e6-bef7-0242ac11000b" success: gave up waiting for pod 'pod-470dc4ad-de6e-11e6-bef7-0242ac11000b' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34226

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:310
Jan 19 08:44:05.875: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1985

Issues about this test specifically: #28565 #29072 #29390 #29659 #30072 #33941

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc421b89460>: {
        s: "error waiting for deployment \"nginx\" status to match expectation: total pods available: 17, less than the min required: 18",
    }
    error waiting for deployment "nginx" status to match expectation: total pods available: 17, less than the min required: 18
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1120

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458
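
This failure is an availability floor being violated mid-rollout: a RollingUpdate deployment must keep at least replicas − maxUnavailable pods available, and the status poll saw 17 available against a floor of 18. A toy sketch of that arithmetic follows; the replica count and maxUnavailable budget below are assumptions for illustration, and only the 17-versus-18 comparison comes from the log:

```go
// floor.go — a minimal sketch of the rolling-update availability floor.
package main

import "fmt"

func main() {
	replicas := 20      // assumed desired replicas
	maxUnavailable := 2 // assumed rolling-update budget
	minRequired := replicas - maxUnavailable

	available := 17 // from the failure message above
	fmt.Printf("available %d < min required %d: %v\n",
		available, minRequired, available < minRequired)
}
```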

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/175/

Multiple broken tests:

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:113
Expected error:
    <*errors.errorString | 0xc421727e50>: {
        s: "expected pod \"pod-04de9418-dea9-11e6-8e4d-0242ac110004\" success: gave up waiting for pod 'pod-04de9418-dea9-11e6-8e4d-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-04de9418-dea9-11e6-8e4d-0242ac110004" success: gave up waiting for pod 'pod-04de9418-dea9-11e6-8e4d-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #34226

Failed: [k8s.io] EmptyDir volumes should support (non-root,0644,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:109
Expected error:
    <*errors.errorString | 0xc422539a80>: {
        s: "expected pod \"pod-c3be5fae-dea9-11e6-8e4d-0242ac110004\" success: gave up waiting for pod 'pod-c3be5fae-dea9-11e6-8e4d-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-c3be5fae-dea9-11e6-8e4d-0242ac110004" success: gave up waiting for pod 'pod-c3be5fae-dea9-11e6-8e4d-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #37071

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Jan 19 20:56:43.794: error restarting apiserver: error running gcloud [container clusters --project=gke-up-g1-3-g1-5-up-clu-n --zone=us-central1-a upgrade bootstrap-e2e --master --cluster-version=1.5.3-beta.0.9+f0c2af20c13ab2 --quiet]; got error exit status 1, stdout "", stderr "Upgrading bootstrap-e2e...\n[progress dots elided]done.\nERROR: (gcloud.container.clusters.upgrade) Operation [<Operation\n name: u'operation-1484884274485-60691918'\n operationType: OperationTypeValueValuesEnum(UPGRADE_MASTER, 3)\n selfLink: u'https://test-container.sandbox.googleapis.com/v1/projects/61807208001/zones/us-central1-a/operations/operation-1484884274485-60691918'\n status: StatusValueValuesEnum(DONE, 3)\n statusMessage: u'Timed out waiting for cluster initialization. Cluster API may not be available.'\n targetLink: u'https://test-container.sandbox.googleapis.com/v1/projects/61807208001/zones/us-central1-a/clusters/bootstrap-e2e'\n zone: u'us-central1-a'>] finished with error: Timed out waiting for cluster initialization. Cluster API may not be available.\n"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:433

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508
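
The interesting part of this failure is that it never reached an in-cluster assertion: the test restarts the GKE master by shelling out to gcloud, and the cloud-side upgrade operation itself timed out. A minimal os/exec sketch of that shell-out, with the arguments copied verbatim from the log above:

```go
// restart_master.go — a minimal sketch of the gcloud invocation the e2e
// helper runs; this reproduces the command, not the framework code.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("gcloud", "container", "clusters",
		"--project=gke-up-g1-3-g1-5-up-clu-n", "--zone=us-central1-a",
		"upgrade", "bootstrap-e2e", "--master",
		"--cluster-version=1.5.3-beta.0.9+f0c2af20c13ab2", "--quiet")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// The path the test hit: gcloud exited non-zero after the master
		// upgrade operation timed out on the GKE side.
		fmt.Printf("gcloud failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("upgrade finished:\n%s\n", out)
}
```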

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:89
Expected error:
    <*errors.errorString | 0xc4216eb1b0>: {
        s: "expected pod \"pod-2439a5a6-de91-11e6-8e4d-0242ac110004\" success: gave up waiting for pod 'pod-2439a5a6-de91-11e6-8e4d-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-2439a5a6-de91-11e6-8e4d-0242ac110004" success: gave up waiting for pod 'pod-2439a5a6-de91-11e6-8e4d-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #30851

Failed: [k8s.io] EmptyDir volumes should support (non-root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:117
Expected error:
    <*errors.errorString | 0xc421f82060>: {
        s: "expected pod \"pod-971ef5f4-dea7-11e6-8e4d-0242ac110004\" success: gave up waiting for pod 'pod-971ef5f4-dea7-11e6-8e4d-0242ac110004' to be 'success or failure' after 5m0s",
    }
    expected pod "pod-971ef5f4-dea7-11e6-8e4d-0242ac110004" success: gave up waiting for pod 'pod-971ef5f4-dea7-11e6-8e4d-0242ac110004' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/197/

Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 20 23:25:25.336: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42170ec78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27479 #27675 #28097 #32950 #34301 #37082

Failed: [k8s.io] Downward API should provide pod IP as an env var [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 20 21:26:46.283: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421e58278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 20 21:43:09.390: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b3cc78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should create a quota with scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 20 23:28:35.907: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421b26278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420b2a370>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #27655 #33876

Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 20 23:31:46.918: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42155a278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33008

Failed: [k8s.io] DNS config map should be able to change configuration {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 20 21:29:58.645: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ad2c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37144

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 21 01:56:46.422: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421aa2278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35473

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 20 21:33:14.864: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420695678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34372

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 20 21:20:06.913: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421bc4278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 20 21:16:37.679: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421733678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Pods should allow activeDeadlineSeconds to be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 20 21:53:02.841: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c18c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36649

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 21 02:00:06.814: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420695678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28337

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Jan 20 20:45:34.822: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should remove all the taints with the same key off a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 20 23:03:37.466: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a39678)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31066 #31967 #32219 #32535

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPod Affinity and AntiAffinity is respected if matching {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ab4210>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl taint should update the taint on a node {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 20 21:36:26.960: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d46c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27976 #29503

Failed: [k8s.io] Secrets should be consumable in multiple volumes in a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 20 21:23:34.091: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421174c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 20 21:39:37.159: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a38c78)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27503

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+gke-bootstrap-e2e-0742-pvc-30ac61e6-df92-11e6-8ecc-42010af00015  us-central1-a  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:105
Expected error:
    <*errors.errorString | 0xc420352d50>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34317

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc421ab4ce0>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Failed: [k8s.io] Downward API should provide pod name and namespace as env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 20 21:49:46.742: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421c06278)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/198/

Multiple broken tests:

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc4219fc4f0>: {
        s: "pod \"wget-test\" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-21 03:49:54 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-21 03:50:25 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-21 03:49:54 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.5 PodIP:10.96.4.25 StartTime:2017-01-21 03:49:54 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc4200143f0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://38f1f132f5ca616bbdafbbf18918a173f816bdef76765a6946504f1703dfe34b}]}",
    }
    pod "wget-test" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-21 03:49:54 -0800 PST Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-21 03:50:25 -0800 PST Reason:ContainersNotReady Message:containers with unready status: [wget-test-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2017-01-21 03:49:54 -0800 PST Reason: Message:}] Message: Reason: HostIP:10.240.0.5 PodIP:10.96.4.25 StartTime:2017-01-21 03:49:54 -0800 PST InitContainerStatuses:[] ContainerStatuses:[{Name:wget-test-container State:{Waiting:<nil> Running:<nil> Terminated:0xc4200143f0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:gcr.io/google_containers/busybox:1.24 ImageID:docker://sha256:0cb40641836c461bc97c793971d84d758371ed682042457523e4ae701efe7ec9 ContainerID:docker://38f1f132f5ca616bbdafbbf18918a173f816bdef76765a6946504f1703dfe34b}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Jan 21 08:24:05.668: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 104.154.146.42 31224
retrieved map[]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #33285

Failed: [k8s.io] Proxy version v1 should proxy through a service and a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:272
0 (0; 2m7.306539187s): path /api/v1/namespaces/e2e-tests-proxy-58n5h/pods/proxy-service-5pd8r-z13sn/proxy/ gave status error: {TypeMeta:{Kind: APIVersion:} ListMeta:{SelfLink: ResourceVersion:} Status:Failure Message:an error on the server ("Error: 'ssh: rejected: connect failed (Connection timed out)'\nTrying to reach: 'http://10.96.4.216:80/'") has prevented the request from succeeding Reason:InternalError Details:&StatusDetails{Name:,Group:,Kind:,Causes:[{UnexpectedServerResponse Error: 'ssh: rejected: connect failed (Connection timed out)'
Trying to reach: 'http://10.96.4.216:80/' }],RetryAfterSeconds:0,} Code:503}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/proxy.go:270

Issues about this test specifically: #26164 #26210 #33998 #37158

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:77
Expected error:
    <*errors.errorString | 0xc4219f4970>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:547

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: TearDown {e2e.go}

signal: interrupt

Issues about this test specifically: #34118 #34795 #37058 #38207

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Each pod should start running and responding
Expected error:
    <*errors.errorString | 0xc422670750>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:261

Issues about this test specifically: #37259

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Jan 21 08:32:00.722: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.99.252.113:80/hostName
retrieved map[netserver-2:{} netserver-0:{}]
expected map[netserver-2:{} netserver-0:{} netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #32684 #36278 #37948

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:366
Jan 21 03:45:40.838: Cannot add new entry in 180 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1585

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc4218d2150>: {
        s: "service verification failed for: 10.99.248.199\nexpected [service1-cl9kq service1-ppxq9 service1-q8qv0]\nreceived [service1-ppxq9 service1-q8qv0 wget: download timed out]",
    }
    service verification failed for: 10.99.248.199
    expected [service1-cl9kq service1-ppxq9 service1-q8qv0]
    received [service1-ppxq9 service1-q8qv0 wget: download timed out]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:387

Issues about this test specifically: #29514 #38288
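
Decoded, this verification failure says one backend (service1-cl9kq) was never observed through the service IP after kube-proxy restarted; the "wget: download timed out" entry is a failed fetch counted among the received names. A small sketch of the expected-versus-received comparison the message implies (the helper name is hypothetical):

```go
// verify.go — a minimal sketch of diffing expected backends against the
// hostnames actually seen through a service VIP.
package verify

// missingBackends returns the expected pod names that never appeared in
// the received responses.
func missingBackends(expected, received []string) []string {
	seen := make(map[string]bool, len(received))
	for _, r := range received {
		seen[r] = true
	}
	var missing []string
	for _, e := range expected {
		if !seen[e] {
			missing = append(missing, e)
		}
	}
	return missing
}
```

For the sets in the log above, missingBackends returns [service1-cl9kq], which is the pod to inspect first.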

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Jan 21 08:20:19.829: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.4.23:8080/dial?request=hostName&protocol=udp&host=10.99.242.160&port=90&tries=1'
retrieved map[netserver-0:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #34250

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:49
Jan 21 08:48:06.722: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: http [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:38
Jan 21 05:27:04.543: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.4.52:8080/dial?request=hostName&protocol=http&host=10.96.4.51&port=8080&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32375

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:73
Jan 21 07:49:43.185: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #28657 #30519 #33878
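
Both HPA failures in this build are 15-minute timeouts waiting for the replica count to reach 3, which on a cluster this unhealthy points at pod scheduling or metrics collection rather than autoscaler logic. For orientation, a minimal sketch of the kind of CPU-based HPA these tests create; the target name and numbers are illustrative, not the test's fixtures:

```go
// hpa_fixture.go — a minimal sketch of a CPU-utilization HPA.
package hpa

import (
	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func cpuHPA() *autoscalingv1.HorizontalPodAutoscaler {
	minReplicas := int32(1)
	targetCPU := int32(20) // percent; assumed value
	return &autoscalingv1.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{Name: "rc-light"}, // assumed name
		Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
				Kind: "ReplicationController", Name: "rc-light", APIVersion: "v1",
			},
			MinReplicas:                    &minReplicas,
			MaxReplicas:                    5,
			TargetCPUUtilizationPercentage: &targetCPU,
		},
	}
}
```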

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Jan 21 06:44:28.911: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.4.205:8080/dial?request=hostName&protocol=udp&host=10.96.3.174&port=8081&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32830
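
The Granular Checks failures all use the same "dial" probe: the suite curls one netserver pod's /dial endpoint, which dials a peer pod over the requested protocol and reports which hostnames answered, so "retrieved map[]" means zero peers responded. A minimal sketch of issuing that probe by hand, with the URL copied from the intra-pod UDP failure above (the response body is assumed to be JSON):

```go
// dialcheck.go — a minimal sketch; run from somewhere with pod-network access.
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	url := "http://10.96.4.205:8080/dial?request=hostName&protocol=udp&host=10.96.3.174&port=8081&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("dial probe unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	// An empty response set here corresponds to "retrieved map[]" in the log.
	fmt.Printf("dial responses: %s\n", body)
}
```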

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Services should be able to create a functioning NodePort service {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:485
Jan 21 07:55:40.923: Could not reach HTTP service through 104.154.146.42:30623 after 5m0s: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2530

Issues about this test specifically: #28064 #28569 #34036

Failed: [k8s.io] DNS should provide DNS for ExternalName services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:501
Expected error:
    <*errors.errorString | 0xc42038cbc0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #32584

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/208/

Multiple broken tests:

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a secret. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 13:08:26.256: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f96ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32053 #32758

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c7a350>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b864de9e-rl5r gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:56:02 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b864de9e-rl5r            gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:13 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b864de9e-rl5r gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:56:02 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b864de9e-rl5r            gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:13 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #34223

Failed: [k8s.io] Pods should get a host IP [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 18:12:13.824: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219cf8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33008

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 14:11:38.738: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216f8ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a pod. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 15:16:16.375: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4217944f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38516

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 14:43:57.589: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42132a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37479

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 19:49:22.976: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fe8ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27232

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 11:42:09.111: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e48ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31938

Failed: [k8s.io] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 20:10:32.778: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215f24f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35422

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 14:28:29.759: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a50ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 15:35:57.154: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42149cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26870 #36429

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 11:09:50.018: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b80ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 15:06:24.541: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42168e4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27014 #27834

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 18:09:02.747: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210f0ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects a client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 15:49:00.546: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a504f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26955

Failed: [k8s.io] ConfigMap should be consumable from pods in volume with defaultMode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 13:56:51.515: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420beb8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34827

Failed: [k8s.io] DNS should provide DNS for the cluster [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 17:05:42.762: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e884f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26194 #26338 #30345 #34571

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 13:53:31.254: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210024f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28006 #28866 #29613 #36224

Failed: [k8s.io] Downward API volume should update annotations on modification [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 16:28:01.816: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420eb0ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28462 #33782 #34014 #37374

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 15:23:36.988: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d324f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Expected error:
    <*errors.errorString | 0xc4203ab560>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner Alpha should create and delete alpha persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 20:00:41.697: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42184c4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Docker Containers should use the image defaults if command and args are blank [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 12:02:17.149: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b9aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34520

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 14:59:30.312: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420dec4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36950

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should return command exit codes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 14:24:58.584: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c6aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31151 #35586

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 18:50:23.143: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421370ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28337

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 15:39:09.353: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213e38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Expected error:
    <*errors.errorString | 0xc4203ab560>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #36178

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 15:55:07.898: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4202778f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 14:04:39.851: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c4aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36794

Failed: [k8s.io] Services should provide secure master service [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 20:23:24.020: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213504f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid NodeAffinity is rejected {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420db2cb0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b864de9e-rl5r gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:56:02 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b864de9e-rl5r            gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:13 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b864de9e-rl5r gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:56:02 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b864de9e-rl5r            gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:13 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #33883

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Expected error:
    <*errors.errorString | 0xc4203ab560>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32830

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 19:06:56.888: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4207144f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27397 #27917 #31592

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420947330>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b864de9e-rl5r gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:56:02 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b864de9e-rl5r            gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:13 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b864de9e-rl5r gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:56:02 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b864de9e-rl5r            gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:13 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Failed: [k8s.io] Generated release_1_5 clientset should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 11:26:04.666: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420cceef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl apply should reuse port when apply to an existing SVC {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 13:14:48.346: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c204f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36948

Failed: [k8s.io] Downward API should provide pod IP as an env var [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 14:07:52.137: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f3c4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Deployment deployment should support rollover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 14:00:42.824: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a538f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26509 #26834 #29780 #35355 #38275 #39879

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 19:10:24.089: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215c8ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Port forwarding [k8s.io] With a server that expects no client request should support a client that connects, sends no data, and disconnects [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 16:11:44.975: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f6eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27673

Failed: [k8s.io] Deployment deployment should support rollback {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 19:27:06.786: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4213518f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28348 #36703

Failed: [k8s.io] NodeProblemDetector [k8s.io] KernelMonitor should generate node condition and events for corresponding errors {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 19:53:58.315: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216b58f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28069 #28168 #28343 #29656 #33183 #38145

Failed: [k8s.io] MetricsGrabber should grab all metrics from API server. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 12:15:41.113: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4202778f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29513

Failed: [k8s.io] Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 17:41:25.373: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210e2ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37428 #40256

Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 17:18:25.648: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215158f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 20:03:57.409: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fe38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should prevent NodePort collisions {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 13:11:36.455: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206e18f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31575 #32756

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 16:18:20.194: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b584f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor using proxy subresource {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 11:32:28.580: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a3e4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37435

Failed: [k8s.io] Deployment deployment should create new pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 20:20:13.854: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42137cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35579

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Expected error:
    <*errors.errorString | 0xc4203ab560>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33285

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 15:12:58.128: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42042d8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26138 #28429 #28737 #38064

Failed: [k8s.io] Proxy version v1 should proxy logs on node [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 16:21:30.442: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42146e4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36242

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 16:45:36.752: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210924f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37502

Failed: [k8s.io] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 16:49:18.955: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fe38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 14:21:28.291: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210724f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 11:16:25.032: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b3cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31085 #34207 #37097

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Scheduler. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 11:38:54.973: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b9b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:124
Expected error:
    <*errors.errorString | 0xc4211aefb0>: {
        s: "error waiting for node gke-bootstrap-e2e-default-pool-b864de9e-rl5r boot ID to change: timed out waiting for the condition",
    }
    error waiting for node gke-bootstrap-e2e-default-pool-b864de9e-rl5r boot ID to change: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/restart.go:98

Issues about this test specifically: #26744 #26929 #38552

Failed: [k8s.io] Job should delete a job {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 19:34:21.657: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210d2ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28003

Failed: [k8s.io] Pods should be updated [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 18:56:45.311: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f6eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35793

Failed: [k8s.io] Downward API volume should provide container's cpu request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 11:22:47.452: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206918f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 19:46:01.764: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fe38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26175 #26846 #27334 #28293 #29149 #31884 #33672 #34774

Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 16:05:00.593: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42137eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31277 #31347 #31710 #32260 #32531

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 13:31:10.993: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4202778f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36706

Failed: [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 12:05:33.054: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420da18f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36564

Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 16:08:31.904: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e70ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31502 #32947 #38646

Failed: DiffResources {e2e.go}

Error: 5 leaked resources
[ addresses ]
+NAME                              REGION       ADDRESS          STATUS
+ac2547d62e2b711e6912542010af0001  us-central1  104.155.130.251  RESERVED
[ firewall-rules ]
+k8s-fw-ac252910fe2b711e6912542010af0001  bootstrap-e2e  0.0.0.0/0     tcp:80                                  gke-bootstrap-e2e-dda1c202-node
[ forwarding-rules ]
+ac252910fe2b711e6912542010af0001  us-central1  104.197.70.203   TCP          us-central1/targetPools/ac252910fe2b711e6912542010af0001
[ target-pools ]
+ac252910fe2b711e6912542010af0001  us-central1

Issues about this test specifically: #33373 #33416 #34060
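
DiffResources means the harness diffed the project's GCE inventory before and after the run and found objects left behind by cluster teardown. A manual cleanup sketch using the names from the diff above (assumes an authenticated gcloud SDK pointed at the bootstrap-e2e project; the forwarding rule has to go before the target pool it references):

    gcloud compute forwarding-rules delete ac252910fe2b711e6912542010af0001 --region us-central1
    gcloud compute target-pools delete ac252910fe2b711e6912542010af0001 --region us-central1
    gcloud compute firewall-rules delete k8s-fw-ac252910fe2b711e6912542010af0001
    gcloud compute addresses delete ac2547d62e2b711e6912542010af0001 --region us-central1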

Failed: [k8s.io] SchedulerPredicates [Serial] validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42095f160>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b864de9e-rl5r gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:56:02 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b864de9e-rl5r            gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:13 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b864de9e-rl5r gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:56:02 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b864de9e-rl5r            gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:13 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #35279

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:152
Expected error:
    <*errors.errorString | 0xc4203ab560>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33887

Failed: [k8s.io] Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully. [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 16:00:22.621: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214944f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28297 #37101 #38201

Failed: [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 15:20:19.421: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42170eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30264

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 18:34:00.328: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216e84f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37259

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should allow template updates {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 12:20:31.351: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4211c0ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38439

Failed: [k8s.io] EmptyDir volumes should support (non-root,0666,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 12:36:48.160: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42042d8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34226

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 18:53:33.657: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216b58f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 17:25:04.752: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210e0ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 16:15:06.270: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fe38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 12:33:20.897: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4202204f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 14:18:16.139: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420f3b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ConfigMap updates should be reflected in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 11:29:18.331: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b578f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30352 #38166

Failed: [k8s.io] Deployment deployment should delete old replica sets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 16:24:47.642: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210bf8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28339 #36379

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 11:58:49.933: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420da84f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30317 #31591 #37163

Failed: [k8s.io] SSH should SSH to all nodes and run commands {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 17:51:51.870: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fe38f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26129 #32341

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 12:12:30.925: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210418f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 17:21:46.561: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4210104f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] Probing container should not be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 16:33:12.931: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a50ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30342 #31350

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 19:38:22.351: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214c6ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:141
Expected error:
    <*errors.errorString | 0xc4203ab560>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34064

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a public image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 11:45:33.493: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420fcaef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #30981

Failed: [k8s.io] Cadvisor should be healthy on every node. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:36
Jan 24 17:45:35.974: Failed after retrying 0 times for cadvisor to be healthy on all nodes. Errors:
[an error on the server ("Error: 'ssh: rejected: connect failed (Connection refused)'\nTrying to reach: 'https://gke-bootstrap-e2e-default-pool-b864de9e-rl5r:10250/stats/?timeout=5m0s'") has prevented the request from succeeding]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cadvisor.go:86

Issues about this test specifically: #32371
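
The cadvisor check hits the same node problem from the other direction: the API server's proxy to the kubelet on port 10250 of gke-bootstrap-e2e-default-pool-b864de9e-rl5r is refused. One way to replay the probe by hand (a sketch; the /api/v1/proxy/nodes path is the legacy proxy form this test generation used):

    kubectl proxy --port=8001 &
    curl http://127.0.0.1:8001/api/v1/proxy/nodes/gke-bootstrap-e2e-default-pool-b864de9e-rl5r:10250/stats/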

Failed: [k8s.io] ConfigMap should be consumable from pods in volume as non-root [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 11:13:02.201: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421028ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27245

Failed: [k8s.io] Staging client repo client should create pods, delete pods, watch pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 13:34:23.230: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206e0ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31183 #36182

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 11:19:35.313: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e944f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27195

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] ServiceAccounts should mount an API token into pods [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 20:13:49.591: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42146eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37526

Failed: [k8s.io] Secrets should be consumable from pods in env vars [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 13:50:13.100: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42175aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32025 #36823

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 15:26:53.195: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a50ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34372

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 19:31:07.389: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206364f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29514 #38288

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAntiAffinity is respected if matching 2 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc42013da40>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b864de9e-rl5r gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:56:02 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b864de9e-rl5r            gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:13 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b864de9e-rl5r gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:56:02 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b864de9e-rl5r            gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:13 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #30078 #30142

Failed: [k8s.io] DisruptionController evictions: no PDB => should allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 13:47:00.852: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4208b78f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32646

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 12:51:44.725: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206cf8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27406 #27669 #29770 #32642

Failed: [k8s.io] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc420c26ab0>: {
        s: "2 / 11 pods in namespace \"kube-system\" are NOT in RUNNING and READY state in 5m0s\nPOD                                                                NODE                                         PHASE   GRACE CONDITIONS\nfluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b864de9e-rl5r gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:56:02 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]\nkube-proxy-gke-bootstrap-e2e-default-pool-b864de9e-rl5r            gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:13 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]\n",
    }
    2 / 11 pods in namespace "kube-system" are NOT in RUNNING and READY state in 5m0s
    POD                                                                NODE                                         PHASE   GRACE CONDITIONS
    fluentd-cloud-logging-gke-bootstrap-e2e-default-pool-b864de9e-rl5r gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:56:02 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]
    kube-proxy-gke-bootstrap-e2e-default-pool-b864de9e-rl5r            gke-bootstrap-e2e-default-pool-b864de9e-rl5r Running 30s   [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:13 -0800 PST  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-01-24 10:55:12 -0800 PST  }]

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:93

Issues about this test specifically: #27115 #28070 #30747 #31341 #35513 #37187 #38340

Failed: [k8s.io] Deployment paused deployment should be ignored by the controller {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Jan 24 11:35:44.825: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b4e4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28067 #28378 #3269


k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/209/

Multiple broken tests:

Failed: [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:43
Expected error:
    <*errors.errorString | 0xc4238f7d80>: {
        s: "expected pod \"client-containers-b3aaf6bf-e2e9-11e6-be18-0242ac110006\" success: gave up waiting for pod 'client-containers-b3aaf6bf-e2e9-11e6-be18-0242ac110006' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-b3aaf6bf-e2e9-11e6-be18-0242ac110006" success: gave up waiting for pod 'client-containers-b3aaf6bf-e2e9-11e6-be18-0242ac110006' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #36706

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Docker Containers should be able to override the image's default command and arguments [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:64
Expected error:
    <*errors.errorString | 0xc4218af4c0>: {
        s: "expected pod \"client-containers-c720d950-e2d9-11e6-be18-0242ac110006\" success: gave up waiting for pod 'client-containers-c720d950-e2d9-11e6-be18-0242ac110006' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-c720d950-e2d9-11e6-be18-0242ac110006" success: gave up waiting for pod 'client-containers-c720d950-e2d9-11e6-be18-0242ac110006' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29467

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:246
Jan 25 00:44:39.072: Unexpected error: <nil>
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:4390

Issues about this test specifically: #36794

Failed: [k8s.io] Docker Containers should be able to override the image's default commmand (docker entrypoint) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/docker_containers.go:54
Expected error:
    <*errors.errorString | 0xc4238b1a90>: {
        s: "expected pod \"client-containers-d5410ea7-e2e6-11e6-be18-0242ac110006\" success: gave up waiting for pod 'client-containers-d5410ea7-e2e6-11e6-be18-0242ac110006' to be 'success or failure' after 5m0s",
    }
    expected pod "client-containers-d5410ea7-e2e6-11e6-be18-0242ac110006" success: gave up waiting for pod 'client-containers-d5410ea7-e2e6-11e6-be18-0242ac110006' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

Issues about this test specifically: #29994

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/212/

Multiple broken tests:

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:100
Jan 26 04:06:01.634: timeout waiting 15m0s for pods size to be 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:163
Jan 26 00:22:02.553: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.3.46:8080/dial?request=hostName&protocol=udp&host=10.99.255.245&port=90&tries=1'
retrieved map[]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #34250
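
The Granular Checks failures in this run all follow the same mechanism: the test execs a curl against a client pod's /dial handler, which fans out to the service (or endpoint) under test and reports back which netserver hostnames answered; "retrieved map[...]" vs "expected map[...]" is the diff of those sets, so a missing netserver-N key means that one backend never responded. A rough manual replay of the probe above, assuming the pod IP, service IP, and port from this specific run (all cluster-local values):

    # replay the dial probe with more tries; the handler returns JSON
    # listing the hostnames that answered, e.g. {"responses":["netserver-0",...]}
    kubectl exec <any-test-pod> -- curl -q -s \
      'http://10.96.3.46:8080/dial?request=hostName&protocol=udp&host=10.99.255.245&port=90&tries=10'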

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:62
Jan 26 03:35:34.770: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27394 #27660 #28079 #28768 #35871

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: udp [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:187
Jan 26 00:35:53.853: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 146.148.35.219 30200
retrieved map[netserver-1:{} netserver-0:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #33285

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc421243fe0>: {
        s: "service verification failed for: 10.99.243.56\nexpected [service1-05ql1 service1-1q9ks service1-51bdx]\nreceived [service1-1q9ks service1-51bdx]",
    }
    service verification failed for: 10.99.243.56
    expected [service1-05ql1 service1-1q9ks service1-51bdx]
    received [service1-1q9ks service1-51bdx]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:428

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] ReplicaSet should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:88
Jan 25 21:14:24.913: Did not get expected responses within the timeout period of 120.00 seconds.
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/replica_set.go:163

Issues about this test specifically: #32023

Failed: [k8s.io] Services should preserve source pod IP for traffic thru service cluster IP {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:304
Expected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running &{/workspace/kubernetes_skew/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.3.99 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-f3hl6 execpod-sourceip-gke-bootstrap-e2e-default-pool-01beda2a-vf6bc2 -- /bin/sh -c wget -T 30 -qO- 10.99.243.238:8080 | grep client_address] []  <nil>  wget: download timed out\n [] <nil> 0xc4217404e0 exit status 1 <nil> <nil> true [0xc422586008 0xc422586038 0xc422586068] [0xc422586008 0xc422586038 0xc422586068] [0xc422586020 0xc422586060] [0x9728b0 0x9728b0] 0xc422274600 <nil>}:\nCommand stdout:\n\nstderr:\nwget: download timed out\n\nerror:\nexit status 1\n",
        },
        Code: 1,
    }
    error running &{/workspace/kubernetes_skew/platforms/linux/amd64/kubectl [kubectl --server=https://35.184.3.99 --kubeconfig=/workspace/.kube/config exec --namespace=e2e-tests-services-f3hl6 execpod-sourceip-gke-bootstrap-e2e-default-pool-01beda2a-vf6bc2 -- /bin/sh -c wget -T 30 -qO- 10.99.243.238:8080 | grep client_address] []  <nil>  wget: download timed out
     [] <nil> 0xc4217404e0 exit status 1 <nil> <nil> true [0xc422586008 0xc422586038 0xc422586068] [0xc422586008 0xc422586038 0xc422586068] [0xc422586020 0xc422586060] [0x9728b0 0x9728b0] 0xc422274600 <nil>}:
    Command stdout:

    stderr:
    wget: download timed out

    error:
    exit status 1

not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:1066

Issues about this test specifically: #31085 #34207 #37097
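
For the source-IP test, the failure is not a wrong client_address but "wget: download timed out": the exec pod could not reach the service VIP at all within 30s. The check is easy to replay by hand (namespace, pod name, and VIP below come from this run and will differ elsewhere):

    kubectl exec --namespace=e2e-tests-services-f3hl6 \
      execpod-sourceip-gke-bootstrap-e2e-default-pool-01beda2a-vf6bc2 -- \
      /bin/sh -c 'wget -T 30 -qO- 10.99.243.238:8080 | grep client_address'
    # the backend echoes the connecting address; through a ClusterIP with
    # the default kube-proxy mode this should be the exec pod's own IP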

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:141
Jan 25 21:43:47.747: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.0.132:8080/dial?request=hostName&protocol=udp&host=10.99.249.102&port=90&tries=1'
retrieved map[netserver-1:{} netserver-0:{}]
expected map[netserver-1:{} netserver-2:{} netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #34064

Failed: [k8s.io] Deployment scaled rollout deployment should not block on annotation check {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:95
Expected error:
    <*errors.errorString | 0xc42087ed70>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1067

Issues about this test specifically: #30100 #31810 #34331 #34717 #34816 #35337 #36458

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: udp {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:123
Jan 25 22:21:31.050: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.99.240.89 90
retrieved map[netserver-2:{} netserver-1:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #36271

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for node-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:59
Jan 26 04:24:22.926: Failed to find expected endpoints:
Tries 0
Command echo 'hostName' | timeout -t 2 nc -w 1 -u 10.96.3.121 8081
retrieved map[]
expected map[netserver-1:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #35283 #36867

Failed: [k8s.io] PreStop should call prestop when killing a pod [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:167
validating pre-stop.
Expected error:
    <*errors.errorString | 0xc4203aad10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pre_stop.go:159

Issues about this test specifically: #30287 #35953

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update nodePort: http [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:175
Jan 26 01:53:11.585: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://146.148.35.219:30264/hostName
retrieved map[netserver-0:{} netserver-1:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #33730 #37417

Failed: [k8s.io] Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:88
Expected error:
    <*errors.errorString | 0xc4203aad10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dashboard.go:76

Issues about this test specifically: #26191

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc4220ded00>: {
        s: "service verification failed for: 10.99.244.57\nexpected [service1-75f9f service1-7bppr service1-k4tq9]\nreceived [service1-75f9f service1-7bppr]",
    }
    service verification failed for: 10.99.244.57
    expected [service1-75f9f service1-7bppr service1-k4tq9]
    received [service1-75f9f service1-7bppr]
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:387

Issues about this test specifically: #29514 #38288
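
Both "restarting apiserver" and "restarting kube-proxy" fail the same way here: the framework queries the service VIP repeatedly and expects to collect every backend's hostname, and exactly one of three pods (service1-k4tq9) never showed up after the restart, which points at a stale endpoint/iptables entry rather than a dead service. A hedged way to see which backends actually answer (VIP from the log; the service port is assumed to be 80, since the failure line prints only the IP):

    # hit the VIP repeatedly from a pod on the cluster network and
    # count distinct responders; each backend echoes its own pod name
    for i in $(seq 1 50); do
      kubectl exec <execpod> -- wget -qO- -T 2 http://10.99.244.57:80
      echo
    done | sort | uniq -c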

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Jan 26 00:45:54.989: Failed to find expected endpoints:
Tries 0
Command timeout -t 15 curl -q -s --connect-timeout 1 http://10.99.252.31:80/hostName
retrieved map[netserver-1:{} netserver-2:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:263

Issues about this test specifically: #32684 #36278 #37948

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Jan 26 01:03:53.342: timeout waiting 15m0s for pods size to be 3
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:285

Issues about this test specifically: #27406 #27669 #29770 #32642
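
All three HPA failures in this run are scale-down timeouts ("timeout waiting 15m0s for pods size to be N"): the generated CPU load is lowered and the test waits for the replica count to follow. When triaging these, the useful signals are the HPA's observed vs target CPU and the actual replica transitions, e.g. (object names are whatever the test namespace contains at the time):

    kubectl describe hpa --namespace=<test-ns>   # current/target CPU plus scaling events
    kubectl get pods --namespace=<test-ns> -w    # watch replicas actually terminate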

Failed: [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1252
Expected error:
    <*errors.errorString | 0xc4203aad10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2937

Issues about this test specifically: #38174

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Jan 26 05:50:05.140: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.3.124:8080/dial?request=hostName&protocol=http&host=10.99.243.86&port=80&tries=1'
retrieved map[]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #36178

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Pods should function for intra-pod communication: udp [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:45
Jan 26 03:18:23.356: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.3.97:8080/dial?request=hostName&protocol=udp&host=10.96.2.2&port=8081&tries=1'
retrieved map[]
expected map[netserver-0:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #32830

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
Jan 26 00:43:24.331: Failed to find expected endpoints:
Tries 0
Command curl -q -s 'http://10.96.2.242:8080/dial?request=hostName&protocol=http&host=10.99.242.93&port=80&tries=1'
retrieved map[netserver-0:{} netserver-1:{}]
expected map[netserver-0:{} netserver-1:{} netserver-2:{}]

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:210

Issues about this test specifically: #34104

Failed: [k8s.io] Services should be able to change the type and ports of a service [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:808
Jan 25 22:56:02.359: Could not reach UDP service through 146.148.35.219:31193 after 5m0s: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2542

Issues about this test specifically: #26134

Failed: [k8s.io] DNS should provide DNS for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:400
Expected error:
    <*errors.errorString | 0xc4203aad10>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/dns.go:219

Issues about this test specifically: #26168 #27450

Failed: [k8s.io] Services should create endpoints for unready pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1185
Jan 26 03:46:45.444: expected un-ready endpoint for Service slow-terminating-unready-pod within 5m0s, stdout: 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1123

Issues about this test specifically: #26172

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/222/

Multiple broken tests:

Failed: [k8s.io] InitContainer should invoke init containers on a RestartNever pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:99
Expected
    <*errors.errorString | 0xc4203d2e20>: {
        s: "timed out waiting for the condition",
    }
to be nil
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:84

Issues about this test specifically: #31936

Failed: [k8s.io] Dynamic provisioning [k8s.io] DynamicProvisioner should create and delete persistent volumes [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:136
Expected error:
    <*errors.errorString | 0xc42249d710>: {
        s: "gave up waiting for pod 'pvc-volume-tester-rcjdf' to be 'success or failure' after 15m0s",
    }
    gave up waiting for pod 'pvc-volume-tester-rcjdf' to be 'success or failure' after 15m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/volume_provisioning.go:232

Issues about this test specifically: #32185 #32372 #36494

Failed: [k8s.io] V1Job should run a job to completion when tasks sometimes fail and are locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:82
Expected error:
    <*errors.errorString | 0xc4203d2e20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:81

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:437
Container should have service environment variables set
Expected error:
    <*errors.errorString | 0xc421cdd960>: {
        s: "expected pod \"client-envvars-235445e6-e652-11e6-bb44-0242ac110002\" success: gave up waiting for pod 'client-envvars-235445e6-e652-11e6-bb44-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "client-envvars-235445e6-e652-11e6-bb44-0242ac110002" success: gave up waiting for pod 'client-envvars-235445e6-e652-11e6-bb44-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:436

Issues about this test specifically: #33985

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:137
Expected error:
    <*errors.errorString | 0xc4203d2e20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:124

Issues about this test specifically: #29511 #29987 #30238 #38364

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/volumes.go:393
Expected error:
    <*errors.errorString | 0xc4203d2e20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #36970

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:141
Expected error:
    <*errors.errorString | 0xc4203d2e20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:103

Issues about this test specifically: #28984 #33827 #36917

Failed: [k8s.io] Networking should provide Internet connection for containers [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:49
Expected error:
    <*errors.errorString | 0xc422c2acf0>: {
        s: "gave up waiting for pod 'wget-test' to be 'success or failure' after 5m0s",
    }
    gave up waiting for pod 'wget-test' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:48

Issues about this test specifically: #26171 #28188

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:422
Expected error:
    <*errors.errorString | 0xc4203d2e20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:395

Issues about this test specifically: #26127 #28081

Failed: [k8s.io] Services should be able to up and down services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:356
Expected error:
    <*errors.errorString | 0xc4203d2e20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1730

Issues about this test specifically: #26128 #26685 #33408 #36298

Failed: [k8s.io] Services should only allow access from service loadbalancer source ranges [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1252
Expected error:
    <*errors.errorString | 0xc4203d2e20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:1730

Issues about this test specifically: #38174

Failed: UpgradeTest {e2e.go}

exit status 1

Issues about this test specifically: #37745 #40486

Failed: [k8s.io] Job should run a job to completion when tasks sometimes fail and are not locally restarted {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:96
Expected error:
    <*errors.errorString | 0xc4203d2e20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:95

Issues about this test specifically: #31498 #33896 #35507

Failed: [k8s.io] Job should run a job to completion when tasks succeed {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:59
Expected error:
    <*errors.errorString | 0xc4203d2e20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:58

Issues about this test specifically: #31938

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/empty_dir_wrapper.go:170
Expected error:
    <*errors.errorString | 0xc4203d2e20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #32945

Failed: [k8s.io] Pods should support retrieving logs from the container over websockets {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:564
Expected error:
    <*errors.errorString | 0xc4203d2e20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:67

Issues about this test specifically: #30263

Failed: [k8s.io] Upgrade [Feature:Upgrade] [k8s.io] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/cluster_upgrade.go:101
Jan 29 05:07:04.815: Failed waiting for pods to be running: Timeout waiting for 1 pods to be ready
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:2649

Issues about this test specifically: #38172

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:362
Expected error:
    <*errors.errorString | 0xc4203d2e20>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:338

Issues about this test specifically: #28010 #28427 #33997 #37952

Failed: [k8s.io] Downward API should provide pod IP as an env var [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:83
Expected error:
    <*errors.errorString | 0xc421eeae60>: {
        s: "expected pod \"downward-api-34abff44-e65e-11e6-bb44-0242ac110002\" success: gave up waiting for pod 'downward-api-34abff44-e65e-11e6-bb44-0242ac110002' to be 'success or failure' after 5m0s",
    }
    expected pod "downward-api-34abff44-e65e-11e6-bb44-0242ac110002" success: gave up waiting for pod 'downward-api-34abff44-e65e-11e6-bb44-0242ac110002' to be 'success or failure' after 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2167

k8s-github-robot commented 7 years ago

https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.3-gci-1.5-upgrade-cluster-new/247/

Multiple broken tests:

Failed: [k8s.io] EmptyDir wrapper volumes should not conflict {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 18:35:15.007: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421697400), (*api.Node)(0xc4216978f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32467 #36276

Failed: [k8s.io] Downward API volume should provide container's memory request [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 16:08:55.752: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420bff400), (*api.Node)(0xc420bff8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should update endpoints: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:152
Expected error:
    <*errors.errorString | 0xc4203adbe0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #33887

Failed: [k8s.io] Downward API volume should provide container's cpu limit [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 14:44:19.324: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a23400), (*api.Node)(0xc421a238f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36694

Failed: [k8s.io] GCP Volumes [k8s.io] NFSv4 should be mountable for NFSv4 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 17:46:48.913: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a9a000), (*api.Node)(0xc421a9a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36970

Failed: [k8s.io] Services should work after restarting apiserver [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:451
Expected error:
    <*errors.errorString | 0xc4218fedb0>: {
        s: "service verification failed for: 10.99.250.184\nexpected [service1-7p5lk service1-lxxfm service1-n7bn3]\nreceived []",
    }
    service verification failed for: 10.99.250.184
    expected [service1-7p5lk service1-lxxfm service1-n7bn3]
    received []
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:428

Issues about this test specifically: #28257 #29159 #29449 #32447 #37508

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 18:38:32.551: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421676a00), (*api.Node)(0xc421676ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28426 #32168 #33756 #34797

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 15:51:48.223: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a92000), (*api.Node)(0xc421a924f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26139 #28342 #28439 #31574 #36576

Failed: [k8s.io] MetricsGrabber should grab all metrics from a Kubelet. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 18:17:38.350: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42170ca00), (*api.Node)(0xc42170cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27295 #35385 #36126 #37452 #37543

Failed: [k8s.io] Variable Expansion should allow substituting values in a container's args [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 18:48:09.553: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a22000), (*api.Node)(0xc421a224f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28503

Failed: [k8s.io] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 18:14:28.097: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a27400), (*api.Node)(0xc421a278f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27195

Failed: [k8s.io] Deployment RecreateDeployment should delete old pods and create new ones {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 16:05:43.595: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421561400), (*api.Node)(0xc4215618f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29197 #36289 #36598 #38528

Failed: [k8s.io] Deployment deployment should support rollback when there's replica set with no revision {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 21:46:01.674: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b9f400), (*api.Node)(0xc420b9f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #34687 #38442

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:314
Expected error:
    <*errors.errorString | 0xc4203adbe0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:282

Issues about this test specifically: #37259

Failed: SkewTest {e2e.go}

exit status 1

Issues about this test specifically: #38660

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 15:54:58.530: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421210000), (*api.Node)(0xc4212104f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29050

Failed: [k8s.io] HostPath should support r/w {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 18:20:50.498: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421bfe000), (*api.Node)(0xc421bfe4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] ResourceQuota should create a ResourceQuota and capture the life of a service. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 18:05:41.661: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d1aa00), (*api.Node)(0xc420d1aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29040 #35756

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl create quota should reject quota with invalid scopes {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 16:12:06.064: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421289400), (*api.Node)(0xc4212898f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Stateful Set recreate should recreate evicted statefulset {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 18:02:10.469: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42160ca00), (*api.Node)(0xc42160cef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 20:07:47.183: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4219e6000), (*api.Node)(0xc4219e64f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted. {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 19:54:29.291: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d1aa00), (*api.Node)(0xc420d1aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:114
Expected error:
    <*errors.errorString | 0xc4203adbe0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #32684 #36278 #37948

Failed: [k8s.io] Secrets should be consumable from pods in volume [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 18:57:42.013: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420b7e000), (*api.Node)(0xc420b7e4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29221

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 14:41:07.178: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421283400), (*api.Node)(0xc4212838f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27532 #34567

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 20:11:03.318: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421619400), (*api.Node)(0xc4216198f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29834 #35757

Failed: [k8s.io] Pods Delete Grace Period should be submitted and removed [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 18:28:50.987: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216d0000), (*api.Node)(0xc4216d04f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36564

Failed: [k8s.io] ReplicationController should serve a basic image on each replica with a private image {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 20:01:24.694: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421346000), (*api.Node)(0xc4213464f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32087

Failed: [k8s.io] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 21:37:51.949: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420e1ea00), (*api.Node)(0xc420e1eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37525

Failed: [k8s.io] Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #26127 #28081
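
This Pod Disks entry (and its sibling later in this run) did not fail on its own logic: the precondition at pd.go:71 found a single schedulable node, which is consistent with the "All nodes should be ready after test, Not ready nodes: [...]" failures dominating this run — the cluster appears to have been down to one Ready node for much of it. The precondition amounts to:

    kubectl get nodes
    # the PD tests need >= 2 nodes reporting Ready; anything less trips
    # the suite's "Requires at least 2 nodes" assertion before the test body runs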

Failed: [k8s.io] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 19:28:53.416: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421677400), (*api.Node)(0xc4216778f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32945

Failed: [k8s.io] Kubectl client [k8s.io] Simple pod should support exec through an HTTP proxy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 17:14:36.348: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420664a00), (*api.Node)(0xc420664ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27156 #28979 #30489 #33649

Failed: [k8s.io] Probing container should not be restarted with a exec "cat /tmp/health" liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 17:38:53.723: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421282a00), (*api.Node)(0xc421282ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37914

Failed: [k8s.io] Proxy version v1 should proxy to cadvisor {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 20:04:34.952: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d1b400), (*api.Node)(0xc420d1b8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #35297

Failed: [k8s.io] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network_partition.go:402
Feb  6 14:06:46.012: Failed waiting for pods to enter running: timed out waiting for the condition
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/petset.go:923

Issues about this test specifically: #37373

Failed: [k8s.io] DNS should provide DNS for pods for Hostname and Subdomain Annotation {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 19:23:22.423: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421153400), (*api.Node)(0xc4211538f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28337

Failed: [k8s.io] Pods should contain environment variables for services [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 19:57:43.381: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421346a00), (*api.Node)(0xc421346ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #33985

Failed: [k8s.io] EmptyDir volumes should support (root,0777,default) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 19:20:01.818: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421bfe000), (*api.Node)(0xc421bfe4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26780

Failed: [k8s.io] V1Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 15:58:16.738: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421722000), (*api.Node)(0xc4217224f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29657

Failed: DiffResources {e2e.go}

Error: 1 leaked resources
[ disks ]
+gke-bootstrap-e2e-6fdf-pvc-6d586575-ecb6-11e6-b6c2-42010af0002b  us-central1-a  1        pd-standard  READY

Issues about this test specifically: #33373 #33416 #34060 #40437 #40454
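
The DiffResources leak is a GCE persistent disk whose name embeds a PVC UID (gke-...-pvc-<uid>), i.e. a dynamically provisioned volume that outlived its PV cleanup. If it needs manual removal, something like the following works (disk name and zone taken from the diff above):

    gcloud compute disks list --filter='name~^gke-bootstrap-e2e-.*-pvc-'
    gcloud compute disks delete \
      gke-bootstrap-e2e-6fdf-pvc-6d586575-ecb6-11e6-b6c2-42010af0002b \
      --zone=us-central1-a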

Failed: [k8s.io] EmptyDir volumes volume on default medium should have the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 17:58:52.131: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421a9ea00), (*api.Node)(0xc421a9eef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Failed: [k8s.io] Probing container should have monotonically increasing restart count [Conformance] [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 22:02:29.059: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421940a00), (*api.Node)(0xc421940ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37314

Failed: [k8s.io] EmptyDir volumes should support (root,0666,tmpfs) [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 18:41:44.813: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420c1f400), (*api.Node)(0xc420c1f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37500

Failed: [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 17:33:26.779: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420905400), (*api.Node)(0xc4209058f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29521

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 2 pods to 1 pod {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 16:46:06.023: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42191e000), (*api.Node)(0xc42191e4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27196 #28998 #32403 #33341

Failed: [k8s.io] SchedulerPredicates [Serial] validates that InterPodAffinity is respected if matching with multiple Affinities {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4218fe210>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #34223

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should handle healthy pet restarts during scale {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 19:34:03.858: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212a2a00), (*api.Node)(0xc4212a2ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38254

Failed: [k8s.io] DaemonRestart [Disruptive] Kubelet should not restart containers across restart {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 16:02:07.325: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421426a00), (*api.Node)(0xc421426ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27502 #28722 #32037 #38168

Failed: [k8s.io] Networking should provide unchanging, static URL paths for kubernetes api services {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 18:32:01.252: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421560000), (*api.Node)(0xc4215604f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #36109

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for endpoint-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:132
Expected error:
    <*errors.errorString | 0xc4203adbe0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #34104

Failed: [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 19:50:27.539: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d1aa00), (*api.Node)(0xc420d1aef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37502

Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for pod-Service: http {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:96
Expected error:
    <*errors.errorString | 0xc4203adbe0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:520

Issues about this test specifically: #36178

Failed: [k8s.io] SchedulerPredicates [Serial] validates that embedding the JSON NodeAffinity setting as a string in the annotation value work {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:100
Expected error:
    <*errors.errorString | 0xc4218b0000>: {
        s: "Waiting for terminating namespaces to be deleted timed out",
    }
    Waiting for terminating namespaces to be deleted timed out
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduler_predicates.go:78

Issues about this test specifically: #29816 #30018 #33974

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality should provide basic identity {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 18:11:17.796: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421293400), (*api.Node)(0xc4212938f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37361 #37919

Failed: [k8s.io] Secrets should be consumable from pods in volume with mappings and Item Mode set [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 17:11:15.048: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421ac4a00), (*api.Node)(0xc421ac4ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37529

Failed: [k8s.io] Job should keep restarting failed pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 19:04:54.369: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc42149f400), (*api.Node)(0xc42149f8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28006 #28866 #29613 #36224

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] ReplicationController light Should scale from 1 pod to 2 pods {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 20:17:10.909: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4215c7400), (*api.Node)(0xc4215c78f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27443 #27835 #28900 #32512 #38549

Failed: [k8s.io] Job should scale a job up {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 19:01:06.202: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421282000), (*api.Node)(0xc4212824f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #29511 #29987 #30238 #38364

Failed: [k8s.io] Deployment lack of progress should be reported in the deployment status {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 21:49:59.919: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420d69400), (*api.Node)(0xc420d698f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #31697 #36574 #39785

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 18:44:57.347: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4216d0000), (*api.Node)(0xc4216d04f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28507 #29315 #35595

Failed: [k8s.io] Probing container should be restarted with a /healthz http liveness probe [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 22:06:02.318: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420a1a000), (*api.Node)(0xc420a1a4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38511

Failed: [k8s.io] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 17:50:17.500: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421936a00), (*api.Node)(0xc421936ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #28437 #29084 #29256 #29397 #36671

Failed: [k8s.io] Services should work after restarting kube-proxy [Disruptive] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:408
Expected error:
    <*errors.errorString | 0xc42183ad10>: {
        s: "service verification failed for: 10.99.242.173\nexpected [service1-dhzcc service1-k3c4j service1-vwg4v]\nreceived []",
    }
    service verification failed for: 10.99.242.173
    expected [service1-dhzcc service1-k3c4j service1-vwg4v]
    received []
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/service.go:387

Issues about this test specifically: #29514 #38288
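
For context on this failure mode: the service tests verify a service by repeatedly hitting its ClusterIP and collecting the hostnames of the pods that answer; "received []" means no backend ever responded after the kube-proxy restart. A hypothetical sketch of that verification loop follows. It mirrors the shape of the error message above, not the e2e framework's actual code, assumes each backend replies with its pod name (as the test's serve-hostname image does), and would only work from inside the cluster network.

```go
// Hypothetical service-verification loop: poll the service URL, collect
// responding hostnames, and fail if they never match the expected pod names.
package main

import (
	"fmt"
	"io"
	"net/http"
	"reflect"
	"sort"
	"strings"
	"time"
)

func sortedKeys(m map[string]bool) []string {
	out := make([]string, 0, len(m))
	for k := range m {
		out = append(out, k)
	}
	sort.Strings(out)
	return out
}

// verifyService polls url until the set of responding hostnames equals want,
// or the timeout expires.
func verifyService(url string, want []string, timeout time.Duration) error {
	sort.Strings(want)
	seen := map[string]bool{}
	for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(time.Second) {
		if resp, err := http.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			seen[strings.TrimSpace(string(body))] = true
		}
		if reflect.DeepEqual(sortedKeys(seen), want) {
			return nil
		}
	}
	return fmt.Errorf("service verification failed for: %s\nexpected %v\nreceived %v",
		url, want, sortedKeys(seen))
}

func main() {
	// Values taken from the failure above.
	err := verifyService("http://10.99.242.173",
		[]string{"service1-dhzcc", "service1-k3c4j", "service1-vwg4v"}, time.Minute)
	fmt.Println(err)
}
```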

Failed: [k8s.io] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 21:53:17.259: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421491400), (*api.Node)(0xc4214918f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #26209 #29227 #32132 #37516

Failed: [k8s.io] Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession [Slow] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:77
Requires at least 2 nodes
Expected
    <int>: 1
to be >=
    <int>: 2
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/pd.go:71

Issues about this test specifically: #28010 #28427 #33997 #37952
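
The "Expected &lt;int&gt;: 1 to be >= &lt;int&gt;: 2" block is Gomega's standard rendering of a failed numeric matcher: the test asserts a minimum of two schedulable nodes before exercising the shared PD, and the upgraded cluster apparently had only one. Reproduced as a standalone assertion (the test and variable names are illustrative; only the assertion style comes from the output):

```go
// Standalone Gomega assertion that produces the same failure shape:
//   Requires at least 2 nodes
//   Expected
//       <int>: 1
//   to be >=
//       <int>: 2
package e2e_test

import (
	"testing"

	"github.com/onsi/gomega"
)

func TestRequiresTwoNodes(t *testing.T) {
	g := gomega.NewWithT(t)
	numNodes := 1 // illustrative: the cluster had a single usable node
	g.Expect(numNodes).To(gomega.BeNumerically(">=", 2), "Requires at least 2 nodes")
}
```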

Failed: [k8s.io] [HPA] Horizontal pod autoscaling (scale resource: CPU) [k8s.io] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1 {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/horizontal_pod_autoscaling.go:52
Expected error:
    <*errors.errorString | 0xc4218b0a70>: {
        s: "Only 2 pods started out of 5",
    }
    Only 2 pods started out of 5
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/autoscaling_utils.go:352

Issues about this test specifically: #27406 #27669 #29770 #32642
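
This one failed during setup rather than during autoscaling: the error from `autoscaling_utils.go:352` indicates the test waits for all replicas of its load-generating workload to start before driving CPU, and only 2 of 5 pods ever ran (consistent with the not-ready nodes in the other failures). A rough sketch of such a wait using client-go's polling helper; the package, function, and parameter names are assumptions:

```go
// Hypothetical helper: wait until `want` pods matching a label selector are
// Running, in the spirit of the "Only 2 pods started out of 5" error above.
package e2eutil

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForRunningPods polls every 2s until want pods matching selector in ns
// are Running, or timeout expires.
func WaitForRunningPods(cs kubernetes.Interface, ns, selector string, want int, timeout time.Duration) error {
	var running int
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		running = 0
		for _, p := range pods.Items {
			if p.Status.Phase == v1.PodRunning {
				running++
			}
		}
		return running >= want, nil
	})
	if err != nil {
		return fmt.Errorf("Only %d pods started out of %d", running, want)
	}
	return nil
}
```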

Failed: [k8s.io] StatefulSet [k8s.io] Basic StatefulSet functionality Scaling should happen in predictable order and halt if any pet is unhealthy {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 17:55:39.921: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4212a2a00), (*api.Node)(0xc4212a2ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #38083

Failed: [k8s.io] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 21:42:14.373: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4206ee000), (*api.Node)(0xc4206ee4f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32668 #35405

Failed: [k8s.io] GCP Volumes [k8s.io] GlusterFS should be mountable {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 18:25:34.884: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc421940a00), (*api.Node)(0xc421940ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #37056

Failed: [k8s.io] HostPath should give a volume the correct mode [Conformance] {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 17:42:06.002: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc4214bb400), (*api.Node)(0xc4214bb8f0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #32122 #38040

Failed: [k8s.io] Deployment RollingUpdateDeployment should scale up and down in the right order {Kubernetes e2e suite}

/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Feb  6 14:37:56.360: All nodes should be ready after test, Not ready nodes: []*api.Node{(*api.Node)(0xc420be0a00), (*api.Node)(0xc420be0ef0)}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:438

Issues about this test specifically: #27232

roberthbailey commented 7 years ago

Closing as obsolete.