openshift / origin

Conformance test suite for OpenShift
http://www.openshift.org
Apache License 2.0

Extended.[Conformance][networking][router] router headers The HAProxy router should set Forwarded headers appropriately #15692

Closed · soltysh closed this issue 6 years ago

soltysh commented 7 years ago
Stacktrace

/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:106
Expected error:
    <*errors.errorString | 0xc421dc0080>: {
        s: "last response from server was not 200:\n",
    }
    last response from server was not 200:

not to have occurred
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:80
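For context, the failing test expects the backend echo server to see an RFC 7239 `Forwarded` header injected by the HAProxy router. A minimal sketch of parsing such a header value (a hypothetical helper for illustration, not part of the suite; the sample value is made up):

```go
package main

import (
	"fmt"
	"strings"
)

// parseForwarded splits an RFC 7239 Forwarded header value such as
// `for=10.0.0.1;host=example.com;proto=http` into key/value pairs.
func parseForwarded(v string) map[string]string {
	out := map[string]string{}
	for _, part := range strings.Split(v, ";") {
		kv := strings.SplitN(strings.TrimSpace(part), "=", 2)
		if len(kv) == 2 {
			out[strings.ToLower(kv[0])] = strings.Trim(kv[1], `"`)
		}
	}
	return out
}

func main() {
	h := parseForwarded(`for=10.0.0.1;host=example.com;proto=http`)
	fmt.Println(h["for"], h["host"], h["proto"]) // 10.0.0.1 example.com http
}
```

The test never got that far here: it failed while waiting for the router's healthz endpoint, before any header assertions ran.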

Standard Output

[BeforeEach] [Top Level]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/test.go:52
[BeforeEach] [Conformance][networking][router] router headers
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:130
STEP: Creating a kubernetes client
Aug  9 03:21:16.103: INFO: >>> kubeConfig: /etc/origin/master/admin.kubeconfig
STEP: Building a namespace api object
Aug  9 03:21:16.130: INFO: configPath is now "/tmp/extended-test-router-headers-6fds1-bdrgw-user.kubeconfig"
Aug  9 03:21:16.130: INFO: The user is now "extended-test-router-headers-6fds1-bdrgw-user"
Aug  9 03:21:16.130: INFO: Creating project "extended-test-router-headers-6fds1-bdrgw"
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [Conformance][networking][router] router headers
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:39
[It] should set Forwarded headers appropriately
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/router/headers.go:106
Aug  9 03:21:16.206: INFO: Creating new exec pod
STEP: creating an http echo server from a config file "/tmp/fixture-testdata-dir142453635/test/extended/testdata/router-http-echo-server.yaml"
Aug  9 03:21:26.227: INFO: Running 'oc create --config=/tmp/extended-test-router-headers-6fds1-bdrgw-user.kubeconfig --namespace=extended-test-router-headers-6fds1-bdrgw -f /tmp/fixture-testdata-dir142453635/test/extended/testdata/router-http-echo-server.yaml'
deploymentconfig "router-http-echo" created
service "router-http-echo" created
route "router-http-echo" created
STEP: waiting for the healthz endpoint to respond
Aug  9 03:21:27.499: INFO: Running '/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/kubectl --server=https://ip-172-18-3-54.ec2.internal:8443 --kubeconfig=/etc/origin/master/admin.kubeconfig exec --namespace=extended-test-router-headers-6fds1-bdrgw execpod -- /bin/sh -c 
        set -e
        for i in $(seq 1 180); do
            code=$( curl -k -s -o /dev/null -w '%{http_code}\n' --header 'Host: 172.30.68.147' "http://172.30.68.147:1936/healthz" ) || rc=$?
            if [[ "${rc:-0}" -eq 0 ]]; then
                echo $code
                if [[ $code -eq 200 ]]; then
                    exit 0
                fi
                if [[ $code -ne 503 ]]; then
                    exit 1
                fi
            else
                echo "error ${rc}" 1>&2
            fi
            sleep 1
        done
        '
Aug  9 03:27:29.401: INFO: stderr: "error 7\n" (repeats elided; one `error 7` per iteration of the 180-iteration wait loop)
Aug  9 03:27:29.415: INFO: Weighted Router test [Conformance][networking][router] router headers The HAProxy router should set Forwarded headers appropriately logs:

[AfterEach] [Conformance][networking][router] router headers
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:131
STEP: Collecting events from namespace "extended-test-router-headers-6fds1-bdrgw".
STEP: Found 18 events.
Aug  9 03:27:29.419: INFO: At 2017-08-09 03:21:16 -0400 EDT - event for execpod: {default-scheduler } Scheduled: Successfully assigned execpod to ip-172-18-3-54.ec2.internal
Aug  9 03:27:29.419: INFO: At 2017-08-09 03:21:17 -0400 EDT - event for execpod: {kubelet ip-172-18-3-54.ec2.internal} SuccessfulMountVolume: MountVolume.SetUp succeeded for volume "default-token-4nfdm" 
Aug  9 03:27:29.419: INFO: At 2017-08-09 03:21:20 -0400 EDT - event for execpod: {kubelet ip-172-18-3-54.ec2.internal} Pulled: Container image "gcr.io/google_containers/hostexec:1.2" already present on machine
Aug  9 03:27:29.419: INFO: At 2017-08-09 03:21:21 -0400 EDT - event for execpod: {kubelet ip-172-18-3-54.ec2.internal} Created: Created container
Aug  9 03:27:29.419: INFO: At 2017-08-09 03:21:22 -0400 EDT - event for execpod: {kubelet ip-172-18-3-54.ec2.internal} Started: Started container
Aug  9 03:27:29.419: INFO: At 2017-08-09 03:21:26 -0400 EDT - event for router-http-echo: {deploymentconfig-controller } DeploymentCreated: Created new replication controller "router-http-echo-1" for version 1
Aug  9 03:27:29.419: INFO: At 2017-08-09 03:21:26 -0400 EDT - event for router-http-echo-1-deploy: {default-scheduler } Scheduled: Successfully assigned router-http-echo-1-deploy to ip-172-18-3-54.ec2.internal
Aug  9 03:27:29.419: INFO: At 2017-08-09 03:21:27 -0400 EDT - event for router-http-echo-1-deploy: {kubelet ip-172-18-3-54.ec2.internal} SuccessfulMountVolume: MountVolume.SetUp succeeded for volume "deployer-token-gp4ts" 
Aug  9 03:27:29.419: INFO: At 2017-08-09 03:21:30 -0400 EDT - event for router-http-echo-1-deploy: {kubelet ip-172-18-3-54.ec2.internal} Pulled: Container image "openshift/origin-deployer:b7d564a" already present on machine
Aug  9 03:27:29.419: INFO: At 2017-08-09 03:21:31 -0400 EDT - event for router-http-echo-1-deploy: {kubelet ip-172-18-3-54.ec2.internal} Created: Created container
Aug  9 03:27:29.419: INFO: At 2017-08-09 03:21:31 -0400 EDT - event for router-http-echo-1-deploy: {kubelet ip-172-18-3-54.ec2.internal} Started: Started container
Aug  9 03:27:29.419: INFO: At 2017-08-09 03:21:33 -0400 EDT - event for router-http-echo-1: {replication-controller } SuccessfulCreate: Created pod: router-http-echo-1-9q6hb
Aug  9 03:27:29.419: INFO: At 2017-08-09 03:21:33 -0400 EDT - event for router-http-echo-1-9q6hb: {kubelet ip-172-18-3-54.ec2.internal} SuccessfulMountVolume: MountVolume.SetUp succeeded for volume "default-token-4nfdm" 
Aug  9 03:27:29.419: INFO: At 2017-08-09 03:21:33 -0400 EDT - event for router-http-echo-1-9q6hb: {default-scheduler } Scheduled: Successfully assigned router-http-echo-1-9q6hb to ip-172-18-3-54.ec2.internal
Aug  9 03:27:29.419: INFO: At 2017-08-09 03:21:35 -0400 EDT - event for router-http-echo-1-9q6hb: {kubelet ip-172-18-3-54.ec2.internal} Pulling: pulling image "openshift/origin-base"
Aug  9 03:27:29.420: INFO: At 2017-08-09 03:21:36 -0400 EDT - event for router-http-echo-1-9q6hb: {kubelet ip-172-18-3-54.ec2.internal} Pulled: Successfully pulled image "openshift/origin-base"
Aug  9 03:27:29.420: INFO: At 2017-08-09 03:21:37 -0400 EDT - event for router-http-echo-1-9q6hb: {kubelet ip-172-18-3-54.ec2.internal} Started: Started container
Aug  9 03:27:29.420: INFO: At 2017-08-09 03:21:37 -0400 EDT - event for router-http-echo-1-9q6hb: {kubelet ip-172-18-3-54.ec2.internal} Created: Created container
Aug  9 03:27:29.426: INFO: POD                          NODE                         PHASE    GRACE  CONDITIONS
Aug  9 03:27:29.426: INFO: docker-registry-2-73nxj      ip-172-18-3-54.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-08-09 02:47:18 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-08-09 02:47:28 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-08-09 02:47:18 -0400 EDT  }]
Aug  9 03:27:29.426: INFO: registry-console-1-zv8gl     ip-172-18-3-54.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-08-09 02:47:36 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-08-09 02:47:42 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-08-09 02:47:36 -0400 EDT  }]
Aug  9 03:27:29.426: INFO: router-2-2bmfn               ip-172-18-3-54.ec2.internal  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-08-09 02:47:17 -0400 EDT  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2017-08-09 02:47:17 -0400 EDT ContainersNotReady containers with unready status: [router]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-08-09 02:47:17 -0400 EDT  }]
Aug  9 03:27:29.426: INFO: execpod                      ip-172-18-3-54.ec2.internal  Running  1s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-08-09 03:21:16 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-08-09 03:21:24 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-08-09 03:21:16 -0400 EDT  }]
Aug  9 03:27:29.426: INFO: router-http-echo-1-9q6hb     ip-172-18-3-54.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-08-09 03:21:33 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-08-09 03:21:38 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-08-09 03:21:33 -0400 EDT  }]
Aug  9 03:27:29.426: INFO: prometheus-1552260379-s1vt3  ip-172-18-3-54.ec2.internal  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2017-08-09 03:17:43 -0400 EDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2017-08-09 03:18:01 -0400 EDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2017-08-09 03:17:43 -0400 EDT  }]
Aug  9 03:27:29.426: INFO: 
Aug  9 03:27:29.428: INFO: 
Logging node info for node ip-172-18-3-54.ec2.internal
Aug  9 03:27:29.431: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ip-172-18-3-54.ec2.internal,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/ip-172-18-3-54.ec2.internal,UID:9b548538-7ccb-11e7-a906-0e47f3fb0d68,ResourceVersion:26870,Generation:0,CreationTimestamp:2017-08-09 02:25:58 -0400 EDT,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: ip-172-18-3-54.ec2.internal,region: infra,zone: default,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:,ExternalID:ip-172-18-3-54.ec2.internal,ProviderID:aws:////i-0a49165c051538341,Unschedulable:false,Taints:[],},Status:NodeStatus{Capacity:ResourceList{alpha.kubernetes.io/nvidia-gpu: {{0 0} {<nil>} 0 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},memory: {{16657121280 0} {<nil>}  BinarySI},pods: {{40 0} {<nil>} 40 DecimalSI},},Allocatable:ResourceList{alpha.kubernetes.io/nvidia-gpu: {{0 0} {<nil>} 0 DecimalSI},cpu: {{4 0} {<nil>} 4 DecimalSI},memory: {{16552263680 0} {<nil>}  BinarySI},pods: {{40 0} {<nil>} 40 DecimalSI},},Phase:,Conditions:[{OutOfDisk False 2017-08-09 03:27:23 -0400 EDT 2017-08-09 02:25:58 -0400 EDT KubeletHasSufficientDisk kubelet has sufficient disk space available} {MemoryPressure False 2017-08-09 03:27:23 -0400 EDT 2017-08-09 02:25:58 -0400 EDT KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2017-08-09 03:27:23 -0400 EDT 2017-08-09 02:25:58 -0400 EDT KubeletHasNoDiskPressure kubelet has no disk pressure} {Ready True 2017-08-09 03:27:23 -0400 EDT 2017-08-09 02:47:05 -0400 EDT KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.18.3.54} {Hostname 
ip-172-18-3-54.ec2.internal}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f9370ed252a14f73b014c1301a9b6d1b,SystemUUID:EC2DA6CE-EEFB-F74C-18FF-DD85F05713E6,BootID:d4fb1edc-371d-41d2-acef-4c13e56f0c44,KernelVersion:3.10.0-693.el7.x86_64,OSImage:Red Hat Enterprise Linux Server 7.4 (Maipo),ContainerRuntimeVersion:docker://1.12.6,KubeletVersion:v1.7.0+695f48a16f,KubeProxyVersion:v1.7.0+695f48a16f,OperatingSystem:linux,Architecture:amd64,},Images:[{[openshift/origin-federation:b7d564a openshift/origin-federation:latest] 1270164718} {[openshift/openvswitch:b7d564a openshift/openvswitch:latest] 1232219765} {[openshift/node:b7d564a openshift/node:latest] 1230538038} {[openshift/origin-docker-registry:b7d564a openshift/origin-docker-registry:latest] 1133203475} {[openshift/origin-gitserver:b7d564a openshift/origin-gitserver:latest] 1120043408} {[openshift/origin-keepalived-ipfailover:b7d564a openshift/origin-keepalived-ipfailover:latest] 1075748073} {[docker.io/openshift/origin-haproxy-router@sha256:91674e052a1c59c81586e59073a69b9783f087c7eb1218d6f87e20e43acc3f1e docker.io/openshift/origin-haproxy-router:latest] 1072860597} {[openshift/origin-service-catalog:b7d564a openshift/origin-service-catalog:latest] 1062911689} {[openshift/origin-f5-router:b7d564a openshift/origin-f5-router:latest] 1048958350} {[openshift/origin-docker-builder:b7d564a openshift/origin-docker-builder:latest] 1048958350} {[openshift/origin-recycler:b7d564a openshift/origin-recycler:latest] 1048958350} {[openshift/origin-deployer:b7d564a openshift/origin-deployer:latest] 1048958350} {[openshift/origin-sti-builder:b7d564a openshift/origin-sti-builder:latest] 1048958350} {[openshift/origin:b7d564a openshift/origin:latest] 1048958350} {[openshift/origin-cluster-capacity:b7d564a openshift/origin-cluster-capacity:latest] 1002702329} 
{[docker.io/openshift/origin-release@sha256:d1799794658fe66335ea5686c8f8ada5d712ee3f1aa0c4959ea8480066c2d09d docker.io/openshift/origin-release:golang-1.8] 889465100} {[docker.io/openshift/origin-release@sha256:a7f7c6f9a752a5409aa6b9c7811c3225e30d1192985dabdb098237e02384ceb6] 889446165} {[docker.io/openshift/origin-release@sha256:5e6b255ef4d920669d12325aa6c440ceee9281d76da355698efaff7eb7f71296 docker.io/openshift/origin-release:golang-1.7] 865942438} {[docker.io/centos/mongodb-32-centos7@sha256:af10a9eae4ef9ded3b1567d43ac2ce7d8c449fcb48204e9b1e094d9dd53f95d0] 798403396} {[docker.io/openshift/origin-gce@sha256:44adf81e6b3a7592f61f4ae21b86d77a2cfa8750045be9336de2490062d58200 docker.io/openshift/origin-gce:latest] 786472427} {[docker.io/openshift/origin-haproxy-router@sha256:c9374f410e32907be1fa1d14d77e58206ef0be949a63a635e6f3bafa77b35726 docker.io/openshift/origin-haproxy-router:v1.5.1] 738600544} {[docker.io/openshift/origin-deployer@sha256:77ac551235d8edf43ccb2fbd8fa5384ad9d8b94ba726f778fced18710c5f74f0 docker.io/openshift/origin-deployer:v1.5.1] 617474229} {[172.30.67.242:5000/extended-test-build-valuefrom-qbdp5-kvnl2/test@sha256:c6fef076b2d7237c22900f4f926c63d9e70588d8aa775ccf8f333af536cb0124 172.30.67.242:5000/extended-test-build-valuefrom-qbdp5-kvnl2/test:latest] 557899801} {[docker.io/centos/php-70-centos7@sha256:4c0591344bcbab6a99f19cbd1596d6cdfbf09bc41ab5bbf5af49a7dfa82b38d9 docker.io/centos/php-70-centos7:latest] 539245141} {[172.30.67.242:5000/extended-test-build-valuefrom-d14qv-s34np/test@sha256:85a151f386d52658f0c0157e6dfd72506364438dc298195e573f5e75e991c5d9 172.30.67.242:5000/extended-test-build-valuefrom-d14qv-s34np/test:latest] 539237583} {[172.30.67.242:5000/extended-test-new-app-djwn6-t6sbz/a234567890123456789012345678901234567890123456789012345678@sha256:728a285c1732d93e38b4db23901101b1b454bf23f3c28806652110a98c023e26 172.30.67.242:5000/extended-test-new-app-djwn6-t6sbz/a234567890123456789012345678901234567890123456789012345678:latest] 526690644} 
{[docker.io/centos/nodejs-6-centos7@sha256:5104faa4de648eab31fe4311bbe1f55be582c80ea5c30321cc9c113c75d566f0] 513113856} {[docker.io/centos/ruby-23-centos7@sha256:06df436ee1ab911cf5bd6fc296c6296e29c52639b1c8e0209dbcbc227c8cd7e6] 506419593} {[docker.io/centos/ruby-22-centos7@sha256:32457883fd1522458c0022c7add91a74fc1591829b222f5b38449fa5ece7a349 docker.io/centos/ruby-22-centos7:latest] 491762288} {[docker.io/centos/s2i-base-centos7@sha256:d71b4111861c794aa126c5074334a9c96baf00ee2b5ec385aa9647e02f2696be docker.io/centos/s2i-base-centos7:latest] 429894857} {[docker.io/openshift/origin-docker-registry@sha256:cfe82b08f94c015d31664573f3caa4307ffc7941c930cc7ae9419d68cec32ed5 docker.io/openshift/origin-docker-registry:v1.5.1] 428360819} {[docker.io/cockpit/kubernetes@sha256:faa05392a72bf0a4f71bd6cd1c192cedd824da32cef99410b649bf9a9aa2bbc3 docker.io/cockpit/kubernetes:latest] 407609178} {[openshift/origin-egress-http-proxy:b7d564a openshift/origin-egress-http-proxy:latest] 395944041} {[docker.io/openshift/origin-base@sha256:3848ab52436662e4193f34063bbfd259c0c09cbe91562acec7dd6eb510ca2e94 docker.io/openshift/origin-base:latest] 363024868} {[openshift/origin-base:b7d564a openshift/origin-base:latest] 363016198} {[docker.io/openshift/prometheus@sha256:856ae17355bf635aa8a741fa717a3d1162df961bd0d245a1adc07489be886f52 docker.io/openshift/prometheus:v2.0.0-dev] 267619474} {[docker.io/openshift/oauth-proxy@sha256:ba567d720ed6e878d5ccba5b2545f2753501df81233b420b656c770c62260372 docker.io/openshift/oauth-proxy:v1.0.0] 231232415} {[docker.io/openshift/origin-pod@sha256:e6558855325bf5c4a96b5fe83e7c83e342bc12582e28148d167049d071c90e22 docker.io/openshift/origin-pod:latest] 213178417} {[openshift/origin-pod:b7d564a openshift/origin-pod:latest] 213178385} {[openshift/origin-source:b7d564a openshift/origin-source:latest] 192503646} {[docker.io/centos@sha256:26f74cefad82967f97f3eeeef88c1b6262f9b42bc96f2ad61d6f3fdf544759b8 docker.io/centos:7 docker.io/centos:centos7 docker.io/centos:latest] 
192503276} {[172.30.67.242:5000/extended-test-docker-build-pullsecret-j9tr9-d6390/image1@sha256:e5853b8113705fd6a4b7167ca001e440924feb65924a8f24c69c5faf1a7dc17f 172.30.67.242:5000/extended-test-docker-build-pullsecret-j9tr9-d6390/image1:latest] 192503276} {[gcr.io/google_containers/jessie-dnsutils@sha256:2460d596912244b5f8973573f7150e7264b570015f4becc2d0096f0bd1d17e36 gcr.io/google_containers/jessie-dnsutils:e2e] 190122856} {[gcr.io/google_containers/nginx-slim@sha256:8b4501fe0fe221df663c22e16539f399e89594552f400408303c42f3dd8d0e52 gcr.io/google_containers/nginx-slim:0.8] 110461313} {[docker.io/nginx@sha256:788fa27763db6d69ad3444e8ba72f947df9e7e163bad7c1f5614f8fd27a311c3 docker.io/nginx:latest] 107463605} {[gcr.io/google_containers/nginx-slim@sha256:dd4efd4c13bec2c6f3fe855deeab9524efe434505568421d4f31820485b3a795 gcr.io/google_containers/nginx-slim:0.7] 86838142} {[gcr.io/google_containers/nettest@sha256:8af3a0e8b8ab906b0648dd575e8785e04c19113531f8ffbaab9e149aa1a60763 gcr.io/google_containers/nettest:1.7] 24051275} {[gcr.io/google_containers/hostexec@sha256:cab8d4e2526f8f767c64febe4ce9e0f0e58cd35fdff81b3aadba4dd041ba9f00 gcr.io/google_containers/hostexec:1.2] 13185747} {[gcr.io/google_containers/dnsutils@sha256:cd9182f6d74e616942db1cef6f25e1e54b49ba0330c2e19d3ec061f027666cc0 gcr.io/google_containers/dnsutils:e2e] 8893907} {[gcr.io/google_containers/netexec@sha256:56c53846f44ea214e4aa5df37c9c50331f0b09e64a32cc7cf17c7e1808d38eef gcr.io/google_containers/netexec:1.7] 8016035}],VolumesInUse:[],VolumesAttached:[],},}
Aug  9 03:27:29.431: INFO: 
Logging kubelet events for node ip-172-18-3-54.ec2.internal
Aug  9 03:27:29.434: INFO: 
Logging pods the kubelet thinks is on node ip-172-18-3-54.ec2.internal
Aug  9 03:27:29.441: INFO: registry-console-1-zv8gl started at 2017-08-09 02:47:36 -0400 EDT (0+1 container statuses recorded)
Aug  9 03:27:29.441: INFO:  Container registry-console ready: true, restart count 0
Aug  9 03:27:29.441: INFO: router-http-echo-1-9q6hb started at 2017-08-09 03:21:33 -0400 EDT (0+1 container statuses recorded)
Aug  9 03:27:29.441: INFO:  Container router-http-echo ready: true, restart count 0
Aug  9 03:27:29.441: INFO: router-2-2bmfn started at 2017-08-09 02:47:17 -0400 EDT (0+1 container statuses recorded)
Aug  9 03:27:29.441: INFO:  Container router ready: false, restart count 0
Aug  9 03:27:29.441: INFO: prometheus-1552260379-s1vt3 started at 2017-08-09 03:17:43 -0400 EDT (0+2 container statuses recorded)
Aug  9 03:27:29.441: INFO:  Container oauth-proxy ready: true, restart count 0
Aug  9 03:27:29.441: INFO:  Container prometheus ready: true, restart count 0
Aug  9 03:27:29.441: INFO: docker-registry-2-73nxj started at 2017-08-09 02:47:18 -0400 EDT (0+1 container statuses recorded)
Aug  9 03:27:29.441: INFO:  Container registry ready: true, restart count 0
Aug  9 03:27:29.441: INFO: execpod started at 2017-08-09 03:21:16 -0400 EDT (0+1 container statuses recorded)
Aug  9 03:27:29.441: INFO:  Container hostexec ready: true, restart count 0
Aug  9 03:27:29.459: INFO: 
Latency metrics for node ip-172-18-3-54.ec2.internal
Aug  9 03:27:29.459: INFO: {Operation:stop_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.271468s}
Aug  9 03:27:29.459: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:10.369484s}
STEP: Dumping a list of prepulled images on each node...
Aug  9 03:27:29.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "extended-test-router-headers-6fds1-bdrgw" for this suite.
Aug  9 03:27:53.575: INFO: namespace: extended-test-router-headers-6fds1-bdrgw, resource: bindings, ignored listing per whitelist
Aug  9 03:27:53.575: INFO: namespace extended-test-router-headers-6fds1-bdrgw deletion completed in 24.111237653s

Seen in https://ci.openshift.redhat.com/jenkins/job/test_pull_request_origin_extended_conformance_install_update/3746/

openshift-bot commented 6 years ago

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-bot commented 6 years ago

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten /remove-lifecycle stale

openshift-bot commented 6 years ago

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen. Mark the issue as fresh by commenting /remove-lifecycle rotten. Exclude this issue from closing again by commenting /lifecycle frozen.

/close

sdodson commented 6 years ago

https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/openshift_openshift-ansible/8227/test_pull_request_openshift_ansible_extended_conformance_gce_39/249/

sdodson commented 6 years ago

https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/openshift_openshift-ansible/8307/test_pull_request_openshift_ansible_extended_conformance_gce_39/252/

openshift-bot commented 6 years ago

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen. Mark the issue as fresh by commenting /remove-lifecycle rotten. Exclude this issue from closing again by commenting /lifecycle frozen.

/close