k3s-io / k3s

Lightweight Kubernetes
https://k3s.io
Apache License 2.0

Investigate / resolve intermittent e2e test failures #579

Closed: erikwilson closed this issue 5 years ago

erikwilson commented 5 years ago

[Fail] [sig-api-machinery] Watchers [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/watch.go:417

[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jun 26 21:12:17.063: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8080,SelfLink:/api/v1/namespaces/watch-8080/configmaps/e2e-watch-test-configmap-a,UID:13a444f3-9857-11e9-b4db-0242ac110008,ResourceVersion:4508,Generation:0,CreationTimestamp:2019-06-26 21:12:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 26 21:12:17.065: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8080,SelfLink:/api/v1/namespaces/watch-8080/configmaps/e2e-watch-test-configmap-a,UID:13a444f3-9857-11e9-b4db-0242ac110008,ResourceVersion:4508,Generation:0,CreationTimestamp:2019-06-26 21:12:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jun 26 21:12:27.068: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8080,SelfLink:/api/v1/namespaces/watch-8080/configmaps/e2e-watch-test-configmap-a,UID:13a444f3-9857-11e9-b4db-0242ac110008,ResourceVersion:4589,Generation:0,CreationTimestamp:2019-06-26 21:12:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jun 26 21:12:27.068: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8080,SelfLink:/api/v1/namespaces/watch-8080/configmaps/e2e-watch-test-configmap-a,UID:13a444f3-9857-11e9-b4db-0242ac110008,ResourceVersion:4589,Generation:0,CreationTimestamp:2019-06-26 21:12:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jun 26 21:12:37.072: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8080,SelfLink:/api/v1/namespaces/watch-8080/configmaps/e2e-watch-test-configmap-a,UID:13a444f3-9857-11e9-b4db-0242ac110008,ResourceVersion:4715,Generation:0,CreationTimestamp:2019-06-26 21:12:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 26 21:12:37.072: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8080,SelfLink:/api/v1/namespaces/watch-8080/configmaps/e2e-watch-test-configmap-a,UID:13a444f3-9857-11e9-b4db-0242ac110008,ResourceVersion:4715,Generation:0,CreationTimestamp:2019-06-26 21:12:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jun 26 21:12:47.074: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8080,SelfLink:/api/v1/namespaces/watch-8080/configmaps/e2e-watch-test-configmap-a,UID:13a444f3-9857-11e9-b4db-0242ac110008,ResourceVersion:4829,Generation:0,CreationTimestamp:2019-06-26 21:12:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 26 21:12:47.074: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8080,SelfLink:/api/v1/namespaces/watch-8080/configmaps/e2e-watch-test-configmap-a,UID:13a444f3-9857-11e9-b4db-0242ac110008,ResourceVersion:4829,Generation:0,CreationTimestamp:2019-06-26 21:12:17 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jun 26 21:12:57.079: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8080,SelfLink:/api/v1/namespaces/watch-8080/configmaps/e2e-watch-test-configmap-b,UID:2b7de038-9857-11e9-b4db-0242ac110008,ResourceVersion:4993,Generation:0,CreationTimestamp:2019-06-26 21:12:57 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 26 21:12:57.079: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8080,SelfLink:/api/v1/namespaces/watch-8080/configmaps/e2e-watch-test-configmap-b,UID:2b7de038-9857-11e9-b4db-0242ac110008,ResourceVersion:4993,Generation:0,CreationTimestamp:2019-06-26 21:12:57 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 26 21:13:06.147: INFO: Watch closed unexpectedly
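
For reference, the watch this test exercises can be approximated with a minimal client-go program (1.14-era API; the kubeconfig path is k3s's default location, and the namespace and label are taken from the log above). The failure mode is the result channel closing before the final expected notification arrives:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative: k3s writes its kubeconfig here by default.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/rancher/k3s/k3s.yaml")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// One label-selector watch on configmaps, as in the conformance test.
	w, err := client.CoreV1().ConfigMaps("watch-8080").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	for event := range w.ResultChan() {
		fmt.Printf("Got : %v\n", event.Type)
	}
	// Reaching here before the DELETED event is observed corresponds to
	// the "Watch closed unexpectedly" failure above.
	fmt.Println("watch channel closed")
}
```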

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:650

[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-9407
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9407
Jun 19 22:18:42.872: INFO: Found 0 stateful pods, waiting for 1
Jun 19 22:18:52.878: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jun 19 22:18:52.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-712913648 exec --namespace=statefulset-9407 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 19 22:18:53.045: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Jun 19 22:18:53.045: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 19 22:18:53.045: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jun 19 22:18:53.052: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jun 19 22:19:03.061: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 19 22:19:03.061: INFO: Waiting for statefulset status.replicas updated to 0
Jun 19 22:19:03.076: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999736s
Jun 19 22:19:04.085: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.989326639s
Jun 19 22:19:05.101: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.98030217s
Jun 19 22:19:06.110: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.964161916s
Jun 19 22:19:07.123: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.954861748s
Jun 19 22:19:08.131: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.942399626s
Jun 19 22:19:09.142: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.93372538s
Jun 19 22:19:10.151: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.923417475s
Jun 19 22:19:11.163: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.913750845s
Jun 19 22:19:12.175: INFO: Verifying statefulset ss doesn't scale past 1 for another 902.48356ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9407
Jun 19 22:19:13.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-712913648 exec --namespace=statefulset-9407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 19 22:19:13.373: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Jun 19 22:19:13.373: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 19 22:19:13.373: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jun 19 22:19:13.384: INFO: Found 1 stateful pods, waiting for 3
Jun 19 22:19:23.394: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jun 19 22:19:23.394: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jun 19 22:19:23.394: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jun 19 22:19:23.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-712913648 exec --namespace=statefulset-9407 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 19 22:19:23.679: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Jun 19 22:19:23.679: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 19 22:19:23.679: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jun 19 22:19:23.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-712913648 exec --namespace=statefulset-9407 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 19 22:19:23.920: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Jun 19 22:19:23.920: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 19 22:19:23.920: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jun 19 22:19:23.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-712913648 exec --namespace=statefulset-9407 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jun 19 22:19:24.162: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Jun 19 22:19:24.162: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jun 19 22:19:24.162: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jun 19 22:19:24.162: INFO: Waiting for statefulset status.replicas updated to 0
Jun 19 22:19:24.166: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jun 19 22:19:34.179: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jun 19 22:19:34.179: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jun 19 22:19:34.179: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jun 19 22:19:34.192: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999781s
Jun 19 22:19:35.203: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.9901071s
Jun 19 22:19:36.214: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.978995551s
Jun 19 22:19:37.227: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.968036619s
Jun 19 22:19:38.237: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.955319392s
Jun 19 22:19:39.247: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.94480127s
Jun 19 22:19:40.260: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.934478177s
Jun 19 22:19:41.263: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.921796528s
Jun 19 22:19:42.266: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.918869232s
Jun 19 22:19:43.269: INFO: Verifying statefulset ss doesn't scale past 3 for another 916.074649ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-9407
Jun 19 22:19:44.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-712913648 exec --namespace=statefulset-9407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 19 22:19:44.492: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Jun 19 22:19:44.492: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 19 22:19:44.492: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jun 19 22:19:44.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-712913648 exec --namespace=statefulset-9407 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 19 22:19:44.654: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Jun 19 22:19:44.654: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 19 22:19:44.654: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jun 19 22:19:44.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-712913648 exec --namespace=statefulset-9407 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jun 19 22:19:44.799: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Jun 19 22:19:44.799: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jun 19 22:19:44.799: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jun 19 22:19:44.799: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jun 19 22:20:14.839: INFO: Deleting all statefulset in ns statefulset-9407
Jun 19 22:20:14.844: INFO: Scaling statefulset ss to 0
Jun 19 22:20:14.852: INFO: Waiting for statefulset status.replicas updated to 0
Jun 19 22:20:14.855: INFO: Deleting statefulset ss
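
As a rough sketch of what the "doesn't scale past N" lines above are verifying (assumed names, 1.14-era client-go): the test repeatedly reads the StatefulSet while a pod is unready and fails if the controller ever creates more replicas than the halted count:

```go
package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/rancher/k3s/k3s.yaml")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	const max = 1 // scale-up must halt at 1 while ss-0 is unready
	deadline := time.Now().Add(10 * time.Second)
	for time.Now().Before(deadline) {
		ss, err := client.AppsV1().StatefulSets("statefulset-9407").Get("ss", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if ss.Status.Replicas > max {
			panic(fmt.Sprintf("statefulset ss scaled past %d", max))
		}
		fmt.Printf("statefulset ss doesn't scale past %d for another %v\n", max, time.Until(deadline))
		time.Sleep(time.Second)
	}
}
```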

[Fail] [k8s.io] InitContainer [NodeConformance] [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:169

[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
STEP: creating the pod
Jun 12 21:47:38.899: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Collecting events from namespace "init-container-3103".
STEP: Found 4 events.
Jun 12 21:47:40.651: INFO: At 2019-06-12 21:47:38 +0000 UTC - event for pod-init-b29281e9-8d5b-11e9-8e86-32c33e4dd699: {default-scheduler } Scheduled: Successfully assigned init-container-3103/pod-init-b29281e9-8d5b-11e9-8e86-32c33e4dd699 to 1e195af5b5d9
Jun 12 21:47:40.651: INFO: At 2019-06-12 21:47:39 +0000 UTC - event for pod-init-b29281e9-8d5b-11e9-8e86-32c33e4dd699: {kubelet 1e195af5b5d9} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
Jun 12 21:47:40.651: INFO: At 2019-06-12 21:47:39 +0000 UTC - event for pod-init-b29281e9-8d5b-11e9-8e86-32c33e4dd699: {kubelet 1e195af5b5d9} Created: Created container init1
Jun 12 21:47:40.651: INFO: At 2019-06-12 21:47:39 +0000 UTC - event for pod-init-b29281e9-8d5b-11e9-8e86-32c33e4dd699: {kubelet 1e195af5b5d9} Started: Started container init1
Jun 12 21:47:40.672: INFO: POD                                            NODE          PHASE    GRACE  CONDITIONS
Jun 12 21:47:40.672: INFO: sonobuoy                                       bf0b0ebe1062  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:44:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:45:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:45:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:44:55 +0000 UTC  }]
Jun 12 21:47:40.672: INFO: sonobuoy-e2e-job-30429cb045914690              1e195af5b5d9  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:45:00 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:45:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:45:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:44:59 +0000 UTC  }]
Jun 12 21:47:40.672: INFO: pod-init-b29281e9-8d5b-11e9-8e86-32c33e4dd699  1e195af5b5d9  Pending         [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:47:39 +0000 UTC ContainersNotInitialized containers with incomplete status: [init1 init2]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:47:39 +0000 UTC ContainersNotReady containers with unready status: [run1]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:47:39 +0000 UTC ContainersNotReady containers with unready status: [run1]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:47:38 +0000 UTC  }]
Jun 12 21:47:40.672: INFO: coredns-695688789-swsmn                        bf0b0ebe1062  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:44:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:44:54 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:44:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:44:50 +0000 UTC  }]
Jun 12 21:47:40.672: INFO: 
Jun 12 21:47:40.681: INFO: 
Logging node info for node 1e195af5b5d9
Jun 12 21:47:40.683: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:1e195af5b5d9,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/1e195af5b5d9,UID:473671a5-8d5b-11e9-b518-0242ac110007,ResourceVersion:1853,Generation:0,CreationTimestamp:2019-06-12 21:44:38 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: 1e195af5b5d9,kubernetes.io/os: linux,node-role.kubernetes.io/master: true,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"2a:21:53:aa:d5:77"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 172.17.0.7,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[wrangler.cattle.io/node],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.42.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{1040830304256 0} {<nil>} 1016435844Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{32892014592 0} {<nil>} 32121108Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{988788788268 0} {<nil>} 988788788268 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{32892014592 0} {<nil>} 32121108Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-06-12 21:47:39 +0000 UTC 2019-06-12 21:44:38 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-06-12 21:47:39 +0000 UTC 2019-06-12 21:44:38 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-06-12 21:47:39 +0000 UTC 2019-06-12 21:44:38 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-06-12 21:47:39 +0000 UTC 2019-06-12 21:44:38 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.7} {Hostname 1e195af5b5d9}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2E240B-5DB8-585A-5AA5-1E629F8F8CAD,BootID:6133f4a1-eaf1-400b-8c3a-0d8b362fdca4,KernelVersion:4.15.0-1032-aws,OSImage:Unknown,ContainerRuntimeVersion:containerd://1.2.5+unknown,KubeletVersion:v1.14.3-k3s.1,KubeProxyVersion:v1.14.3-k3s.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-containers/conformance@sha256:4defea83c5f6ee76cd4551d8b9396aad56a775b4002a5cd07b4922a7f0769dd5 gcr.io/google-containers/conformance:v1.14.3] 194995597} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} {[gcr.io/heptio-images/sonobuoy@sha256:e347fd5501348cedcb2a5733ce9e226baecad4681607501aae278b27e7fa8550 gcr.io/heptio-images/sonobuoy:v0.14.2] 33562031} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 16222606} {[quay.io/coreos/etcd:v3.3.10] 13133210} 
{[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 317164}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Jun 12 21:47:40.683: INFO: 
Logging kubelet events for node 1e195af5b5d9
Jun 12 21:47:40.685: INFO: 
Logging pods the kubelet thinks is on node 1e195af5b5d9
Jun 12 21:47:40.687: INFO: sonobuoy-e2e-job-30429cb045914690 started at 2019-06-12 21:45:00 +0000 UTC (0+2 container statuses recorded)
Jun 12 21:47:40.687: INFO:  Container e2e ready: true, restart count 0
Jun 12 21:47:40.687: INFO:  Container sonobuoy-worker ready: true, restart count 0
Jun 12 21:47:40.687: INFO: pod-init-b29281e9-8d5b-11e9-8e86-32c33e4dd699 started at 2019-06-12 21:47:39 +0000 UTC (2+1 container statuses recorded)
Jun 12 21:47:40.687: INFO:  Init container init1 ready: false, restart count 0
Jun 12 21:47:40.687: INFO:  Init container init2 ready: false, restart count 0
Jun 12 21:47:40.687: INFO:  Container run1 ready: false, restart count 0
W0612 21:47:40.689409      38 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 12 21:47:41.009: INFO: 
Latency metrics for node 1e195af5b5d9
Jun 12 21:47:41.009: INFO: 
Logging node info for node bf0b0ebe1062
Jun 12 21:47:41.011: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:bf0b0ebe1062,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/bf0b0ebe1062,UID:48b71b11-8d5b-11e9-b518-0242ac110007,ResourceVersion:1707,Generation:0,CreationTimestamp:2019-06-12 21:44:41 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: bf0b0ebe1062,kubernetes.io/os: linux,node-role.kubernetes.io/worker: true,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"ea:0e:63:93:a4:fe"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 172.17.0.9,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[wrangler.cattle.io/node],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.42.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{1040830304256 0} {<nil>} 1016435844Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{32892014592 0} {<nil>} 32121108Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{988788788268 0} {<nil>} 988788788268 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{32892014592 0} {<nil>} 32121108Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-06-12 21:47:11 +0000 UTC 2019-06-12 21:44:41 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-06-12 21:47:11 +0000 UTC 2019-06-12 21:44:41 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-06-12 21:47:11 +0000 UTC 2019-06-12 21:44:41 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-06-12 21:47:11 +0000 UTC 2019-06-12 21:44:41 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.9} {Hostname bf0b0ebe1062}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2E240B-5DB8-585A-5AA5-1E629F8F8CAD,BootID:6133f4a1-eaf1-400b-8c3a-0d8b362fdca4,KernelVersion:4.15.0-1032-aws,OSImage:Unknown,ContainerRuntimeVersion:containerd://1.2.5+unknown,KubeletVersion:v1.14.3-k3s.1,KubeProxyVersion:v1.14.3-k3s.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/heptio-images/sonobuoy@sha256:e347fd5501348cedcb2a5733ce9e226baecad4681607501aae278b27e7fa8550 gcr.io/heptio-images/sonobuoy:v0.14.2] 33562031} {[docker.io/coredns/coredns@sha256:e030773c7fee285435ed7fc7623532ee54c4c1c4911fb24d95cd0170a8a768bc docker.io/coredns/coredns:1.3.0] 12286817} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 
2785431} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 317164}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Jun 12 21:47:41.011: INFO: 
Logging kubelet events for node bf0b0ebe1062
Jun 12 21:47:41.013: INFO: 
Logging pods the kubelet thinks is on node bf0b0ebe1062
Jun 12 21:47:41.016: INFO: coredns-695688789-swsmn started at 2019-06-12 21:44:51 +0000 UTC (0+1 container statuses recorded)
Jun 12 21:47:41.016: INFO:  Container coredns ready: true, restart count 0
Jun 12 21:47:41.016: INFO: sonobuoy started at 2019-06-12 21:44:56 +0000 UTC (0+1 container statuses recorded)
Jun 12 21:47:41.016: INFO:  Container kube-sonobuoy ready: true, restart count 0
W0612 21:47:41.017937      38 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 12 21:47:41.034: INFO: 
Latency metrics for node bf0b0ebe1062
Jun 12 21:47:41.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3103" for this suite.
Jun 12 21:48:03.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 12 21:48:03.124: INFO: namespace init-container-3103 deletion completed in 22.087612497s

• Failure [24.634 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
  should invoke init containers on a RestartAlways pod [Conformance] [It]
  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692

  Expected
      <*errors.errorString | 0xc0002b0560>: {
          s: "watch closed before UntilWithoutRetry timeout",
      }
  to be nil

  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:169
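
The error string is produced by client-go's watch helpers: `UntilWithoutRetry` returns it whenever the underlying watch channel closes before the supplied condition matches or the timeout expires, which is why an apiserver or tunnel dropping the watch surfaces here. A minimal sketch of the call pattern (the timeout and package name are illustrative):

```go
package e2esketch

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/watch"
	watchtools "k8s.io/client-go/tools/watch"
)

// waitForEvent mirrors how the e2e test waits on pod events. If the
// apiserver (or an intermediate proxy/tunnel) closes the watch before the
// condition is met, UntilWithoutRetry returns the exact error seen above:
// "watch closed before UntilWithoutRetry timeout".
func waitForEvent(w watch.Interface, cond watchtools.ConditionFunc) (*watch.Event, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	return watchtools.UntilWithoutRetry(ctx, w, cond)
}
```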

[Fail] [k8s.io] InitContainer [NodeConformance] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.14.1-beta.0.44+b7394102d6ef77/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:291

[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.14.1-beta.0.44+b7394102d6ef77/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
STEP: creating the pod
Jun 12 21:26:39.052: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.14.1-beta.0.44+b7394102d6ef77/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Collecting events from namespace "init-container-2009".
STEP: Found 7 events.
Jun 12 21:27:21.320: INFO: At 2019-06-12 21:26:39 +0000 UTC - event for pod-init-c3a503be-8d58-11e9-9b1b-7a6b0b73a232: {default-scheduler } Scheduled: Successfully assigned init-container-2009/pod-init-c3a503be-8d58-11e9-9b1b-7a6b0b73a232 to 3608bea39038
Jun 12 21:27:21.321: INFO: At 2019-06-12 21:26:40 +0000 UTC - event for pod-init-c3a503be-8d58-11e9-9b1b-7a6b0b73a232: {kubelet 3608bea39038} Pulling: Pulling image "docker.io/library/busybox:1.29"
Jun 12 21:27:21.321: INFO: At 2019-06-12 21:26:41 +0000 UTC - event for pod-init-c3a503be-8d58-11e9-9b1b-7a6b0b73a232: {kubelet 3608bea39038} Pulled: Successfully pulled image "docker.io/library/busybox:1.29"
Jun 12 21:27:21.321: INFO: At 2019-06-12 21:26:41 +0000 UTC - event for pod-init-c3a503be-8d58-11e9-9b1b-7a6b0b73a232: {kubelet 3608bea39038} Created: Created container init1
Jun 12 21:27:21.321: INFO: At 2019-06-12 21:26:42 +0000 UTC - event for pod-init-c3a503be-8d58-11e9-9b1b-7a6b0b73a232: {kubelet 3608bea39038} Started: Started container init1
Jun 12 21:27:21.321: INFO: At 2019-06-12 21:26:43 +0000 UTC - event for pod-init-c3a503be-8d58-11e9-9b1b-7a6b0b73a232: {kubelet 3608bea39038} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
Jun 12 21:27:21.321: INFO: At 2019-06-12 21:26:45 +0000 UTC - event for pod-init-c3a503be-8d58-11e9-9b1b-7a6b0b73a232: {kubelet 3608bea39038} BackOff: Back-off restarting failed container
Jun 12 21:27:21.347: INFO: POD                                                     NODE          PHASE    GRACE  CONDITIONS
Jun 12 21:27:21.347: INFO: pod-configmaps-cd6b5988-8d58-11e9-a3b1-7a6b0b73a232     3608bea39038  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:26:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:26:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:26:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:26:55 +0000 UTC  }]
Jun 12 21:27:21.347: INFO: liveness-http                                           3608bea39038  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:26:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:26:23 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:26:23 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:26:22 +0000 UTC  }]
Jun 12 21:27:21.347: INFO: sonobuoy                                                3608bea39038  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:25:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:25:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:25:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:25:45 +0000 UTC  }]
Jun 12 21:27:21.347: INFO: sonobuoy-e2e-job-8e90fddf4cb94a11                       c3779447ed34  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:25:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:26:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:26:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:25:51 +0000 UTC  }]
Jun 12 21:27:21.347: INFO: pod-init-c3a503be-8d58-11e9-9b1b-7a6b0b73a232           3608bea39038  Pending         [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:26:39 +0000 UTC ContainersNotInitialized containers with incomplete status: [init1 init2]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:26:39 +0000 UTC ContainersNotReady containers with unready status: [run1]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:26:39 +0000 UTC ContainersNotReady containers with unready status: [run1]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:26:39 +0000 UTC  }]
Jun 12 21:27:21.347: INFO: coredns-695688789-hp2h7                                 c3779447ed34  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:25:39 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:25:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:25:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:25:39 +0000 UTC  }]
Jun 12 21:27:21.347: INFO: pod-submit-remove-d928bfe1-8d58-11e9-9106-7a6b0b73a232  c3779447ed34  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:27:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:27:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:27:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:27:15 +0000 UTC  }]
Jun 12 21:27:21.347: INFO: labelsupdated71ef68e-8d58-11e9-985d-7a6b0b73a232        c3779447ed34  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:27:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:27:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:27:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-12 21:27:11 +0000 UTC  }]
Jun 12 21:27:21.347: INFO: 
Jun 12 21:27:21.359: INFO: 
Logging node info for node 3608bea39038
Jun 12 21:27:21.362: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:3608bea39038,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/3608bea39038,UID:9a7e1baf-8d58-11e9-9adc-0242ac110008,ResourceVersion:1040,Generation:0,CreationTimestamp:2019-06-12 21:25:30 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: 3608bea39038,kubernetes.io/os: linux,node-role.kubernetes.io/worker: true,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"96:16:28:49:09:d1"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 172.17.0.10,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[wrangler.cattle.io/node],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.42.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{1040830304256 0} {<nil>} 1016435844Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{32892014592 0} {<nil>} 32121108Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{988788788268 0} {<nil>} 988788788268 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{32892014592 0} {<nil>} 32121108Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-06-12 21:27:00 +0000 UTC 2019-06-12 21:25:30 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-06-12 21:27:00 +0000 UTC 2019-06-12 21:25:30 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-06-12 21:27:00 +0000 UTC 2019-06-12 21:25:30 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-06-12 21:27:00 +0000 UTC 2019-06-12 21:25:30 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.10} {Hostname 3608bea39038}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2E240B-5DB8-585A-5AA5-1E629F8F8CAD,BootID:6133f4a1-eaf1-400b-8c3a-0d8b362fdca4,KernelVersion:4.15.0-1032-aws,OSImage:Unknown,ContainerRuntimeVersion:containerd://1.2.5+unknown,KubeletVersion:v1.14.1-k3s.4,KubeProxyVersion:v1.14.1-k3s.4,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/heptio-images/sonobuoy@sha256:e347fd5501348cedcb2a5733ce9e226baecad4681607501aae278b27e7fa8550 gcr.io/heptio-images/sonobuoy:v0.14.2] 33562031} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} 
{[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 317164}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Jun 12 21:27:21.362: INFO: 
Logging kubelet events for node 3608bea39038
Jun 12 21:27:21.366: INFO: 
Logging pods the kubelet thinks is on node 3608bea39038
Jun 12 21:27:21.370: INFO: sonobuoy started at 2019-06-12 21:25:45 +0000 UTC (0+1 container statuses recorded)
Jun 12 21:27:21.370: INFO:  Container kube-sonobuoy ready: true, restart count 0
Jun 12 21:27:21.370: INFO: liveness-http started at 2019-06-12 21:26:22 +0000 UTC (0+1 container statuses recorded)
Jun 12 21:27:21.370: INFO:  Container liveness ready: true, restart count 0
Jun 12 21:27:21.370: INFO: pod-configmaps-cd6b5988-8d58-11e9-a3b1-7a6b0b73a232 started at 2019-06-12 21:26:55 +0000 UTC (0+1 container statuses recorded)
Jun 12 21:27:21.370: INFO:  Container configmap-volume-test ready: true, restart count 0
Jun 12 21:27:21.370: INFO: pod-init-c3a503be-8d58-11e9-9b1b-7a6b0b73a232 started at 2019-06-12 21:26:39 +0000 UTC (2+1 container statuses recorded)
Jun 12 21:27:21.370: INFO:  Init container init1 ready: false, restart count 2
Jun 12 21:27:21.370: INFO:  Init container init2 ready: false, restart count 0
Jun 12 21:27:21.370: INFO:  Container run1 ready: false, restart count 0
W0612 21:27:21.374424      22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 12 21:27:21.415: INFO: 
Latency metrics for node 3608bea39038
Jun 12 21:27:21.415: INFO: 
Logging node info for node c3779447ed34
Jun 12 21:27:21.417: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:c3779447ed34,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/c3779447ed34,UID:98fa0666-8d58-11e9-9adc-0242ac110008,ResourceVersion:1020,Generation:0,CreationTimestamp:2019-06-12 21:25:27 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: c3779447ed34,kubernetes.io/os: linux,node-role.kubernetes.io/master: true,},Annotations:map[string]string{flannel.alpha.coreos.com/backend-data: {"VtepMAC":"de:eb:5d:30:4b:0e"},flannel.alpha.coreos.com/backend-type: vxlan,flannel.alpha.coreos.com/kube-subnet-manager: true,flannel.alpha.coreos.com/public-ip: 172.17.0.8,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[wrangler.cattle.io/node],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.42.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{1040830304256 0} {<nil>} 1016435844Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{32892014592 0} {<nil>} 32121108Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{988788788268 0} {<nil>} 988788788268 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{32892014592 0} {<nil>} 32121108Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2019-06-12 21:26:57 +0000 UTC 2019-06-12 21:25:27 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-06-12 21:26:57 +0000 UTC 2019-06-12 21:25:27 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-06-12 21:26:57 +0000 UTC 2019-06-12 21:25:27 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-06-12 21:26:57 +0000 UTC 2019-06-12 21:25:27 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.8} {Hostname c3779447ed34}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:EC2E240B-5DB8-585A-5AA5-1E629F8F8CAD,BootID:6133f4a1-eaf1-400b-8c3a-0d8b362fdca4,KernelVersion:4.15.0-1032-aws,OSImage:Unknown,ContainerRuntimeVersion:containerd://1.2.5+unknown,KubeletVersion:v1.14.1-k3s.4,KubeProxyVersion:v1.14.1-k3s.4,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-containers/conformance@sha256:b08804b6c6c6d69c05776b301e940e8539f0a7395943b02ddb1863f1a2541b2c gcr.io/google-containers/conformance:v1.14.1] 194970529} {[gcr.io/heptio-images/sonobuoy@sha256:e347fd5501348cedcb2a5733ce9e226baecad4681607501aae278b27e7fa8550 gcr.io/heptio-images/sonobuoy:v0.14.2] 33562031} {[docker.io/coredns/coredns@sha256:e030773c7fee285435ed7fc7623532ee54c4c1c4911fb24d95cd0170a8a768bc docker.io/coredns/coredns:1.3.0] 12286817} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} 
{[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 317164}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Jun 12 21:27:21.417: INFO: 
Logging kubelet events for node c3779447ed34
Jun 12 21:27:21.421: INFO: 
Logging pods the kubelet thinks is on node c3779447ed34
Jun 12 21:27:21.424: INFO: coredns-695688789-hp2h7 started at 2019-06-12 21:25:39 +0000 UTC (0+1 container statuses recorded)
Jun 12 21:27:21.424: INFO:  Container coredns ready: true, restart count 0
Jun 12 21:27:21.424: INFO: sonobuoy-e2e-job-8e90fddf4cb94a11 started at 2019-06-12 21:25:51 +0000 UTC (0+2 container statuses recorded)
Jun 12 21:27:21.424: INFO:  Container e2e ready: true, restart count 0
Jun 12 21:27:21.424: INFO:  Container sonobuoy-worker ready: true, restart count 0
Jun 12 21:27:21.424: INFO: labelsupdated71ef68e-8d58-11e9-985d-7a6b0b73a232 started at 2019-06-12 21:27:11 +0000 UTC (0+1 container statuses recorded)
Jun 12 21:27:21.424: INFO:  Container client-container ready: true, restart count 0
Jun 12 21:27:21.424: INFO: pod-submit-remove-d928bfe1-8d58-11e9-9106-7a6b0b73a232 started at 2019-06-12 21:27:15 +0000 UTC (0+1 container statuses recorded)
Jun 12 21:27:21.424: INFO:  Container nginx ready: false, restart count 0
W0612 21:27:21.427303      22 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jun 12 21:27:21.985: INFO: 
Latency metrics for node c3779447ed34
Jun 12 21:27:21.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2009" for this suite.
Jun 12 21:27:43.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 12 21:27:44.095: INFO: namespace init-container-2009 deletion completed in 22.107123681s

• Failure [65.477 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.14.1-beta.0.44+b7394102d6ef77/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
  should not start app containers if init containers fail on a RestartAlways pod [Conformance] [It]
  /workspace/anago-v1.14.1-beta.0.44+b7394102d6ef77/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692

  Expected
      <*errors.errorString | 0xc0002dc550>: {
          s: "watch closed before UntilWithoutRetry timeout",
      }
  to be nil

  /workspace/anago-v1.14.1-beta.0.44+b7394102d6ef77/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:291
erikwilson commented 5 years ago

Fixed with #615

galal-hussein commented 5 years ago

k3s version: v0.7.0-rc7

I was able to run the Sonobuoy e2e conformance tests without any failures on a k3s v0.7.0-rc7 cluster:

SSSSSSSSSSSSSSSSSJul 15 13:32:45.704: INFO: Running AfterSuite actions on all nodes
Jul 15 13:32:45.704: INFO: Running AfterSuite actions on node 1
Jul 15 13:32:45.704: INFO: Skipping dumping logs from cluster

Ran 203 of 3585 Specs in 5189.414 seconds
SUCCESS! -- 203 Passed | 0 Failed | 0 Pending | 3382 Skipped PASS

Ginkgo ran 1 suite in 1h26m30.527775107s
Test Suite Passed