isovalent / olm-for-cilium

OpenShift Operator Lifecycle Manager for Cilium

Add Cilium v1.11.18 #11

Closed: qmonnet closed this 1 year ago

qmonnet commented 1 year ago

Generated with scripts/add-release.sh $RELEASE, following the steps at https://github.com/isovalent/cilium-ee-olm/issues/118.
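
For reference, the invocation for this release looks roughly like the following. This is a minimal sketch: it assumes `add-release.sh` takes the release tag as its only argument, as suggested by the command above; the script's actual options may differ.

```sh
# Hypothetical sketch of the command referenced above.
RELEASE=v1.11.18
./scripts/add-release.sh "$RELEASE"
```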

Cc @michi-covalent

qmonnet commented 1 year ago

The expected failures are failing, and everything else is passing. All good once more.

Failing tests (see the grep sketch after this list for one way to pull them out of results.txt):

[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]
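
For reference, these names can be extracted from the attached results.txt with a simple grep, assuming the usual openshift-tests output where each failure is reported on a line starting with `failed:`. This is a sketch, not part of the release tooling:

```sh
# Sketch: print the names of failed conformance tests from results.txt.
# Assumes failure lines look like:
#   failed: (44.6s) 2023-06-19T19:02:50 "[sig-network] ..."
grep '^failed:' results.txt | sed 's/^failed: ([^)]*) [^ ]* //'
```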
results.txt ``` started: 0/1/67 "[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/2/67 "[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/3/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/4/67 "[sig-network] Services should test the lifecycle of an Endpoint [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/5/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/6/67 "[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/7/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/8/67 "[sig-network] DNS should provide DNS for pods for Hostname [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/9/67 "[sig-network] DNS should provide DNS for pods for Subdomain [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/10/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2.3s) 2023-06-19T19:02:06 "[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/11/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2.5s) 2023-06-19T19:02:06 "[sig-network] Services should test the lifecycle of an Endpoint [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/12/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (29.4s) 2023-06-19T19:02:33 "[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/13/67 "[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (29.9s) 2023-06-19T19:02:33 
"[sig-network] DNS should provide DNS for pods for Hostname [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/14/67 "[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (31.1s) 2023-06-19T19:02:34 "[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/15/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (36s) 2023-06-19T19:02:39 "[sig-network] DNS should provide DNS for pods for Subdomain [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/16/67 "[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (10.5s) 2023-06-19T19:02:43 "[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/17/67 "[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (10.7s) 2023-06-19T19:02:50 "[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/18/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" Jun 19 19:02:06.460: INFO: Enabling in-tree volume drivers [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/test.go:58 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] set up framework | framework.go:178 STEP: Creating a kubernetes client 06/19/23 19:02:07.389 STEP: Building a namespace api object, basename network-policy 06/19/23 19:02:07.391 Jun 19 19:02:07.445: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace 06/19/23 19:02:07.625 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/19/23 19:02:07.63 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:72 [BeforeEach] NetworkPolicy between server and client k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:78 STEP: Creating a simple server that serves on port 80 and 81. 
06/19/23 19:02:07.636 STEP: Creating a server pod server in namespace e2e-network-policy-6770 06/19/23 19:02:07.636 W0619 19:02:07.662404 546 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "server-container-80", "server-container-81" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "server-container-80", "server-container-81" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "server-container-80", "server-container-81" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "server-container-80", "server-container-81" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:02:07.662: INFO: Created pod server-x5pkq STEP: Creating a service svc-server for pod server in namespace e2e-network-policy-6770 06/19/23 19:02:07.662 Jun 19 19:02:07.695: INFO: Created service svc-server STEP: Waiting for pod ready 06/19/23 19:02:07.695 Jun 19 19:02:07.695: INFO: Waiting up to 5m0s for pod "server-x5pkq" in namespace "e2e-network-policy-6770" to be "running and ready" Jun 19 19:02:07.710: INFO: Pod "server-x5pkq": Phase="Pending", Reason="", readiness=false. Elapsed: 15.162519ms Jun 19 19:02:07.710: INFO: The phase of Pod server-x5pkq is Pending, waiting for it to be Running (with Ready = true) Jun 19 19:02:09.717: INFO: Pod "server-x5pkq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022355135s Jun 19 19:02:09.717: INFO: The phase of Pod server-x5pkq is Pending, waiting for it to be Running (with Ready = true) Jun 19 19:02:11.719: INFO: Pod "server-x5pkq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024456982s Jun 19 19:02:11.719: INFO: The phase of Pod server-x5pkq is Pending, waiting for it to be Running (with Ready = true) Jun 19 19:02:13.717: INFO: Pod "server-x5pkq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022421727s Jun 19 19:02:13.717: INFO: The phase of Pod server-x5pkq is Pending, waiting for it to be Running (with Ready = true) Jun 19 19:02:15.725: INFO: Pod "server-x5pkq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.030163398s Jun 19 19:02:15.725: INFO: The phase of Pod server-x5pkq is Pending, waiting for it to be Running (with Ready = true) Jun 19 19:02:17.723: INFO: Pod "server-x5pkq": Phase="Running", Reason="", readiness=true. Elapsed: 10.027669845s Jun 19 19:02:17.723: INFO: The phase of Pod server-x5pkq is Running (Ready = true) Jun 19 19:02:17.723: INFO: Pod "server-x5pkq" satisfied condition "running and ready" STEP: Testing pods can connect to both ports when no policy is present. 06/19/23 19:02:17.723 STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. 06/19/23 19:02:17.723 W0619 19:02:17.732918 546 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:02:17.733: INFO: Waiting for client-can-connect-80-wbkwb to complete. 
Jun 19 19:02:17.733: INFO: Waiting up to 3m0s for pod "client-can-connect-80-wbkwb" in namespace "e2e-network-policy-6770" to be "completed" Jun 19 19:02:17.752: INFO: Pod "client-can-connect-80-wbkwb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.44054ms Jun 19 19:02:19.760: INFO: Pod "client-can-connect-80-wbkwb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027962976s Jun 19 19:02:22.551: INFO: Pod "client-can-connect-80-wbkwb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.818510588s Jun 19 19:02:24.949: INFO: Pod "client-can-connect-80-wbkwb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.216508225s Jun 19 19:02:24.949: INFO: Pod "client-can-connect-80-wbkwb" satisfied condition "completed" Jun 19 19:02:24.949: INFO: Waiting for client-can-connect-80-wbkwb to complete. Jun 19 19:02:24.949: INFO: Waiting up to 5m0s for pod "client-can-connect-80-wbkwb" in namespace "e2e-network-policy-6770" to be "Succeeded or Failed" Jun 19 19:02:25.394: INFO: Pod "client-can-connect-80-wbkwb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 445.016213ms STEP: Saw pod success 06/19/23 19:02:25.394 Jun 19 19:02:25.394: INFO: Pod "client-can-connect-80-wbkwb" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-can-connect-80-wbkwb 06/19/23 19:02:25.394 STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. 06/19/23 19:02:30.374 W0619 19:02:30.415999 546 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:02:30.416: INFO: Waiting for client-can-connect-81-f96cs to complete. Jun 19 19:02:30.416: INFO: Waiting up to 3m0s for pod "client-can-connect-81-f96cs" in namespace "e2e-network-policy-6770" to be "completed" Jun 19 19:02:30.441: INFO: Pod "client-can-connect-81-f96cs": Phase="Pending", Reason="", readiness=false. Elapsed: 25.566904ms Jun 19 19:02:32.447: INFO: Pod "client-can-connect-81-f96cs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031129343s Jun 19 19:02:34.454: INFO: Pod "client-can-connect-81-f96cs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038011053s Jun 19 19:02:36.454: INFO: Pod "client-can-connect-81-f96cs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038232827s Jun 19 19:02:36.454: INFO: Pod "client-can-connect-81-f96cs" satisfied condition "completed" Jun 19 19:02:36.454: INFO: Waiting for client-can-connect-81-f96cs to complete. Jun 19 19:02:36.454: INFO: Waiting up to 5m0s for pod "client-can-connect-81-f96cs" in namespace "e2e-network-policy-6770" to be "Succeeded or Failed" Jun 19 19:02:36.460: INFO: Pod "client-can-connect-81-f96cs": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.356523ms STEP: Saw pod success 06/19/23 19:02:36.46 Jun 19 19:02:36.461: INFO: Pod "client-can-connect-81-f96cs" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-can-connect-81-f96cs 06/19/23 19:02:36.461 [It] should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1689 STEP: getting the state of the sctp module on nodes 06/19/23 19:02:36.795 Jun 19 19:02:36.840: INFO: Executing cmd "lsmod | grep sctp" on node worker01 W0619 19:02:36.869828 546 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), privileged (container "agnhost-container" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "agnhost-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "agnhost-container" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "rootfs" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "agnhost-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "agnhost-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:02:36.869: INFO: Waiting up to 5m0s for pod "hostexec-worker01-vdmr8" in namespace "e2e-network-policy-6770" to be "running" Jun 19 19:02:36.932: INFO: Pod "hostexec-worker01-vdmr8": Phase="Pending", Reason="", readiness=false. Elapsed: 62.721874ms Jun 19 19:02:38.942: INFO: Pod "hostexec-worker01-vdmr8": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.072217194s Jun 19 19:02:38.942: INFO: Pod "hostexec-worker01-vdmr8" satisfied condition "running" Jun 19 19:02:38.942: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c lsmod | grep sctp] Namespace:e2e-network-policy-6770 PodName:hostexec-worker01-vdmr8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 19 19:02:38.943: INFO: ExecWithOptions: Clientset creation Jun 19 19:02:38.943: INFO: ExecWithOptions: execute(POST https://api.ocp1.k8s.work:6443/api/v1/namespaces/e2e-network-policy-6770/pods/hostexec-worker01-vdmr8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jun 19 19:02:39.093: INFO: exec worker01: command: lsmod | grep sctp Jun 19 19:02:39.093: INFO: exec worker01: stdout: "" Jun 19 19:02:39.093: INFO: exec worker01: stderr: "" Jun 19 19:02:39.093: INFO: exec worker01: exit code: 0 Jun 19 19:02:39.093: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1 Jun 19 19:02:39.093: INFO: the sctp module is not loaded on node: worker01 Jun 19 19:02:39.093: INFO: Executing cmd "lsmod | grep sctp" on node worker02 W0619 19:02:39.119024 546 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), privileged (container "agnhost-container" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "agnhost-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "agnhost-container" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "rootfs" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "agnhost-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "agnhost-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:02:39.119: INFO: Waiting up to 5m0s for pod "hostexec-worker02-tm29l" in namespace "e2e-network-policy-6770" to be "running" Jun 19 19:02:39.125: INFO: Pod "hostexec-worker02-tm29l": Phase="Pending", Reason="", readiness=false. Elapsed: 5.820541ms Jun 19 19:02:41.136: INFO: Pod "hostexec-worker02-tm29l": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.017102643s Jun 19 19:02:41.136: INFO: Pod "hostexec-worker02-tm29l" satisfied condition "running" Jun 19 19:02:41.136: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c lsmod | grep sctp] Namespace:e2e-network-policy-6770 PodName:hostexec-worker02-tm29l ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 19 19:02:41.138: INFO: ExecWithOptions: Clientset creation Jun 19 19:02:41.138: INFO: ExecWithOptions: execute(POST https://api.ocp1.k8s.work:6443/api/v1/namespaces/e2e-network-policy-6770/pods/hostexec-worker02-tm29l/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jun 19 19:02:41.354: INFO: exec worker02: command: lsmod | grep sctp Jun 19 19:02:41.354: INFO: exec worker02: stdout: "" Jun 19 19:02:41.354: INFO: exec worker02: stderr: "" Jun 19 19:02:41.354: INFO: exec worker02: exit code: 0 Jun 19 19:02:41.354: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1 Jun 19 19:02:41.354: INFO: the sctp module is not loaded on node: worker02 Jun 19 19:02:41.354: INFO: Executing cmd "lsmod | grep sctp" on node worker03 W0619 19:02:41.365943 546 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), privileged (container "agnhost-container" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "agnhost-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "agnhost-container" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "rootfs" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "agnhost-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "agnhost-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:02:41.366: INFO: Waiting up to 5m0s for pod "hostexec-worker03-r4cxq" in namespace "e2e-network-policy-6770" to be "running" Jun 19 19:02:41.380: INFO: Pod "hostexec-worker03-r4cxq": Phase="Pending", Reason="", readiness=false. Elapsed: 14.0826ms Jun 19 19:02:43.386: INFO: Pod "hostexec-worker03-r4cxq": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.020404864s Jun 19 19:02:43.386: INFO: Pod "hostexec-worker03-r4cxq" satisfied condition "running" Jun 19 19:02:43.386: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c lsmod | grep sctp] Namespace:e2e-network-policy-6770 PodName:hostexec-worker03-r4cxq ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 19 19:02:43.387: INFO: ExecWithOptions: Clientset creation Jun 19 19:02:43.387: INFO: ExecWithOptions: execute(POST https://api.ocp1.k8s.work:6443/api/v1/namespaces/e2e-network-policy-6770/pods/hostexec-worker03-r4cxq/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jun 19 19:02:43.517: INFO: exec worker03: command: lsmod | grep sctp Jun 19 19:02:43.517: INFO: exec worker03: stdout: "" Jun 19 19:02:43.517: INFO: exec worker03: stderr: "" Jun 19 19:02:43.517: INFO: exec worker03: exit code: 0 Jun 19 19:02:43.517: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1 Jun 19 19:02:43.517: INFO: the sctp module is not loaded on node: worker03 STEP: Deleting pod hostexec-worker03-r4cxq in namespace e2e-network-policy-6770 06/19/23 19:02:43.517 STEP: Deleting pod hostexec-worker01-vdmr8 in namespace e2e-network-policy-6770 06/19/23 19:02:43.544 STEP: Deleting pod hostexec-worker02-tm29l in namespace e2e-network-policy-6770 06/19/23 19:02:43.567 STEP: Creating a network policy for the server which allows traffic only via SCTP on port 80. 06/19/23 19:02:43.589 STEP: Testing pods cannot connect on port 80 anymore when not using SCTP as protocol. 06/19/23 19:02:43.609 STEP: Creating client pod client-a that should not be able to connect to svc-server. 06/19/23 19:02:43.609 W0619 19:02:43.625232 546 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:02:43.625: INFO: Waiting for client-a-8p5lq to complete. Jun 19 19:02:43.625: INFO: Waiting up to 5m0s for pod "client-a-8p5lq" in namespace "e2e-network-policy-6770" to be "Succeeded or Failed" Jun 19 19:02:43.642: INFO: Pod "client-a-8p5lq": Phase="Pending", Reason="", readiness=false. Elapsed: 17.142483ms Jun 19 19:02:45.650: INFO: Pod "client-a-8p5lq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024776229s Jun 19 19:02:48.886: INFO: Pod "client-a-8p5lq": Phase="Pending", Reason="", readiness=false. Elapsed: 5.261312311s Jun 19 19:02:49.654: INFO: Pod "client-a-8p5lq": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.028953489s STEP: Saw pod success 06/19/23 19:02:49.654 Jun 19 19:02:49.654: INFO: Pod "client-a-8p5lq" satisfied condition "Succeeded or Failed" Jun 19 19:02:49.672: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-6770 describe po client-a-8p5lq' Jun 19 19:02:49.909: INFO: stderr: "" Jun 19 19:02:49.909: INFO: stdout: "Name: client-a-8p5lq\nNamespace: e2e-network-policy-6770\nPriority: 0\nService Account: default\nNode: worker03/192.168.200.33\nStart Time: Mon, 19 Jun 2023 19:02:43 +0000\nLabels: pod-name=client-a\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::528\",\n \"10.128.11.110\"\n ],\n \"mac\": \"de:47:f0:65:dc:d0\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::528\",\n \"10.128.11.110\"\n ],\n \"mac\": \"de:47:f0:65:dc:d0\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Succeeded\nIP: 10.128.11.110\nIPs:\n IP: 10.128.11.110\n IP: fd00::528\nContainers:\n client:\n Container ID: cri-o://0a54ac5cf468bcd913501ad5f13feb933a45b39f9fa9839cca778c66ff639378\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: \n Host Port: \n Command:\n /bin/sh\n Args:\n -c\n for i in $(seq 1 5); do /agnhost connect 172.30.14.246:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1\n State: Terminated\n Reason: Completed\n Exit Code: 0\n Started: Mon, 19 Jun 2023 19:02:44 +0000\n Finished: Mon, 19 Jun 2023 19:02:45 +0000\n Ready: False\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-scsxl (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n kube-api-access-scsxl:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned e2e-network-policy-6770/client-a-8p5lq to worker03 by cp01\n Normal AddedInterface 5s multus Add eth0 [fd00::528/128 10.128.11.110/32] from cilium\n Normal Pulled 5s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 5s kubelet Created container client\n Normal Started 4s kubelet Started container client\n" Jun 19 19:02:49.909: INFO: Output of kubectl describe client-a-8p5lq: Name: client-a-8p5lq Namespace: e2e-network-policy-6770 Priority: 0 Service Account: default Node: worker03/192.168.200.33 Start Time: Mon, 19 Jun 2023 19:02:43 +0000 Labels: pod-name=client-a Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::528", "10.128.11.110" ], "mac": "de:47:f0:65:dc:d0", "default": true, 
"dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::528", "10.128.11.110" ], "mac": "de:47:f0:65:dc:d0", "default": true, "dns": {} }] Status: Succeeded IP: 10.128.11.110 IPs: IP: 10.128.11.110 IP: fd00::528 Containers: client: Container ID: cri-o://0a54ac5cf468bcd913501ad5f13feb933a45b39f9fa9839cca778c66ff639378 Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: Host Port: Command: /bin/sh Args: -c for i in $(seq 1 5); do /agnhost connect 172.30.14.246:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1 State: Terminated Reason: Completed Exit Code: 0 Started: Mon, 19 Jun 2023 19:02:44 +0000 Finished: Mon, 19 Jun 2023 19:02:45 +0000 Ready: False Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-scsxl (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-scsxl: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 6s default-scheduler Successfully assigned e2e-network-policy-6770/client-a-8p5lq to worker03 by cp01 Normal AddedInterface 5s multus Add eth0 [fd00::528/128 10.128.11.110/32] from cilium Normal Pulled 5s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 5s kubelet Created container client Normal Started 4s kubelet Started container client Jun 19 19:02:49.909: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-6770 logs client-a-8p5lq --tail=100' Jun 19 19:02:50.115: INFO: stderr: "" Jun 19 19:02:50.115: INFO: stdout: "" Jun 19 19:02:50.115: INFO: Last 100 log lines of client-a-8p5lq: Jun 19 19:02:50.115: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-6770 describe po server-x5pkq' Jun 19 19:02:50.316: INFO: stderr: "" Jun 19 19:02:50.316: INFO: stdout: "Name: server-x5pkq\nNamespace: e2e-network-policy-6770\nPriority: 0\nService Account: default\nNode: worker02/192.168.200.32\nStart Time: Mon, 19 Jun 2023 19:02:07 +0000\nLabels: pod-name=server\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::4cd\",\n \"10.128.9.230\"\n ],\n \"mac\": \"b2:d7:62:13:16:0d\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::4cd\",\n \"10.128.9.230\"\n ],\n \"mac\": \"b2:d7:62:13:16:0d\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Running\nIP: 10.128.9.230\nIPs:\n IP: 10.128.9.230\n IP: fd00::4cd\nContainers:\n server-container-80:\n Container ID: 
cri-o://85bd1316935bcb950cb50b43e65f3e79408c662fe82671239365fc5d05f2da7f\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 80/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Mon, 19 Jun 2023 19:02:14 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_80: foo\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dmx66 (ro)\n server-container-81:\n Container ID: cri-o://ab6fb11f667286bd8411bf087d2b63e75f7d57a6e571b6af60b811918c2a4cff\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 81/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Mon, 19 Jun 2023 19:02:14 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_81: foo\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dmx66 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-dmx66:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 42s default-scheduler Successfully assigned e2e-network-policy-6770/server-x5pkq to worker02 by cp01\n Normal AddedInterface 41s multus Add eth0 [fd00::4cd/128 10.128.9.230/32] from cilium\n Normal Pulling 41s kubelet Pulling image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\"\n Normal Pulled 36s kubelet Successfully pulled image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" in 4.610800108s\n Normal Created 36s kubelet Created container server-container-80\n Normal Started 36s kubelet Started container server-container-80\n Normal Pulled 36s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 36s kubelet Created container server-container-81\n Normal Started 36s kubelet Started container server-container-81\n" Jun 19 19:02:50.316: INFO: Output of kubectl describe server-x5pkq: Name: server-x5pkq Namespace: e2e-network-policy-6770 Priority: 0 Service Account: default Node: worker02/192.168.200.32 Start Time: Mon, 19 Jun 2023 19:02:07 +0000 Labels: pod-name=server Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::4cd", "10.128.9.230" ], "mac": "b2:d7:62:13:16:0d", "default": true, "dns": {} }] 
k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::4cd", "10.128.9.230" ], "mac": "b2:d7:62:13:16:0d", "default": true, "dns": {} }] Status: Running IP: 10.128.9.230 IPs: IP: 10.128.9.230 IP: fd00::4cd Containers: server-container-80: Container ID: cri-o://85bd1316935bcb950cb50b43e65f3e79408c662fe82671239365fc5d05f2da7f Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 80/TCP Host Port: 0/TCP Args: porter State: Running Started: Mon, 19 Jun 2023 19:02:14 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: SERVE_PORT_80: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dmx66 (ro) server-container-81: Container ID: cri-o://ab6fb11f667286bd8411bf087d2b63e75f7d57a6e571b6af60b811918c2a4cff Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 81/TCP Host Port: 0/TCP Args: porter State: Running Started: Mon, 19 Jun 2023 19:02:14 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: SERVE_PORT_81: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dmx66 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-dmx66: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 42s default-scheduler Successfully assigned e2e-network-policy-6770/server-x5pkq to worker02 by cp01 Normal AddedInterface 41s multus Add eth0 [fd00::4cd/128 10.128.9.230/32] from cilium Normal Pulling 41s kubelet Pulling image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" Normal Pulled 36s kubelet Successfully pulled image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" in 4.610800108s Normal Created 36s kubelet Created container server-container-80 Normal Started 36s kubelet Started container server-container-80 Normal Pulled 36s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 36s kubelet Created container server-container-81 Normal Started 36s kubelet Started container server-container-81 Jun 19 19:02:50.317: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-6770 logs server-x5pkq --tail=100' Jun 19 19:02:50.556: INFO: stderr: "Defaulted container \"server-container-80\" out of: 
server-container-80, server-container-81\n" Jun 19 19:02:50.556: INFO: stdout: "" Jun 19 19:02:50.556: INFO: Last 100 log lines of server-x5pkq: Jun 19 19:02:50.591: FAIL: Pod client-a-8p5lq should not be able to connect to service svc-server, but was able to connect. Pod logs: Current NetworkPolicies: [{{ } {allow-only-sctp-ingress-on-port-80 e2e-network-policy-6770 913cb3c7-e3a1-46a8-b957-178dd3618c61 83859 1 2023-06-19 19:02:43 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-19 19:02:43 +0000 UTC FieldsV1 {"f:spec":{"f:ingress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:server] []} [{[{0xc00235ff10 80 }] []}] [] [Ingress]} {[]}}] Pods: [Pod: client-a-8p5lq, Status: &PodStatus{Phase:Succeeded,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:02:43 +0000 UTC,Reason:PodCompleted,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:02:43 +0000 UTC,Reason:PodCompleted,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:02:43 +0000 UTC,Reason:PodCompleted,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:02:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.11.110,StartTime:2023-06-19 19:02:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-19 19:02:44 +0000 UTC,FinishedAt:2023-06-19 19:02:45 +0000 UTC,ContainerID:cri-o://0a54ac5cf468bcd913501ad5f13feb933a45b39f9fa9839cca778c66ff639378,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://0a54ac5cf468bcd913501ad5f13feb933a45b39f9fa9839cca778c66ff639378,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.11.110,},PodIP{IP:fd00::528,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: server-x5pkq, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:02:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:02:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:02:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:02:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.32,PodIP:10.128.9.230,StartTime:2023-06-19 19:02:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 19:02:14 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://85bd1316935bcb950cb50b43e65f3e79408c662fe82671239365fc5d05f2da7f,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 19:02:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://ab6fb11f667286bd8411bf087d2b63e75f7d57a6e571b6af60b811918c2a4cff,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.9.230,},PodIP{IP:fd00::4cd,},},EphemeralContainerStatuses:[]ContainerStatus{},} ] Full Stack Trace k8s.io/kubernetes/test/e2e/network/netpol.checkNoConnectivity(0xc001abeb40, 0xc001c486e0, 0xc0079c7680, 0xc00741e500) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1957 +0x25a k8s.io/kubernetes/test/e2e/network/netpol.testCannotConnectProtocol(0xc001abeb40, 0xc001c486e0, {0x8a33123, 0x8}, 0xc00741e500, 0x0?, {0x8a24aec, 0x3}) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1926 +0x1be k8s.io/kubernetes/test/e2e/network/netpol.testCannotConnect(...) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1901 k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.31() k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1721 +0x3d3 github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2e8b77e, 0xc000811380}) github.com/onsi/ginkgo/v2@v2.4.0/internal/node.go:449 +0x1b github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:757 +0x98 created by github.com/onsi/ginkgo/v2/internal.(*Suite).runNode github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:745 +0xe3d STEP: Cleaning up the pod client-a-8p5lq 06/19/23 19:02:50.592 STEP: Cleaning up the policy. 06/19/23 19:02:50.617 [AfterEach] NetworkPolicy between server and client k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:96 STEP: Cleaning up the server. 06/19/23 19:02:50.645 STEP: Cleaning up the server's service. 06/19/23 19:02:50.666 [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] dump namespaces | framework.go:196 STEP: dump namespace information after failure 06/19/23 19:02:50.742 STEP: Collecting events from namespace "e2e-network-policy-6770". 06/19/23 19:02:50.742 STEP: Found 41 events. 
06/19/23 19:02:50.764 Jun 19 19:02:50.764: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-8p5lq: { } Scheduled: Successfully assigned e2e-network-policy-6770/client-a-8p5lq to worker03 by cp01 Jun 19 19:02:50.764: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-wbkwb: { } Scheduled: Successfully assigned e2e-network-policy-6770/client-can-connect-80-wbkwb to worker02 by cp01 Jun 19 19:02:50.764: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-f96cs: { } Scheduled: Successfully assigned e2e-network-policy-6770/client-can-connect-81-f96cs to worker03 by cp01 Jun 19 19:02:50.764: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hostexec-worker01-vdmr8: { } Scheduled: Successfully assigned e2e-network-policy-6770/hostexec-worker01-vdmr8 to worker01 by cp01 Jun 19 19:02:50.764: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hostexec-worker02-tm29l: { } Scheduled: Successfully assigned e2e-network-policy-6770/hostexec-worker02-tm29l to worker02 by cp01 Jun 19 19:02:50.764: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hostexec-worker03-r4cxq: { } Scheduled: Successfully assigned e2e-network-policy-6770/hostexec-worker03-r4cxq to worker03 by cp01 Jun 19 19:02:50.764: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-x5pkq: { } Scheduled: Successfully assigned e2e-network-policy-6770/server-x5pkq to worker02 by cp01 Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:09 +0000 UTC - event for server-x5pkq: {kubelet worker02} Pulling: Pulling image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:09 +0000 UTC - event for server-x5pkq: {multus } AddedInterface: Add eth0 [fd00::4cd/128 10.128.9.230/32] from cilium Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:14 +0000 UTC - event for server-x5pkq: {kubelet worker02} Pulled: Successfully pulled image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" in 4.610800108s Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:14 +0000 UTC - event for server-x5pkq: {kubelet worker02} Created: Created container server-container-81 Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:14 +0000 UTC - event for server-x5pkq: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:14 +0000 UTC - event for server-x5pkq: {kubelet worker02} Started: Started container server-container-80 Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:14 +0000 UTC - event for server-x5pkq: {kubelet worker02} Created: Created container server-container-80 Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:14 +0000 UTC - event for server-x5pkq: {kubelet worker02} Started: Started container server-container-81 Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:18 +0000 UTC - event for client-can-connect-80-wbkwb: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:18 +0000 UTC - event for client-can-connect-80-wbkwb: {kubelet worker02} Created: Created container client Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:18 +0000 UTC - event for client-can-connect-80-wbkwb: {multus } AddedInterface: Add eth0 [fd00::45d/128 10.128.8.173/32] 
from cilium Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:19 +0000 UTC - event for client-can-connect-80-wbkwb: {kubelet worker02} Started: Started container client Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:32 +0000 UTC - event for client-can-connect-81-f96cs: {kubelet worker03} Started: Started container client Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:32 +0000 UTC - event for client-can-connect-81-f96cs: {kubelet worker03} Created: Created container client Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:32 +0000 UTC - event for client-can-connect-81-f96cs: {multus } AddedInterface: Add eth0 [fd00::50e/128 10.128.11.87/32] from cilium Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:32 +0000 UTC - event for client-can-connect-81-f96cs: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:37 +0000 UTC - event for hostexec-worker01-vdmr8: {kubelet worker01} Created: Created container agnhost-container Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:37 +0000 UTC - event for hostexec-worker01-vdmr8: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:37 +0000 UTC - event for hostexec-worker01-vdmr8: {kubelet worker01} Started: Started container agnhost-container Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:39 +0000 UTC - event for hostexec-worker02-tm29l: {kubelet worker02} Started: Started container agnhost-container Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:39 +0000 UTC - event for hostexec-worker02-tm29l: {kubelet worker02} Created: Created container agnhost-container Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:39 +0000 UTC - event for hostexec-worker02-tm29l: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:41 +0000 UTC - event for hostexec-worker03-r4cxq: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:42 +0000 UTC - event for hostexec-worker03-r4cxq: {kubelet worker03} Started: Started container agnhost-container Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:42 +0000 UTC - event for hostexec-worker03-r4cxq: {kubelet worker03} Created: Created container agnhost-container Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:43 +0000 UTC - event for hostexec-worker01-vdmr8: {kubelet worker01} Killing: Stopping container agnhost-container Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:43 +0000 UTC - event for hostexec-worker02-tm29l: {kubelet worker02} Killing: Stopping container agnhost-container Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:44 +0000 UTC - event for client-a-8p5lq: {multus } AddedInterface: Add eth0 [fd00::528/128 10.128.11.110/32] from cilium Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:44 +0000 UTC - event for client-a-8p5lq: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:44 +0000 UTC - event 
for client-a-8p5lq: {kubelet worker03} Created: Created container client Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:44 +0000 UTC - event for hostexec-worker03-r4cxq: {kubelet worker03} Killing: Stopping container agnhost-container Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:45 +0000 UTC - event for client-a-8p5lq: {kubelet worker03} Started: Started container client Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:50 +0000 UTC - event for server-x5pkq: {kubelet worker02} Killing: Stopping container server-container-80 Jun 19 19:02:50.764: INFO: At 2023-06-19 19:02:50 +0000 UTC - event for server-x5pkq: {kubelet worker02} Killing: Stopping container server-container-81 Jun 19 19:02:50.781: INFO: POD NODE PHASE GRACE CONDITIONS Jun 19 19:02:50.781: INFO: server-x5pkq worker02 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 19:02:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 19:02:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 19:02:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 19:02:07 +0000 UTC }] Jun 19 19:02:50.781: INFO: Jun 19 19:02:50.792: INFO: skipping dumping cluster info - cluster too large [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] tear down framework | framework.go:193 STEP: Destroying namespace "e2e-network-policy-6770" for this suite. 06/19/23 19:02:50.792 fail [k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1957]: Jun 19 19:02:50.591: Pod client-a-8p5lq should not be able to connect to service svc-server, but was able to connect. Pod logs: Current NetworkPolicies: [{{ } {allow-only-sctp-ingress-on-port-80 e2e-network-policy-6770 913cb3c7-e3a1-46a8-b957-178dd3618c61 83859 1 2023-06-19 19:02:43 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-19 19:02:43 +0000 UTC FieldsV1 {"f:spec":{"f:ingress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:server] []} [{[{0xc00235ff10 80 }] []}] [] [Ingress]} {[]}}] Pods: [Pod: client-a-8p5lq, Status: &PodStatus{Phase:Succeeded,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:02:43 +0000 UTC,Reason:PodCompleted,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:02:43 +0000 UTC,Reason:PodCompleted,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:02:43 +0000 UTC,Reason:PodCompleted,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:02:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.11.110,StartTime:2023-06-19 19:02:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-19 19:02:44 +0000 UTC,FinishedAt:2023-06-19 19:02:45 +0000 
UTC,ContainerID:cri-o://0a54ac5cf468bcd913501ad5f13feb933a45b39f9fa9839cca778c66ff639378,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://0a54ac5cf468bcd913501ad5f13feb933a45b39f9fa9839cca778c66ff639378,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.11.110,},PodIP{IP:fd00::528,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: server-x5pkq, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:02:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:02:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:02:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:02:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.32,PodIP:10.128.9.230,StartTime:2023-06-19 19:02:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 19:02:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://85bd1316935bcb950cb50b43e65f3e79408c662fe82671239365fc5d05f2da7f,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 19:02:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://ab6fb11f667286bd8411bf087d2b63e75f7d57a6e571b6af60b811918c2a4cff,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.9.230,},PodIP{IP:fd00::4cd,},},EphemeralContainerStatuses:[]ContainerStatus{},} ] Ginkgo exit error 1: exit with code 1 failed: (44.6s) 2023-06-19T19:02:50 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 1/19/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] 
[Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (13.8s) 2023-06-19T19:02:57 "[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 1/20/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (36.7s) 2023-06-19T19:03:10 "[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 1/21/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (24s) 2023-06-19T19:03:14 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 1/22/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m26s) 2023-06-19T19:03:32 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 1/23/67 "[sig-network] Services should serve multiport endpoints from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1m30s) 2023-06-19T19:03:33 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 1/24/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m39s) 2023-06-19T19:03:42 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 1/25/67 "[sig-network] Services should find a service from listing all namespaces [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.2s) 2023-06-19T19:03:43 "[sig-network] Services should find a service from listing all namespaces [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 1/26/67 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (12.6s) 2023-06-19T19:03:44 "[sig-network] Services should serve multiport endpoints from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 1/27/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (3.5s) 2023-06-19T19:03:47 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 1/28/67 "[sig-network] Services should complete a service status lifecycle [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.4s) 2023-06-19T19:03:48 "[sig-network] Services should complete a service status lifecycle [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 1/29/67 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (25.8s) 2023-06-19T19:04:14 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 1/30/67 "[sig-network] Ingress API should support creating Ingress API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.4s) 2023-06-19T19:04:15 "[sig-network] Ingress API should support creating Ingress API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 1/31/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m8s) 2023-06-19T19:04:18 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 1/32/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m12s) 2023-06-19T19:04:26 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 1/33/67 "[sig-network] DNS should provide DNS for the cluster [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (2m26s) 2023-06-19T19:04:29 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 1/34/67
"[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (5.5s) 2023-06-19T19:04:31 "[sig-network] DNS should provide DNS for the cluster [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/35/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1m6s) 2023-06-19T19:04:39 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 1/36/67 "[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] [Serial:Self] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (21.7s) 2023-06-19T19:04:39 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 1/37/67 "[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (11s) 2023-06-19T19:04:40 "[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/38/67 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (6.4s) 2023-06-19T19:04:46 "[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/39/67 "[sig-network] NetworkPolicy API should support creating NetworkPolicy API operations [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1.4s) 2023-06-19T19:04:47 "[sig-network] NetworkPolicy API should support creating NetworkPolicy API operations [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 1/40/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2m4s) 2023-06-19T19:04:54 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 1/41/67 "[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (16.9s) 2023-06-19T19:04:56 "[sig-network] HostPort validates 
that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] [Serial:Self] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/42/67 "[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1m14s) 2023-06-19T19:04:58 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 1/43/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2m27s) 2023-06-19T19:05:01 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 1/44/67 "[sig-network] Services should provide secure master service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1.4s) 2023-06-19T19:05:02 "[sig-network] Services should provide secure master service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/45/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (9.9s) 2023-06-19T19:05:06 "[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/46/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (30.8s) 2023-06-19T19:05:11 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/47/67 "[sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (5.4s) 2023-06-19T19:05:16 "[sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/48/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1m6s) 2023-06-19T19:05:21 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy] 
[Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 1/49/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (42.8s) 2023-06-19T19:05:37 "[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/50/67 "[sig-network] IngressClass API should support creating IngressClass API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1.5s) 2023-06-19T19:05:39 "[sig-network] IngressClass API should support creating IngressClass API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/51/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1m8s) 2023-06-19T19:05:40 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 1/52/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (4m4s) 2023-06-19T19:06:07 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 1/53/67 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (3m10s) 2023-06-19T19:06:07 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 1/54/67 "[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1.5s) 2023-06-19T19:06:09 "[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/55/67 "[sig-network] DNS should provide DNS for services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (3.5s) 2023-06-19T19:06:10 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/56/67 "[sig-network] Networking Granular Checks: 
Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1m10s) 2023-06-19T19:06:12 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 1/57/67 "[sig-network] Services should be able to create a functioning NodePort service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (10.7s) 2023-06-19T19:06:19 "[sig-network] DNS should provide DNS for services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/58/67 "[sig-network] DNS should provide DNS for ExternalName services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (8.6s) 2023-06-19T19:06:21 "[sig-network] Services should be able to create a functioning NodePort service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/59/67 "[sig-network] Services should serve a basic endpoint from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1m26s) 2023-06-19T19:06:24 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 1/60/67 "[sig-network] DNS should support configurable pod DNS nameservers [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (3.7s) 2023-06-19T19:06:28 "[sig-network] DNS should support configurable pod DNS nameservers [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/61/67 "[sig-network] Services should delete a collection of services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1.4s) 2023-06-19T19:06:30 "[sig-network] Services should delete a collection of services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/62/67 "[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (13.7s) 2023-06-19T19:06:33 "[sig-network] DNS should provide DNS for ExternalName services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/63/67 "[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1.3s) 2023-06-19T19:06:34 "[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/64/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (15.3s) 2023-06-19T19:06:36 "[sig-network] Services should serve a basic endpoint from pods 
[Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/65/67 "[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (27.8s) 2023-06-19T19:06:38 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/66/67 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (13.4s) 2023-06-19T19:06:43 "[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (7.3s) 2023-06-19T19:06:43 "[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1m6s) 2023-06-19T19:06:45 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" Jun 19 19:04:47.859: INFO: Enabling in-tree volume drivers [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/test.go:58 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] set up framework | framework.go:178 STEP: Creating a kubernetes client 06/19/23 19:04:48.612 STEP: Building a namespace api object, basename network-policy 06/19/23 19:04:48.613 Jun 19 19:04:48.666: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace 06/19/23 19:04:48.808 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/19/23 19:04:48.812 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:72 [BeforeEach] NetworkPolicy between server and client k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:78 STEP: Creating a simple server that serves on port 80 and 81. 
06/19/23 19:04:48.816 STEP: Creating a server pod server in namespace e2e-network-policy-7704 06/19/23 19:04:48.816 W0619 19:04:48.838761 2569 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "server-container-80", "server-container-81" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "server-container-80", "server-container-81" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "server-container-80", "server-container-81" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "server-container-80", "server-container-81" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:04:48.838: INFO: Created pod server-rwvl2 STEP: Creating a service svc-server for pod server in namespace e2e-network-policy-7704 06/19/23 19:04:48.838 Jun 19 19:04:48.867: INFO: Created service svc-server STEP: Waiting for pod ready 06/19/23 19:04:48.867 Jun 19 19:04:48.868: INFO: Waiting up to 5m0s for pod "server-rwvl2" in namespace "e2e-network-policy-7704" to be "running and ready" Jun 19 19:04:48.880: INFO: Pod "server-rwvl2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.641416ms Jun 19 19:04:48.880: INFO: The phase of Pod server-rwvl2 is Pending, waiting for it to be Running (with Ready = true) Jun 19 19:04:50.890: INFO: Pod "server-rwvl2": Phase="Running", Reason="", readiness=true. Elapsed: 2.0224009s Jun 19 19:04:50.890: INFO: The phase of Pod server-rwvl2 is Running (Ready = true) Jun 19 19:04:50.890: INFO: Pod "server-rwvl2" satisfied condition "running and ready" STEP: Testing pods can connect to both ports when no policy is present. 06/19/23 19:04:50.89 STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. 06/19/23 19:04:50.89 W0619 19:04:50.902376 2569 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:04:50.902: INFO: Waiting for client-can-connect-80-rjwmz to complete. Jun 19 19:04:50.902: INFO: Waiting up to 3m0s for pod "client-can-connect-80-rjwmz" in namespace "e2e-network-policy-7704" to be "completed" Jun 19 19:04:50.906: INFO: Pod "client-can-connect-80-rjwmz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.411524ms Jun 19 19:04:52.912: INFO: Pod "client-can-connect-80-rjwmz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009895016s Jun 19 19:04:54.918: INFO: Pod "client-can-connect-80-rjwmz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015668528s Jun 19 19:04:56.914: INFO: Pod "client-can-connect-80-rjwmz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011500995s Jun 19 19:04:56.914: INFO: Pod "client-can-connect-80-rjwmz" satisfied condition "completed" Jun 19 19:04:56.914: INFO: Waiting for client-can-connect-80-rjwmz to complete. 
Jun 19 19:04:56.914: INFO: Waiting up to 5m0s for pod "client-can-connect-80-rjwmz" in namespace "e2e-network-policy-7704" to be "Succeeded or Failed" Jun 19 19:04:56.920: INFO: Pod "client-can-connect-80-rjwmz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.900076ms STEP: Saw pod success 06/19/23 19:04:56.92 Jun 19 19:04:56.920: INFO: Pod "client-can-connect-80-rjwmz" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-can-connect-80-rjwmz 06/19/23 19:04:56.92 STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. 06/19/23 19:04:56.949 W0619 19:04:56.963586 2569 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:04:56.963: INFO: Waiting for client-can-connect-81-fq85g to complete. Jun 19 19:04:56.963: INFO: Waiting up to 3m0s for pod "client-can-connect-81-fq85g" in namespace "e2e-network-policy-7704" to be "completed" Jun 19 19:04:56.968: INFO: Pod "client-can-connect-81-fq85g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.334344ms Jun 19 19:04:58.975: INFO: Pod "client-can-connect-81-fq85g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011965907s Jun 19 19:05:00.975: INFO: Pod "client-can-connect-81-fq85g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011770208s Jun 19 19:05:02.974: INFO: Pod "client-can-connect-81-fq85g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.010462844s Jun 19 19:05:02.974: INFO: Pod "client-can-connect-81-fq85g" satisfied condition "completed" Jun 19 19:05:02.974: INFO: Waiting for client-can-connect-81-fq85g to complete. Jun 19 19:05:02.974: INFO: Waiting up to 5m0s for pod "client-can-connect-81-fq85g" in namespace "e2e-network-policy-7704" to be "Succeeded or Failed" Jun 19 19:05:02.979: INFO: Pod "client-can-connect-81-fq85g": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.909532ms STEP: Saw pod success 06/19/23 19:05:02.979 Jun 19 19:05:02.979: INFO: Pod "client-can-connect-81-fq85g" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-can-connect-81-fq85g 06/19/23 19:05:02.979 [It] should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1343 STEP: Creating a server pod pod-b in namespace e2e-network-policy-7704 06/19/23 19:05:03.018 W0619 19:05:03.029106 2569 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "pod-b-container-80" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pod-b-container-80" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pod-b-container-80" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pod-b-container-80" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:05:03.029: INFO: Created pod pod-b-w85wl STEP: Creating a service svc-pod-b for pod pod-b in namespace e2e-network-policy-7704 06/19/23 19:05:03.029 Jun 19 19:05:03.068: INFO: Created service svc-pod-b STEP: Waiting for pod-b to be ready 06/19/23 19:05:03.068 Jun 19 19:05:03.068: INFO: Waiting up to 5m0s for pod "pod-b-w85wl" in namespace "e2e-network-policy-7704" to be "running and ready" Jun 19 19:05:03.083: INFO: Pod "pod-b-w85wl": Phase="Pending", Reason="", readiness=false. Elapsed: 15.172107ms Jun 19 19:05:03.083: INFO: The phase of Pod pod-b-w85wl is Pending, waiting for it to be Running (with Ready = true) Jun 19 19:05:05.091: INFO: Pod "pod-b-w85wl": Phase="Running", Reason="", readiness=true. Elapsed: 2.023138226s Jun 19 19:05:05.091: INFO: The phase of Pod pod-b-w85wl is Running (Ready = true) Jun 19 19:05:05.091: INFO: Pod "pod-b-w85wl" satisfied condition "running and ready" Jun 19 19:05:05.091: INFO: Waiting up to 5m0s for pod "pod-b-w85wl" in namespace "e2e-network-policy-7704" to be "running" Jun 19 19:05:05.100: INFO: Pod "pod-b-w85wl": Phase="Running", Reason="", readiness=true. Elapsed: 9.194119ms Jun 19 19:05:05.100: INFO: Pod "pod-b-w85wl" satisfied condition "running" STEP: Creating client-a which should be able to contact the server-b. 06/19/23 19:05:05.1 STEP: Creating client pod client-a that should successfully connect to svc-pod-b. 06/19/23 19:05:05.101 W0619 19:05:05.114792 2569 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:05:05.114: INFO: Waiting for client-a-7xhc4 to complete. Jun 19 19:05:05.114: INFO: Waiting up to 3m0s for pod "client-a-7xhc4" in namespace "e2e-network-policy-7704" to be "completed" Jun 19 19:05:05.122: INFO: Pod "client-a-7xhc4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.845769ms Jun 19 19:05:07.132: INFO: Pod "client-a-7xhc4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.017276121s Jun 19 19:05:09.127: INFO: Pod "client-a-7xhc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01300335s Jun 19 19:05:09.127: INFO: Pod "client-a-7xhc4" satisfied condition "completed" Jun 19 19:05:09.127: INFO: Waiting for client-a-7xhc4 to complete. Jun 19 19:05:09.127: INFO: Waiting up to 5m0s for pod "client-a-7xhc4" in namespace "e2e-network-policy-7704" to be "Succeeded or Failed" Jun 19 19:05:09.134: INFO: Pod "client-a-7xhc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.062273ms STEP: Saw pod success 06/19/23 19:05:09.134 Jun 19 19:05:09.134: INFO: Pod "client-a-7xhc4" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-a-7xhc4 06/19/23 19:05:09.134 STEP: Creating client-a which should not be able to contact the server-b. 06/19/23 19:05:09.176 STEP: Creating client pod client-a that should not be able to connect to svc-pod-b. 06/19/23 19:05:09.176 W0619 19:05:09.188565 2569 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:05:09.188: INFO: Waiting for client-a-tpzlh to complete. Jun 19 19:05:09.188: INFO: Waiting up to 5m0s for pod "client-a-tpzlh" in namespace "e2e-network-policy-7704" to be "Succeeded or Failed" Jun 19 19:05:09.192: INFO: Pod "client-a-tpzlh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139328ms Jun 19 19:05:11.210: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 2.021692692s Jun 19 19:05:13.198: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 4.010166978s Jun 19 19:05:15.198: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 6.009622434s Jun 19 19:05:17.203: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 8.014632392s Jun 19 19:05:19.201: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 10.012743913s Jun 19 19:05:21.199: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 12.010411415s Jun 19 19:05:23.202: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 14.013924141s Jun 19 19:05:25.200: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 16.011387264s Jun 19 19:05:27.198: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 18.009892899s Jun 19 19:05:29.197: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 20.008965632s Jun 19 19:05:31.201: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 22.012509689s Jun 19 19:05:33.198: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 24.009757862s Jun 19 19:05:35.200: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 26.011855776s Jun 19 19:05:37.199: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 28.011321908s Jun 19 19:05:39.199: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. 
Elapsed: 30.011235356s Jun 19 19:05:41.209: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 32.020931502s Jun 19 19:05:43.208: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 34.019979903s Jun 19 19:05:45.201: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 36.012666604s Jun 19 19:05:47.202: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 38.014287814s Jun 19 19:05:49.206: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 40.018147533s Jun 19 19:05:51.200: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 42.011610569s Jun 19 19:05:53.198: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 44.010157705s Jun 19 19:05:55.200: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=true. Elapsed: 46.012144867s Jun 19 19:05:57.199: INFO: Pod "client-a-tpzlh": Phase="Running", Reason="", readiness=false. Elapsed: 48.010881997s Jun 19 19:05:59.198: INFO: Pod "client-a-tpzlh": Phase="Failed", Reason="", readiness=false. Elapsed: 50.010203128s STEP: Cleaning up the pod client-a-tpzlh 06/19/23 19:05:59.199 STEP: Creating client-a which should be able to contact the server. 06/19/23 19:05:59.218 STEP: Creating client pod client-a that should successfully connect to svc-server. 06/19/23 19:05:59.218 W0619 19:05:59.250068 2569 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:05:59.250: INFO: Waiting for client-a-2bsw6 to complete. Jun 19 19:05:59.250: INFO: Waiting up to 3m0s for pod "client-a-2bsw6" in namespace "e2e-network-policy-7704" to be "completed" Jun 19 19:05:59.256: INFO: Pod "client-a-2bsw6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.269547ms Jun 19 19:06:01.262: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 2.01239669s Jun 19 19:06:03.262: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 4.012225277s Jun 19 19:06:05.263: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 6.013230445s Jun 19 19:06:07.263: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 8.013207235s Jun 19 19:06:09.262: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 10.012158409s Jun 19 19:06:11.265: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 12.015771663s Jun 19 19:06:13.268: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 14.018751685s Jun 19 19:06:15.262: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 16.012747694s Jun 19 19:06:17.263: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 18.013593016s Jun 19 19:06:19.264: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 20.01432775s Jun 19 19:06:21.272: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.021835606s Jun 19 19:06:23.261: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 24.011031008s Jun 19 19:06:25.269: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 26.019081434s Jun 19 19:06:27.263: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 28.013042912s Jun 19 19:06:29.263: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 30.013201538s Jun 19 19:06:31.269: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 32.018870878s Jun 19 19:06:33.266: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 34.016375236s Jun 19 19:06:35.267: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 36.01739322s Jun 19 19:06:37.262: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 38.01253782s Jun 19 19:06:39.272: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 40.022495692s Jun 19 19:06:41.269: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 42.019674086s Jun 19 19:06:43.263: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 44.01295755s Jun 19 19:06:45.264: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=true. Elapsed: 46.013897435s Jun 19 19:06:47.263: INFO: Pod "client-a-2bsw6": Phase="Running", Reason="", readiness=false. Elapsed: 48.01362309s Jun 19 19:06:49.268: INFO: Pod "client-a-2bsw6": Phase="Failed", Reason="", readiness=false. Elapsed: 50.017898906s Jun 19 19:06:49.268: INFO: Pod "client-a-2bsw6" satisfied condition "completed" Jun 19 19:06:49.268: INFO: Waiting for client-a-2bsw6 to complete. Jun 19 19:06:49.268: INFO: Waiting up to 5m0s for pod "client-a-2bsw6" in namespace "e2e-network-policy-7704" to be "Succeeded or Failed" Jun 19 19:06:49.285: INFO: Pod "client-a-2bsw6": Phase="Failed", Reason="", readiness=false. 
Elapsed: 17.269706ms Jun 19 19:06:49.296: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-7704 describe po client-a-2bsw6' Jun 19 19:06:49.470: INFO: stderr: "" Jun 19 19:06:49.470: INFO: stdout: "Name: client-a-2bsw6\nNamespace: e2e-network-policy-7704\nPriority: 0\nService Account: default\nNode: worker02/192.168.200.32\nStart Time: Mon, 19 Jun 2023 19:05:59 +0000\nLabels: pod-name=client-a\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::4e4\",\n \"10.128.8.218\"\n ],\n \"mac\": \"4a:3a:f3:17:c5:07\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::4e4\",\n \"10.128.8.218\"\n ],\n \"mac\": \"4a:3a:f3:17:c5:07\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Failed\nIP: 10.128.8.218\nIPs:\n IP: 10.128.8.218\n IP: fd00::4e4\nContainers:\n client:\n Container ID: cri-o://1d83f8fc33d46d7607f18c02c4a83499cf21915cafd91d393393d8ef3a331f77\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: \n Host Port: \n Command:\n /bin/sh\n Args:\n -c\n for i in $(seq 1 5); do /agnhost connect 172.30.152.189:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1\n State: Terminated\n Reason: Error\n Exit Code: 1\n Started: Mon, 19 Jun 2023 19:06:00 +0000\n Finished: Mon, 19 Jun 2023 19:06:45 +0000\n Ready: False\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nv4jv (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n kube-api-access-nv4jv:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 50s default-scheduler Successfully assigned e2e-network-policy-7704/client-a-2bsw6 to worker02 by cp01\n Normal AddedInterface 49s multus Add eth0 [fd00::4e4/128 10.128.8.218/32] from cilium\n Normal Pulled 49s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 49s kubelet Created container client\n Normal Started 49s kubelet Started container client\n" Jun 19 19:06:49.470: INFO: Output of kubectl describe client-a-2bsw6: Name: client-a-2bsw6 Namespace: e2e-network-policy-7704 Priority: 0 Service Account: default Node: worker02/192.168.200.32 Start Time: Mon, 19 Jun 2023 19:05:59 +0000 Labels: pod-name=client-a Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::4e4", "10.128.8.218" ], "mac": "4a:3a:f3:17:c5:07", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::4e4", "10.128.8.218" ], "mac": 
"4a:3a:f3:17:c5:07", "default": true, "dns": {} }] Status: Failed IP: 10.128.8.218 IPs: IP: 10.128.8.218 IP: fd00::4e4 Containers: client: Container ID: cri-o://1d83f8fc33d46d7607f18c02c4a83499cf21915cafd91d393393d8ef3a331f77 Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: Host Port: Command: /bin/sh Args: -c for i in $(seq 1 5); do /agnhost connect 172.30.152.189:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1 State: Terminated Reason: Error Exit Code: 1 Started: Mon, 19 Jun 2023 19:06:00 +0000 Finished: Mon, 19 Jun 2023 19:06:45 +0000 Ready: False Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nv4jv (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-nv4jv: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 50s default-scheduler Successfully assigned e2e-network-policy-7704/client-a-2bsw6 to worker02 by cp01 Normal AddedInterface 49s multus Add eth0 [fd00::4e4/128 10.128.8.218/32] from cilium Normal Pulled 49s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 49s kubelet Created container client Normal Started 49s kubelet Started container client Jun 19 19:06:49.470: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-7704 logs client-a-2bsw6 --tail=100' Jun 19 19:06:49.647: INFO: stderr: "" Jun 19 19:06:49.647: INFO: stdout: "TIMEOUT\nTIMEOUT\nTIMEOUT\nTIMEOUT\nTIMEOUT\n" Jun 19 19:06:49.647: INFO: Last 100 log lines of client-a-2bsw6: TIMEOUT TIMEOUT TIMEOUT TIMEOUT TIMEOUT Jun 19 19:06:49.647: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-7704 describe po pod-b-w85wl' Jun 19 19:06:49.816: INFO: stderr: "" Jun 19 19:06:49.816: INFO: stdout: "Name: pod-b-w85wl\nNamespace: e2e-network-policy-7704\nPriority: 0\nService Account: default\nNode: worker01/192.168.200.31\nStart Time: Mon, 19 Jun 2023 19:05:03 +0000\nLabels: pod-name=pod-b\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::331\",\n \"10.128.7.131\"\n ],\n \"mac\": \"0e:fb:ba:d3:f8:7d\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::331\",\n \"10.128.7.131\"\n ],\n \"mac\": \"0e:fb:ba:d3:f8:7d\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Running\nIP: 10.128.7.131\nIPs:\n IP: 10.128.7.131\n IP: fd00::331\nContainers:\n pod-b-container-80:\n Container ID: cri-o://7cf4c08e098899ba13189944e74545c6f3dc086cb720d7dfc2abfe2f68fb6e1f\n Image: 
quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 80/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Mon, 19 Jun 2023 19:05:04 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_80: foo\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tslrq (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-tslrq:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 106s default-scheduler Successfully assigned e2e-network-policy-7704/pod-b-w85wl to worker01 by cp01\n Normal AddedInterface 106s multus Add eth0 [fd00::331/128 10.128.7.131/32] from cilium\n Normal Pulled 105s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 105s kubelet Created container pod-b-container-80\n Normal Started 105s kubelet Started container pod-b-container-80\n" Jun 19 19:06:49.816: INFO: Output of kubectl describe pod-b-w85wl: Name: pod-b-w85wl Namespace: e2e-network-policy-7704 Priority: 0 Service Account: default Node: worker01/192.168.200.31 Start Time: Mon, 19 Jun 2023 19:05:03 +0000 Labels: pod-name=pod-b Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::331", "10.128.7.131" ], "mac": "0e:fb:ba:d3:f8:7d", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::331", "10.128.7.131" ], "mac": "0e:fb:ba:d3:f8:7d", "default": true, "dns": {} }] Status: Running IP: 10.128.7.131 IPs: IP: 10.128.7.131 IP: fd00::331 Containers: pod-b-container-80: Container ID: cri-o://7cf4c08e098899ba13189944e74545c6f3dc086cb720d7dfc2abfe2f68fb6e1f Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 80/TCP Host Port: 0/TCP Args: porter State: Running Started: Mon, 19 Jun 2023 19:05:04 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: SERVE_PORT_80: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tslrq (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-tslrq: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt 
ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 106s default-scheduler Successfully assigned e2e-network-policy-7704/pod-b-w85wl to worker01 by cp01 Normal AddedInterface 106s multus Add eth0 [fd00::331/128 10.128.7.131/32] from cilium Normal Pulled 105s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 105s kubelet Created container pod-b-container-80 Normal Started 105s kubelet Started container pod-b-container-80 Jun 19 19:06:49.816: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-7704 logs pod-b-w85wl --tail=100' Jun 19 19:06:49.980: INFO: stderr: "" Jun 19 19:06:49.980: INFO: stdout: "" Jun 19 19:06:49.980: INFO: Last 100 log lines of pod-b-w85wl: Jun 19 19:06:49.980: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-7704 describe po server-rwvl2' Jun 19 19:06:50.135: INFO: stderr: "" Jun 19 19:06:50.135: INFO: stdout: "Name: server-rwvl2\nNamespace: e2e-network-policy-7704\nPriority: 0\nService Account: default\nNode: worker01/192.168.200.31\nStart Time: Mon, 19 Jun 2023 19:04:48 +0000\nLabels: pod-name=server\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::334\",\n \"10.128.7.138\"\n ],\n \"mac\": \"7e:6a:55:6c:31:0d\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::334\",\n \"10.128.7.138\"\n ],\n \"mac\": \"7e:6a:55:6c:31:0d\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Running\nIP: 10.128.7.138\nIPs:\n IP: 10.128.7.138\n IP: fd00::334\nContainers:\n server-container-80:\n Container ID: cri-o://da6eb24826a44e811a8bc217ebd237397a24a4846a5444fa95fd528df4d1226a\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 80/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Mon, 19 Jun 2023 19:04:50 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_80: foo\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c4g6k (ro)\n server-container-81:\n Container ID: cri-o://2543f8a8eb11f0555c2a7be7c5f963dcdad6bd96e4678188ea8e8221d7005eb0\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 81/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Mon, 19 Jun 2023 19:04:50 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_81: foo\n Mounts:\n 
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c4g6k (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-c4g6k:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2m1s default-scheduler Successfully assigned e2e-network-policy-7704/server-rwvl2 to worker01 by cp01\n Normal AddedInterface 2m1s multus Add eth0 [fd00::334/128 10.128.7.138/32] from cilium\n Normal Pulled 2m1s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 2m kubelet Created container server-container-80\n Normal Started 2m kubelet Started container server-container-80\n Normal Pulled 2m kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 2m kubelet Created container server-container-81\n Normal Started 2m kubelet Started container server-container-81\n" Jun 19 19:06:50.135: INFO: Output of kubectl describe server-rwvl2: Name: server-rwvl2 Namespace: e2e-network-policy-7704 Priority: 0 Service Account: default Node: worker01/192.168.200.31 Start Time: Mon, 19 Jun 2023 19:04:48 +0000 Labels: pod-name=server Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::334", "10.128.7.138" ], "mac": "7e:6a:55:6c:31:0d", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::334", "10.128.7.138" ], "mac": "7e:6a:55:6c:31:0d", "default": true, "dns": {} }] Status: Running IP: 10.128.7.138 IPs: IP: 10.128.7.138 IP: fd00::334 Containers: server-container-80: Container ID: cri-o://da6eb24826a44e811a8bc217ebd237397a24a4846a5444fa95fd528df4d1226a Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 80/TCP Host Port: 0/TCP Args: porter State: Running Started: Mon, 19 Jun 2023 19:04:50 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: SERVE_PORT_80: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c4g6k (ro) server-container-81: Container ID: cri-o://2543f8a8eb11f0555c2a7be7c5f963dcdad6bd96e4678188ea8e8221d7005eb0 Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 81/TCP Host Port: 0/TCP Args: porter State: Running Started: Mon, 19 Jun 2023 19:04:50 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s 
#success=1 #failure=3 Environment: SERVE_PORT_81: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c4g6k (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-c4g6k: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m1s default-scheduler Successfully assigned e2e-network-policy-7704/server-rwvl2 to worker01 by cp01 Normal AddedInterface 2m1s multus Add eth0 [fd00::334/128 10.128.7.138/32] from cilium Normal Pulled 2m1s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 2m kubelet Created container server-container-80 Normal Started 2m kubelet Started container server-container-80 Normal Pulled 2m kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 2m kubelet Created container server-container-81 Normal Started 2m kubelet Started container server-container-81 Jun 19 19:06:50.135: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-7704 logs server-rwvl2 --tail=100' Jun 19 19:06:50.277: INFO: stderr: "Defaulted container \"server-container-80\" out of: server-container-80, server-container-81\n" Jun 19 19:06:50.277: INFO: stdout: "" Jun 19 19:06:50.277: INFO: Last 100 log lines of server-rwvl2: Jun 19 19:06:50.301: FAIL: Pod client-a-2bsw6 should be able to connect to service svc-server, but was not able to connect. 
Pod logs: TIMEOUT TIMEOUT TIMEOUT TIMEOUT TIMEOUT Current NetworkPolicies: [{{ } {allow-client-a-via-cidr-egress-rule e2e-network-policy-7704 fdb41465-7f8e-49ff-8eb7-77f63375b675 90106 1 2023-06-19 19:05:09 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-19 19:05:09 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.7.138/32,Except:[],}}]}] [Egress]} {[]}}] Pods: [Pod: client-a-2bsw6, Status: &PodStatus{Phase:Failed,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:06:46 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:06:46 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.32,PodIP:10.128.8.218,StartTime:2023-06-19 19:05:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-06-19 19:06:00 +0000 UTC,FinishedAt:2023-06-19 19:06:45 +0000 UTC,ContainerID:cri-o://1d83f8fc33d46d7607f18c02c4a83499cf21915cafd91d393393d8ef3a331f77,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://1d83f8fc33d46d7607f18c02c4a83499cf21915cafd91d393393d8ef3a331f77,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.8.218,},PodIP{IP:fd00::4e4,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: pod-b-w85wl, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.31,PodIP:10.128.7.131,StartTime:2023-06-19 19:05:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:pod-b-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 19:05:04 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://7cf4c08e098899ba13189944e74545c6f3dc086cb720d7dfc2abfe2f68fb6e1f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.7.131,},PodIP{IP:fd00::331,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: server-rwvl2, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:04:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:04:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:04:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:04:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.31,PodIP:10.128.7.138,StartTime:2023-06-19 19:04:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 19:04:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://da6eb24826a44e811a8bc217ebd237397a24a4846a5444fa95fd528df4d1226a,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 19:04:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://2543f8a8eb11f0555c2a7be7c5f963dcdad6bd96e4678188ea8e8221d7005eb0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.7.138,},PodIP{IP:fd00::334,},},EphemeralContainerStatuses:[]ContainerStatus{},} ] Full Stack Trace k8s.io/kubernetes/test/e2e/network/netpol.checkConnectivity(0xc001eade00, 0xc001524dc0, 0xc0069e6900, 0xc006690c80) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941 +0x355 k8s.io/kubernetes/test/e2e/network/netpol.testCanConnectProtocol(0xc001eade00, 0xc001524dc0, {0x8a33123, 0x8}, 0xc006690c80, 0xc001eae3e0?, {0x8a24aec, 0x3}) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1913 +0x1be k8s.io/kubernetes/test/e2e/network/netpol.testCanConnect(...) 
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1897 k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.27.4() k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1410 +0x47 github.com/onsi/ginkgo/v2.By({0x8c00310, 0x3d}, {0xc0063a5e50, 0x1, 0x0?}) github.com/onsi/ginkgo/v2@v2.4.0/core_dsl.go:535 +0x525 k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.27() k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1409 +0x8fc github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2e8b77e, 0xc0009b1980}) github.com/onsi/ginkgo/v2@v2.4.0/internal/node.go:449 +0x1b github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:757 +0x98 created by github.com/onsi/ginkgo/v2/internal.(*Suite).runNode github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:745 +0xe3d STEP: Cleaning up the pod client-a-2bsw6 06/19/23 19:06:50.301 STEP: Cleaning up the policy. 06/19/23 19:06:50.344 STEP: Cleaning up the server. 06/19/23 19:06:50.355 STEP: Cleaning up the server's service. 06/19/23 19:06:50.373 [AfterEach] NetworkPolicy between server and client k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:96 STEP: Cleaning up the server. 06/19/23 19:06:50.456 STEP: Cleaning up the server's service. 06/19/23 19:06:50.48 [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] dump namespaces | framework.go:196 STEP: dump namespace information after failure 06/19/23 19:06:50.552 STEP: Collecting events from namespace "e2e-network-policy-7704". 06/19/23 19:06:50.552 STEP: Found 41 events. 06/19/23 19:06:50.562 Jun 19 19:06:50.563: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-2bsw6: { } Scheduled: Successfully assigned e2e-network-policy-7704/client-a-2bsw6 to worker02 by cp01 Jun 19 19:06:50.563: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-7xhc4: { } Scheduled: Successfully assigned e2e-network-policy-7704/client-a-7xhc4 to worker02 by cp01 Jun 19 19:06:50.563: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-tpzlh: { } Scheduled: Successfully assigned e2e-network-policy-7704/client-a-tpzlh to worker02 by cp01 Jun 19 19:06:50.563: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-rjwmz: { } Scheduled: Successfully assigned e2e-network-policy-7704/client-can-connect-80-rjwmz to worker02 by cp01 Jun 19 19:06:50.563: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-fq85g: { } Scheduled: Successfully assigned e2e-network-policy-7704/client-can-connect-81-fq85g to worker02 by cp01 Jun 19 19:06:50.563: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-b-w85wl: { } Scheduled: Successfully assigned e2e-network-policy-7704/pod-b-w85wl to worker01 by cp01 Jun 19 19:06:50.563: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-rwvl2: { } Scheduled: Successfully assigned e2e-network-policy-7704/server-rwvl2 to worker01 by cp01 Jun 19 19:06:50.563: INFO: At 2023-06-19 19:04:49 +0000 UTC - event for server-rwvl2: {multus } AddedInterface: Add eth0 [fd00::334/128 10.128.7.138/32] from cilium Jun 19 19:06:50.563: INFO: At 2023-06-19 19:04:49 +0000 UTC - event for server-rwvl2: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 
19:06:50.563: INFO: At 2023-06-19 19:04:50 +0000 UTC - event for server-rwvl2: {kubelet worker01} Started: Started container server-container-81 Jun 19 19:06:50.563: INFO: At 2023-06-19 19:04:50 +0000 UTC - event for server-rwvl2: {kubelet worker01} Created: Created container server-container-81 Jun 19 19:06:50.563: INFO: At 2023-06-19 19:04:50 +0000 UTC - event for server-rwvl2: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:06:50.563: INFO: At 2023-06-19 19:04:50 +0000 UTC - event for server-rwvl2: {kubelet worker01} Started: Started container server-container-80 Jun 19 19:06:50.563: INFO: At 2023-06-19 19:04:50 +0000 UTC - event for server-rwvl2: {kubelet worker01} Created: Created container server-container-80 Jun 19 19:06:50.563: INFO: At 2023-06-19 19:04:51 +0000 UTC - event for client-can-connect-80-rjwmz: {multus } AddedInterface: Add eth0 [fd00::4ac/128 10.128.8.210/32] from cilium Jun 19 19:06:50.563: INFO: At 2023-06-19 19:04:51 +0000 UTC - event for client-can-connect-80-rjwmz: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:06:50.563: INFO: At 2023-06-19 19:04:51 +0000 UTC - event for client-can-connect-80-rjwmz: {kubelet worker02} Created: Created container client Jun 19 19:06:50.563: INFO: At 2023-06-19 19:04:52 +0000 UTC - event for client-can-connect-80-rjwmz: {kubelet worker02} Started: Started container client Jun 19 19:06:50.563: INFO: At 2023-06-19 19:04:57 +0000 UTC - event for client-can-connect-81-fq85g: {multus } AddedInterface: Add eth0 [fd00::42f/128 10.128.8.197/32] from cilium Jun 19 19:06:50.563: INFO: At 2023-06-19 19:04:57 +0000 UTC - event for client-can-connect-81-fq85g: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:06:50.563: INFO: At 2023-06-19 19:04:58 +0000 UTC - event for client-can-connect-81-fq85g: {kubelet worker02} Started: Started container client Jun 19 19:06:50.563: INFO: At 2023-06-19 19:04:58 +0000 UTC - event for client-can-connect-81-fq85g: {kubelet worker02} Created: Created container client Jun 19 19:06:50.563: INFO: At 2023-06-19 19:05:03 +0000 UTC - event for pod-b-w85wl: {multus } AddedInterface: Add eth0 [fd00::331/128 10.128.7.131/32] from cilium Jun 19 19:06:50.563: INFO: At 2023-06-19 19:05:04 +0000 UTC - event for pod-b-w85wl: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:06:50.563: INFO: At 2023-06-19 19:05:04 +0000 UTC - event for pod-b-w85wl: {kubelet worker01} Created: Created container pod-b-container-80 Jun 19 19:06:50.563: INFO: At 2023-06-19 19:05:04 +0000 UTC - event for pod-b-w85wl: {kubelet worker01} Started: Started container pod-b-container-80 Jun 19 19:06:50.563: INFO: At 2023-06-19 19:05:06 +0000 UTC - event for client-a-7xhc4: {kubelet worker02} Created: Created container client Jun 19 19:06:50.563: INFO: At 2023-06-19 19:05:06 +0000 UTC - event for client-a-7xhc4: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 
19:06:50.563: INFO: At 2023-06-19 19:05:06 +0000 UTC - event for client-a-7xhc4: {kubelet worker02} Started: Started container client Jun 19 19:06:50.563: INFO: At 2023-06-19 19:05:06 +0000 UTC - event for client-a-7xhc4: {multus } AddedInterface: Add eth0 [fd00::475/128 10.128.8.241/32] from cilium Jun 19 19:06:50.563: INFO: At 2023-06-19 19:05:10 +0000 UTC - event for client-a-tpzlh: {kubelet worker02} Started: Started container client Jun 19 19:06:50.563: INFO: At 2023-06-19 19:05:10 +0000 UTC - event for client-a-tpzlh: {kubelet worker02} Created: Created container client Jun 19 19:06:50.563: INFO: At 2023-06-19 19:05:10 +0000 UTC - event for client-a-tpzlh: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:06:50.563: INFO: At 2023-06-19 19:05:10 +0000 UTC - event for client-a-tpzlh: {multus } AddedInterface: Add eth0 [fd00::4c3/128 10.128.8.237/32] from cilium Jun 19 19:06:50.563: INFO: At 2023-06-19 19:06:00 +0000 UTC - event for client-a-2bsw6: {multus } AddedInterface: Add eth0 [fd00::4e4/128 10.128.8.218/32] from cilium Jun 19 19:06:50.563: INFO: At 2023-06-19 19:06:00 +0000 UTC - event for client-a-2bsw6: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:06:50.563: INFO: At 2023-06-19 19:06:00 +0000 UTC - event for client-a-2bsw6: {kubelet worker02} Started: Started container client Jun 19 19:06:50.563: INFO: At 2023-06-19 19:06:00 +0000 UTC - event for client-a-2bsw6: {kubelet worker02} Created: Created container client Jun 19 19:06:50.563: INFO: At 2023-06-19 19:06:50 +0000 UTC - event for pod-b-w85wl: {kubelet worker01} Killing: Stopping container pod-b-container-80 Jun 19 19:06:50.563: INFO: At 2023-06-19 19:06:50 +0000 UTC - event for server-rwvl2: {kubelet worker01} Killing: Stopping container server-container-80 Jun 19 19:06:50.563: INFO: At 2023-06-19 19:06:50 +0000 UTC - event for server-rwvl2: {kubelet worker01} Killing: Stopping container server-container-81 Jun 19 19:06:50.587: INFO: POD NODE PHASE GRACE CONDITIONS Jun 19 19:06:50.587: INFO: pod-b-w85wl worker01 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 19:05:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 19:05:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 19:05:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 19:05:03 +0000 UTC }] Jun 19 19:06:50.587: INFO: server-rwvl2 worker01 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 19:04:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 19:04:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 19:04:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 19:04:48 +0000 UTC }] Jun 19 19:06:50.587: INFO: Jun 19 19:06:50.597: INFO: skipping dumping cluster info - cluster too large [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] tear down framework | framework.go:193 STEP: Destroying namespace "e2e-network-policy-7704" for this suite. 06/19/23 19:06:50.597 fail [k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941]: Jun 19 19:06:50.301: Pod client-a-2bsw6 should be able to connect to service svc-server, but was not able to connect. 
Pod logs: TIMEOUT TIMEOUT TIMEOUT TIMEOUT TIMEOUT Current NetworkPolicies: [{{ } {allow-client-a-via-cidr-egress-rule e2e-network-policy-7704 fdb41465-7f8e-49ff-8eb7-77f63375b675 90106 1 2023-06-19 19:05:09 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-19 19:05:09 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.7.138/32,Except:[],}}]}] [Egress]} {[]}}] Pods: [Pod: client-a-2bsw6, Status: &PodStatus{Phase:Failed,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:06:46 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:06:46 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.32,PodIP:10.128.8.218,StartTime:2023-06-19 19:05:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-06-19 19:06:00 +0000 UTC,FinishedAt:2023-06-19 19:06:45 +0000 UTC,ContainerID:cri-o://1d83f8fc33d46d7607f18c02c4a83499cf21915cafd91d393393d8ef3a331f77,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://1d83f8fc33d46d7607f18c02c4a83499cf21915cafd91d393393d8ef3a331f77,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.8.218,},PodIP{IP:fd00::4e4,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: pod-b-w85wl, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.31,PodIP:10.128.7.131,StartTime:2023-06-19 19:05:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:pod-b-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 19:05:04 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://7cf4c08e098899ba13189944e74545c6f3dc086cb720d7dfc2abfe2f68fb6e1f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.7.131,},PodIP{IP:fd00::331,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: server-rwvl2, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:04:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:04:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:04:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:04:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.31,PodIP:10.128.7.138,StartTime:2023-06-19 19:04:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 19:04:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://da6eb24826a44e811a8bc217ebd237397a24a4846a5444fa95fd528df4d1226a,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 19:04:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://2543f8a8eb11f0555c2a7be7c5f963dcdad6bd96e4678188ea8e8221d7005eb0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.7.138,},PodIP{IP:fd00::334,},},EphemeralContainerStatuses:[]ContainerStatus{},} ] Ginkgo exit error 1: exit with code 1 failed: (2m3s) 2023-06-19T19:06:50 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (25.8s) 2023-06-19T19:07:04 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (2m8s) 2023-06-19T19:07:14 "[sig-network] NetworkPolicyLegacy [LinuxOnly] 
NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" Jun 19 19:05:21.581: INFO: Enabling in-tree volume drivers [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/test.go:58 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] set up framework | framework.go:178 STEP: Creating a kubernetes client 06/19/23 19:05:22.386 STEP: Building a namespace api object, basename network-policy 06/19/23 19:05:22.389 Jun 19 19:05:22.454: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace 06/19/23 19:05:22.634 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/19/23 19:05:22.639 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:72 [BeforeEach] NetworkPolicy between server and client k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:78 STEP: Creating a simple server that serves on port 80 and 81. 06/19/23 19:05:22.644 STEP: Creating a server pod server in namespace e2e-network-policy-4449 06/19/23 19:05:22.644 W0619 19:05:22.675864 3200 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "server-container-80", "server-container-81" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "server-container-80", "server-container-81" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "server-container-80", "server-container-81" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "server-container-80", "server-container-81" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:05:22.676: INFO: Created pod server-lpf2s STEP: Creating a service svc-server for pod server in namespace e2e-network-policy-4449 06/19/23 19:05:22.676 Jun 19 19:05:22.718: INFO: Created service svc-server STEP: Waiting for pod ready 06/19/23 19:05:22.718 Jun 19 19:05:22.718: INFO: Waiting up to 5m0s for pod "server-lpf2s" in namespace "e2e-network-policy-4449" to be "running and ready" Jun 19 19:05:22.737: INFO: Pod "server-lpf2s": Phase="Pending", Reason="", readiness=false. Elapsed: 18.748267ms Jun 19 19:05:22.737: INFO: The phase of Pod server-lpf2s is Pending, waiting for it to be Running (with Ready = true) Jun 19 19:05:24.741: INFO: Pod "server-lpf2s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023707322s Jun 19 19:05:24.742: INFO: The phase of Pod server-lpf2s is Pending, waiting for it to be Running (with Ready = true) Jun 19 19:05:26.748: INFO: Pod "server-lpf2s": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.030325335s Jun 19 19:05:26.748: INFO: The phase of Pod server-lpf2s is Running (Ready = true) Jun 19 19:05:26.748: INFO: Pod "server-lpf2s" satisfied condition "running and ready" STEP: Testing pods can connect to both ports when no policy is present. 06/19/23 19:05:26.748 STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. 06/19/23 19:05:26.748 W0619 19:05:26.770386 3200 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:05:26.770: INFO: Waiting for client-can-connect-80-jfp4d to complete. Jun 19 19:05:26.770: INFO: Waiting up to 3m0s for pod "client-can-connect-80-jfp4d" in namespace "e2e-network-policy-4449" to be "completed" Jun 19 19:05:26.776: INFO: Pod "client-can-connect-80-jfp4d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206489ms Jun 19 19:05:28.784: INFO: Pod "client-can-connect-80-jfp4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013504707s Jun 19 19:05:30.782: INFO: Pod "client-can-connect-80-jfp4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011699485s Jun 19 19:05:30.782: INFO: Pod "client-can-connect-80-jfp4d" satisfied condition "completed" Jun 19 19:05:30.782: INFO: Waiting for client-can-connect-80-jfp4d to complete. Jun 19 19:05:30.782: INFO: Waiting up to 5m0s for pod "client-can-connect-80-jfp4d" in namespace "e2e-network-policy-4449" to be "Succeeded or Failed" Jun 19 19:05:30.788: INFO: Pod "client-can-connect-80-jfp4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.950348ms STEP: Saw pod success 06/19/23 19:05:30.788 Jun 19 19:05:30.788: INFO: Pod "client-can-connect-80-jfp4d" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-can-connect-80-jfp4d 06/19/23 19:05:30.788 STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. 06/19/23 19:05:30.812 W0619 19:05:30.821591 3200 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:05:30.821: INFO: Waiting for client-can-connect-81-2bvp2 to complete. Jun 19 19:05:30.821: INFO: Waiting up to 3m0s for pod "client-can-connect-81-2bvp2" in namespace "e2e-network-policy-4449" to be "completed" Jun 19 19:05:30.825: INFO: Pod "client-can-connect-81-2bvp2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098212ms Jun 19 19:05:32.833: INFO: Pod "client-can-connect-81-2bvp2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011421653s Jun 19 19:05:34.832: INFO: Pod "client-can-connect-81-2bvp2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010980106s Jun 19 19:05:34.832: INFO: Pod "client-can-connect-81-2bvp2" satisfied condition "completed" Jun 19 19:05:34.832: INFO: Waiting for client-can-connect-81-2bvp2 to complete. Jun 19 19:05:34.832: INFO: Waiting up to 5m0s for pod "client-can-connect-81-2bvp2" in namespace "e2e-network-policy-4449" to be "Succeeded or Failed" Jun 19 19:05:34.840: INFO: Pod "client-can-connect-81-2bvp2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.022856ms STEP: Saw pod success 06/19/23 19:05:34.84 Jun 19 19:05:34.840: INFO: Pod "client-can-connect-81-2bvp2" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-can-connect-81-2bvp2 06/19/23 19:05:34.84 [It] should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1477 STEP: Creating client-a which should not be able to contact the server. 06/19/23 19:05:34.88 STEP: Creating client pod client-a that should not be able to connect to svc-server. 06/19/23 19:05:34.88 W0619 19:05:34.889064 3200 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:05:34.889: INFO: Waiting for client-a-28q6h to complete. Jun 19 19:05:34.889: INFO: Waiting up to 5m0s for pod "client-a-28q6h" in namespace "e2e-network-policy-4449" to be "Succeeded or Failed" Jun 19 19:05:34.894: INFO: Pod "client-a-28q6h": Phase="Pending", Reason="", readiness=false. Elapsed: 4.847177ms Jun 19 19:05:36.900: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 2.0117364s Jun 19 19:05:38.902: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 4.012939907s Jun 19 19:05:40.901: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 6.01178049s Jun 19 19:05:42.905: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 8.016069078s Jun 19 19:05:44.898: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 10.009687159s Jun 19 19:05:46.899: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 12.010111407s Jun 19 19:05:48.900: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 14.011726598s Jun 19 19:05:50.901: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 16.01188683s Jun 19 19:05:52.899: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 18.010611386s Jun 19 19:05:54.899: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 20.010645007s Jun 19 19:05:56.902: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 22.013245896s Jun 19 19:05:58.900: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 24.011710528s Jun 19 19:06:00.900: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.011054477s Jun 19 19:06:02.901: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 28.012060751s Jun 19 19:06:04.900: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 30.011005989s Jun 19 19:06:06.899: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 32.009804636s Jun 19 19:06:08.913: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 34.024231996s Jun 19 19:06:10.903: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 36.013984469s Jun 19 19:06:12.930: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 38.041773873s Jun 19 19:06:14.900: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 40.011669889s Jun 19 19:06:16.898: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 42.009765366s Jun 19 19:06:18.899: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 44.009962982s Jun 19 19:06:20.904: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=true. Elapsed: 46.014793291s Jun 19 19:06:22.901: INFO: Pod "client-a-28q6h": Phase="Running", Reason="", readiness=false. Elapsed: 48.01242687s Jun 19 19:06:24.905: INFO: Pod "client-a-28q6h": Phase="Failed", Reason="", readiness=false. Elapsed: 50.016474315s STEP: Cleaning up the pod client-a-28q6h 06/19/23 19:06:24.905 STEP: Creating client-a which should now be able to contact the server. 06/19/23 19:06:24.957 STEP: Creating client pod client-a that should successfully connect to svc-server. 06/19/23 19:06:24.957 W0619 19:06:24.982734 3200 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 19:06:24.982: INFO: Waiting for client-a-ht49w to complete. Jun 19 19:06:24.982: INFO: Waiting up to 3m0s for pod "client-a-ht49w" in namespace "e2e-network-policy-4449" to be "completed" Jun 19 19:06:25.004: INFO: Pod "client-a-ht49w": Phase="Pending", Reason="", readiness=false. Elapsed: 21.324587ms Jun 19 19:06:27.010: INFO: Pod "client-a-ht49w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027449192s Jun 19 19:06:29.012: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 4.029605492s Jun 19 19:06:31.015: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 6.03287047s Jun 19 19:06:33.009: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 8.027051619s Jun 19 19:06:35.011: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 10.028850675s Jun 19 19:06:37.012: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 12.029236534s Jun 19 19:06:39.014: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 14.031448973s Jun 19 19:06:41.021: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 16.038720426s Jun 19 19:06:43.009: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.026740291s Jun 19 19:06:45.011: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 20.028973658s Jun 19 19:06:47.013: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 22.030387626s Jun 19 19:06:49.009: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 24.026953226s Jun 19 19:06:51.015: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 26.032350118s Jun 19 19:06:53.010: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 28.027752764s Jun 19 19:06:55.011: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 30.028902366s Jun 19 19:06:57.009: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 32.027102868s Jun 19 19:06:59.011: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 34.028547686s Jun 19 19:07:01.012: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 36.029330379s Jun 19 19:07:03.013: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 38.030859958s Jun 19 19:07:05.009: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 40.026876479s Jun 19 19:07:07.010: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 42.027969786s Jun 19 19:07:09.010: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 44.027203596s Jun 19 19:07:11.012: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=true. Elapsed: 46.030052412s Jun 19 19:07:13.012: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=false. Elapsed: 48.029675291s Jun 19 19:07:15.029: INFO: Pod "client-a-ht49w": Phase="Running", Reason="", readiness=false. Elapsed: 50.04627957s Jun 19 19:07:17.008: INFO: Pod "client-a-ht49w": Phase="Failed", Reason="", readiness=false. Elapsed: 52.025974703s Jun 19 19:07:17.008: INFO: Pod "client-a-ht49w" satisfied condition "completed" Jun 19 19:07:17.008: INFO: Waiting for client-a-ht49w to complete. Jun 19 19:07:17.008: INFO: Waiting up to 5m0s for pod "client-a-ht49w" in namespace "e2e-network-policy-4449" to be "Succeeded or Failed" Jun 19 19:07:17.013: INFO: Pod "client-a-ht49w": Phase="Failed", Reason="", readiness=false. 
Elapsed: 4.153192ms Jun 19 19:07:17.019: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-4449 describe po client-a-ht49w' Jun 19 19:07:17.168: INFO: stderr: "" Jun 19 19:07:17.168: INFO: stdout: "Name: client-a-ht49w\nNamespace: e2e-network-policy-4449\nPriority: 0\nService Account: default\nNode: worker03/192.168.200.33\nStart Time: Mon, 19 Jun 2023 19:06:25 +0000\nLabels: pod-name=client-a\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::58b\",\n \"10.128.11.185\"\n ],\n \"mac\": \"9a:c6:e7:9d:90:04\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::58b\",\n \"10.128.11.185\"\n ],\n \"mac\": \"9a:c6:e7:9d:90:04\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Failed\nIP: 10.128.11.185\nIPs:\n IP: 10.128.11.185\n IP: fd00::58b\nContainers:\n client:\n Container ID: cri-o://69bb498cbd188bb71945f490137b001b47bd47cabecc766faee31aa927ee5575\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: \n Host Port: \n Command:\n /bin/sh\n Args:\n -c\n for i in $(seq 1 5); do /agnhost connect 172.30.119.71:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1\n State: Terminated\n Reason: Error\n Exit Code: 1\n Started: Mon, 19 Jun 2023 19:06:26 +0000\n Finished: Mon, 19 Jun 2023 19:07:11 +0000\n Ready: False\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vwwwk (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n kube-api-access-vwwwk:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 52s default-scheduler Successfully assigned e2e-network-policy-4449/client-a-ht49w to worker03 by cp01\n Normal AddedInterface 51s multus Add eth0 [fd00::58b/128 10.128.11.185/32] from cilium\n Normal Pulled 51s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 51s kubelet Created container client\n Normal Started 51s kubelet Started container client\n" Jun 19 19:07:17.168: INFO: Output of kubectl describe client-a-ht49w: Name: client-a-ht49w Namespace: e2e-network-policy-4449 Priority: 0 Service Account: default Node: worker03/192.168.200.33 Start Time: Mon, 19 Jun 2023 19:06:25 +0000 Labels: pod-name=client-a Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::58b", "10.128.11.185" ], "mac": "9a:c6:e7:9d:90:04", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::58b", "10.128.11.185" ], "mac": 
"9a:c6:e7:9d:90:04", "default": true, "dns": {} }] Status: Failed IP: 10.128.11.185 IPs: IP: 10.128.11.185 IP: fd00::58b Containers: client: Container ID: cri-o://69bb498cbd188bb71945f490137b001b47bd47cabecc766faee31aa927ee5575 Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: Host Port: Command: /bin/sh Args: -c for i in $(seq 1 5); do /agnhost connect 172.30.119.71:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1 State: Terminated Reason: Error Exit Code: 1 Started: Mon, 19 Jun 2023 19:06:26 +0000 Finished: Mon, 19 Jun 2023 19:07:11 +0000 Ready: False Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vwwwk (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-vwwwk: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 52s default-scheduler Successfully assigned e2e-network-policy-4449/client-a-ht49w to worker03 by cp01 Normal AddedInterface 51s multus Add eth0 [fd00::58b/128 10.128.11.185/32] from cilium Normal Pulled 51s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 51s kubelet Created container client Normal Started 51s kubelet Started container client Jun 19 19:07:17.168: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-4449 logs client-a-ht49w --tail=100' Jun 19 19:07:17.322: INFO: stderr: "" Jun 19 19:07:17.322: INFO: stdout: "TIMEOUT\nTIMEOUT\nTIMEOUT\nTIMEOUT\nTIMEOUT\n" Jun 19 19:07:17.322: INFO: Last 100 log lines of client-a-ht49w: TIMEOUT TIMEOUT TIMEOUT TIMEOUT TIMEOUT Jun 19 19:07:17.322: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-4449 describe po server-lpf2s' Jun 19 19:07:17.490: INFO: stderr: "" Jun 19 19:07:17.490: INFO: stdout: "Name: server-lpf2s\nNamespace: e2e-network-policy-4449\nPriority: 0\nService Account: default\nNode: worker01/192.168.200.31\nStart Time: Mon, 19 Jun 2023 19:05:22 +0000\nLabels: pod-name=server\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::3d3\",\n \"10.128.6.214\"\n ],\n \"mac\": \"ce:67:23:da:c3:d4\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::3d3\",\n \"10.128.6.214\"\n ],\n \"mac\": \"ce:67:23:da:c3:d4\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Running\nIP: 10.128.6.214\nIPs:\n IP: 10.128.6.214\n IP: fd00::3d3\nContainers:\n server-container-80:\n Container ID: cri-o://13564fbeb8cbbe13683a785b8bb7fb13dc4906a4f8b3def4003947783073b76f\n Image: 
quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 80/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Mon, 19 Jun 2023 19:05:23 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_80: foo\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hvq8q (ro)\n server-container-81:\n Container ID: cri-o://b7191a90e25760b96574a584433b2f79fbfe1c9a15f05f960df455fffe13115f\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 81/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Mon, 19 Jun 2023 19:05:24 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_81: foo\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hvq8q (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-hvq8q:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 114s default-scheduler Successfully assigned e2e-network-policy-4449/server-lpf2s to worker01 by cp01\n Normal AddedInterface 114s multus Add eth0 [fd00::3d3/128 10.128.6.214/32] from cilium\n Normal Pulled 114s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 114s kubelet Created container server-container-80\n Normal Started 114s kubelet Started container server-container-80\n Normal Pulled 114s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 113s kubelet Created container server-container-81\n Normal Started 113s kubelet Started container server-container-81\n" Jun 19 19:07:17.490: INFO: Output of kubectl describe server-lpf2s: Name: server-lpf2s Namespace: e2e-network-policy-4449 Priority: 0 Service Account: default Node: worker01/192.168.200.31 Start Time: Mon, 19 Jun 2023 19:05:22 +0000 Labels: pod-name=server Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::3d3", "10.128.6.214" ], "mac": "ce:67:23:da:c3:d4", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::3d3", "10.128.6.214" ], "mac": "ce:67:23:da:c3:d4", "default": true, "dns": {} }] Status: Running IP: 10.128.6.214 IPs: IP: 10.128.6.214 IP: 
fd00::3d3 Containers: server-container-80: Container ID: cri-o://13564fbeb8cbbe13683a785b8bb7fb13dc4906a4f8b3def4003947783073b76f Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 80/TCP Host Port: 0/TCP Args: porter State: Running Started: Mon, 19 Jun 2023 19:05:23 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: SERVE_PORT_80: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hvq8q (ro) server-container-81: Container ID: cri-o://b7191a90e25760b96574a584433b2f79fbfe1c9a15f05f960df455fffe13115f Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 81/TCP Host Port: 0/TCP Args: porter State: Running Started: Mon, 19 Jun 2023 19:05:24 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: SERVE_PORT_81: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hvq8q (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-hvq8q: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 114s default-scheduler Successfully assigned e2e-network-policy-4449/server-lpf2s to worker01 by cp01 Normal AddedInterface 114s multus Add eth0 [fd00::3d3/128 10.128.6.214/32] from cilium Normal Pulled 114s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 114s kubelet Created container server-container-80 Normal Started 114s kubelet Started container server-container-80 Normal Pulled 114s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 113s kubelet Created container server-container-81 Normal Started 113s kubelet Started container server-container-81 Jun 19 19:07:17.490: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-4449 logs server-lpf2s --tail=100' Jun 19 19:07:17.634: INFO: stderr: "Defaulted container \"server-container-80\" out of: server-container-80, server-container-81\n" Jun 19 19:07:17.634: INFO: stdout: "" Jun 19 19:07:17.634: INFO: Last 100 log lines of server-lpf2s: Jun 19 19:07:17.662: FAIL: Pod client-a-ht49w should be able to connect to service svc-server, but was not able to connect. 
Pod logs: TIMEOUT TIMEOUT TIMEOUT TIMEOUT TIMEOUT Current NetworkPolicies: [{{ } {allow-client-a-via-cidr-egress-rule e2e-network-policy-4449 a218f00e-33b8-4c09-8ecb-1e7a5e672f71 92886 1 2023-06-19 19:06:24 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-19 19:06:24 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.6.214/32,Except:[],}}]}] [Egress]} {[]}} {{ } {deny-client-a-via-except-cidr-egress-rule e2e-network-policy-4449 7276d8d1-2a7f-4e82-b8a9-23ca1abfa42a 91091 1 2023-06-19 19:05:34 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-19 19:05:34 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.6.0/24,Except:[10.128.6.214/32],}}]}] [Egress]} {[]}}] Pods: [Pod: client-a-ht49w, Status: &PodStatus{Phase:Failed,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:06:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:07:12 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:07:12 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:06:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.11.185,StartTime:2023-06-19 19:06:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-06-19 19:06:26 +0000 UTC,FinishedAt:2023-06-19 19:07:11 +0000 UTC,ContainerID:cri-o://69bb498cbd188bb71945f490137b001b47bd47cabecc766faee31aa927ee5575,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://69bb498cbd188bb71945f490137b001b47bd47cabecc766faee31aa927ee5575,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.11.185,},PodIP{IP:fd00::58b,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: server-lpf2s, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.31,PodIP:10.128.6.214,StartTime:2023-06-19 19:05:22 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 19:05:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://13564fbeb8cbbe13683a785b8bb7fb13dc4906a4f8b3def4003947783073b76f,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 19:05:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://b7191a90e25760b96574a584433b2f79fbfe1c9a15f05f960df455fffe13115f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.6.214,},PodIP{IP:fd00::3d3,},},EphemeralContainerStatuses:[]ContainerStatus{},} ] Full Stack Trace k8s.io/kubernetes/test/e2e/network/netpol.checkConnectivity(0xc001f2eb40, 0xc000db6580, 0xc001be3200, 0xc007c51180) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941 +0x355 k8s.io/kubernetes/test/e2e/network/netpol.testCanConnectProtocol(0xc001f2eb40, 0xc000db6580, {0x8a33123, 0x8}, 0xc007c51180, 0xc000331a10?, {0x8a24aec, 0x3}) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1913 +0x1be k8s.io/kubernetes/test/e2e/network/netpol.testCanConnect(...) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1897 k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.29.2() k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1569 +0x47 github.com/onsi/ginkgo/v2.By({0x8c200aa, 0x41}, {0xc007bf3e50, 0x1, 0x0?}) github.com/onsi/ginkgo/v2@v2.4.0/core_dsl.go:535 +0x525 k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.29() k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1568 +0xb5b github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2e8b77e, 0xc001ae1980}) github.com/onsi/ginkgo/v2@v2.4.0/internal/node.go:449 +0x1b github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:757 +0x98 created by github.com/onsi/ginkgo/v2/internal.(*Suite).runNode github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:745 +0xe3d STEP: Cleaning up the pod client-a-ht49w 06/19/23 19:07:17.662 STEP: Cleaning up the policy. 06/19/23 19:07:17.685 [AfterEach] NetworkPolicy between server and client k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:96 STEP: Cleaning up the server. 06/19/23 19:07:17.694 STEP: Cleaning up the server's service. 
06/19/23 19:07:17.708 [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] dump namespaces | framework.go:196 STEP: dump namespace information after failure 06/19/23 19:07:17.767 STEP: Collecting events from namespace "e2e-network-policy-4449". 06/19/23 19:07:17.767 STEP: Found 30 events. 06/19/23 19:07:17.774 Jun 19 19:07:17.774: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-28q6h: { } Scheduled: Successfully assigned e2e-network-policy-4449/client-a-28q6h to worker02 by cp01 Jun 19 19:07:17.774: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-ht49w: { } Scheduled: Successfully assigned e2e-network-policy-4449/client-a-ht49w to worker03 by cp01 Jun 19 19:07:17.774: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-jfp4d: { } Scheduled: Successfully assigned e2e-network-policy-4449/client-can-connect-80-jfp4d to worker02 by cp01 Jun 19 19:07:17.774: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-2bvp2: { } Scheduled: Successfully assigned e2e-network-policy-4449/client-can-connect-81-2bvp2 to worker02 by cp01 Jun 19 19:07:17.774: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-lpf2s: { } Scheduled: Successfully assigned e2e-network-policy-4449/server-lpf2s to worker01 by cp01 Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:23 +0000 UTC - event for server-lpf2s: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:23 +0000 UTC - event for server-lpf2s: {kubelet worker01} Started: Started container server-container-80 Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:23 +0000 UTC - event for server-lpf2s: {kubelet worker01} Created: Created container server-container-80 Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:23 +0000 UTC - event for server-lpf2s: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:23 +0000 UTC - event for server-lpf2s: {multus } AddedInterface: Add eth0 [fd00::3d3/128 10.128.6.214/32] from cilium Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:24 +0000 UTC - event for server-lpf2s: {kubelet worker01} Started: Started container server-container-81 Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:24 +0000 UTC - event for server-lpf2s: {kubelet worker01} Created: Created container server-container-81 Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:27 +0000 UTC - event for client-can-connect-80-jfp4d: {kubelet worker02} Started: Started container client Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:27 +0000 UTC - event for client-can-connect-80-jfp4d: {kubelet worker02} Created: Created container client Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:27 +0000 UTC - event for client-can-connect-80-jfp4d: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:27 +0000 UTC - event for client-can-connect-80-jfp4d: {multus } AddedInterface: Add eth0 [fd00::4cf/128 10.128.9.152/32] from cilium Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:31 
+0000 UTC - event for client-can-connect-81-2bvp2: {multus } AddedInterface: Add eth0 [fd00::4c5/128 10.128.9.133/32] from cilium Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:31 +0000 UTC - event for client-can-connect-81-2bvp2: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:31 +0000 UTC - event for client-can-connect-81-2bvp2: {kubelet worker02} Created: Created container client Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:31 +0000 UTC - event for client-can-connect-81-2bvp2: {kubelet worker02} Started: Started container client Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:35 +0000 UTC - event for client-a-28q6h: {multus } AddedInterface: Add eth0 [fd00::40d/128 10.128.8.149/32] from cilium Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:35 +0000 UTC - event for client-a-28q6h: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:36 +0000 UTC - event for client-a-28q6h: {kubelet worker02} Created: Created container client Jun 19 19:07:17.774: INFO: At 2023-06-19 19:05:36 +0000 UTC - event for client-a-28q6h: {kubelet worker02} Started: Started container client Jun 19 19:07:17.774: INFO: At 2023-06-19 19:06:26 +0000 UTC - event for client-a-ht49w: {multus } AddedInterface: Add eth0 [fd00::58b/128 10.128.11.185/32] from cilium Jun 19 19:07:17.774: INFO: At 2023-06-19 19:06:26 +0000 UTC - event for client-a-ht49w: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 19:07:17.774: INFO: At 2023-06-19 19:06:26 +0000 UTC - event for client-a-ht49w: {kubelet worker03} Created: Created container client Jun 19 19:07:17.774: INFO: At 2023-06-19 19:06:26 +0000 UTC - event for client-a-ht49w: {kubelet worker03} Started: Started container client Jun 19 19:07:17.774: INFO: At 2023-06-19 19:07:17 +0000 UTC - event for server-lpf2s: {kubelet worker01} Killing: Stopping container server-container-80 Jun 19 19:07:17.774: INFO: At 2023-06-19 19:07:17 +0000 UTC - event for server-lpf2s: {kubelet worker01} Killing: Stopping container server-container-81 Jun 19 19:07:17.780: INFO: POD NODE PHASE GRACE CONDITIONS Jun 19 19:07:17.780: INFO: server-lpf2s worker01 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 19:05:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 19:05:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 19:05:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 19:05:22 +0000 UTC }] Jun 19 19:07:17.780: INFO: Jun 19 19:07:17.795: INFO: skipping dumping cluster info - cluster too large [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] tear down framework | framework.go:193 STEP: Destroying namespace "e2e-network-policy-4449" for this suite. 06/19/23 19:07:17.795 fail [k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941]: Jun 19 19:07:17.662: Pod client-a-ht49w should be able to connect to service svc-server, but was not able to connect. 
Pod logs: TIMEOUT TIMEOUT TIMEOUT TIMEOUT TIMEOUT Current NetworkPolicies: [{{ } {allow-client-a-via-cidr-egress-rule e2e-network-policy-4449 a218f00e-33b8-4c09-8ecb-1e7a5e672f71 92886 1 2023-06-19 19:06:24 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-19 19:06:24 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.6.214/32,Except:[],}}]}] [Egress]} {[]}} {{ } {deny-client-a-via-except-cidr-egress-rule e2e-network-policy-4449 7276d8d1-2a7f-4e82-b8a9-23ca1abfa42a 91091 1 2023-06-19 19:05:34 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-19 19:05:34 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.6.0/24,Except:[10.128.6.214/32],}}]}] [Egress]} {[]}}] Pods: [Pod: client-a-ht49w, Status: &PodStatus{Phase:Failed,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:06:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:07:12 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:07:12 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:06:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.11.185,StartTime:2023-06-19 19:06:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-06-19 19:06:26 +0000 UTC,FinishedAt:2023-06-19 19:07:11 +0000 UTC,ContainerID:cri-o://69bb498cbd188bb71945f490137b001b47bd47cabecc766faee31aa927ee5575,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://69bb498cbd188bb71945f490137b001b47bd47cabecc766faee31aa927ee5575,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.11.185,},PodIP{IP:fd00::58b,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: server-lpf2s, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 19:05:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.31,PodIP:10.128.6.214,StartTime:2023-06-19 19:05:22 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 19:05:23 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://13564fbeb8cbbe13683a785b8bb7fb13dc4906a4f8b3def4003947783073b76f,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 19:05:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://b7191a90e25760b96574a584433b2f79fbfe1c9a15f05f960df455fffe13115f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.6.214,},PodIP{IP:fd00::3d3,},},EphemeralContainerStatuses:[]ContainerStatus{},} ] Ginkgo exit error 1: exit with code 1 failed: (1m56s) 2023-06-19T19:07:17 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2m2s) 2023-06-19T19:07:18 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2m0s) 2023-06-19T19:07:39 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1m24s) 2023-06-19T19:07:59 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 3/67/67 "[sig-network] Service endpoints latency should not be very high [Conformance] [Serial] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]" passed: (11.4s) 2023-06-19T19:08:10 "[sig-network] Service endpoints latency should not be very high [Conformance] [Serial] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]" Failing tests: [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy 
between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s] [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s] error: 3 fail, 64 pass, 0 skip (6m7s) ```
qmonnet commented 1 year ago

I'm force-pushing to address the conflict on Makefile.releases. I re-generated the commit from scratch by running scripts/add-release.sh $RELEASE again, and verified that it is identical to the previous iteration, apart from the resolved conflict with the 1.13 and 1.12 targets in Makefile.releases.

$ git diff fork/pr/v1.11.18 -- {bundles,operator,manifests}/cilium.v1.11.18 Makefile.releases
diff --git a/Makefile.releases b/Makefile.releases
index 3ce2501f72d9..008b91a0e677 100644
--- a/Makefile.releases
+++ b/Makefile.releases
@@ -453,6 +453,26 @@ generate.configs.all: generate.configs.v1.13.3
 images.operator.v1.13.3 generate.configs.v1.13.3: cilium_version=1.13.3

+# Cilium v1.13.4
+
+images.all: images.operator.v1.13.4
+
+images.operator.all: images.operator.v1.13.4 generate.configs.v1.13.4
+generate.configs.all: generate.configs.v1.13.4
+
+images.operator.v1.13.4 generate.configs.v1.13.4: cilium_version=1.13.4
+
+
+# Cilium v1.12.11
+
+images.all: images.operator.v1.12.11
+
+images.operator.all: images.operator.v1.12.11 generate.configs.v1.12.11
+generate.configs.all: generate.configs.v1.12.11
+
+images.operator.v1.12.11 generate.configs.v1.12.11: cilium_version=1.12.11
+
+
 # Cilium v1.11.18

 images.all: images.operator.v1.11.18
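
For reviewers: the diff context above cuts off after the first lines of the v1.11.18 stanza, but each release entry produced by scripts/add-release.sh follows the same pattern as the v1.13.4 and v1.12.11 hunks. Assuming the new entry mirrors them, the full v1.11.18 stanza in Makefile.releases should look roughly like the sketch below (inferred from the pattern above, not copied from the generated file):

```make
# Cilium v1.11.18

# Hook the new release into the aggregate targets (pattern inferred from the
# v1.13.4 / v1.12.11 stanzas above; this is a sketch, not the generated file).
images.all: images.operator.v1.11.18

images.operator.all: images.operator.v1.11.18 generate.configs.v1.11.18
generate.configs.all: generate.configs.v1.11.18

# Target-specific variable pinning the Cilium version for this release's targets.
images.operator.v1.11.18 generate.configs.v1.11.18: cilium_version=1.11.18
```

If the regenerated commit follows this pattern, the only difference from the previous iteration should indeed be the placement relative to the new 1.13 and 1.12 entries.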
qmonnet commented 1 year ago

All tests still passing as above.

results.txt ``` started: 0/1/67 "[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/2/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/3/67 "[sig-network] Services should complete a service status lifecycle [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/4/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/5/67 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/6/67 "[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/7/67 "[sig-network] Services should find a service from listing all namespaces [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/8/67 "[sig-network] NetworkPolicy API should support creating NetworkPolicy API operations [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/9/67 "[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/10/67 "[sig-network] Ingress API should support creating Ingress API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (2.1s) 2023-06-20T09:42:14 "[sig-network] Services should complete a service status lifecycle [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/11/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2.2s) 2023-06-20T09:42:14 "[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/12/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2.3s) 2023-06-20T09:42:14 "[sig-network] Ingress API should support creating Ingress API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/13/67 "[sig-network] DNS should provide DNS for ExternalName services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (2.3s) 2023-06-20T09:42:14 "[sig-network] NetworkPolicy API should support creating NetworkPolicy API operations [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/14/67 
"[sig-network] IngressClass API should support creating IngressClass API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (2.9s) 2023-06-20T09:42:15 "[sig-network] Services should find a service from listing all namespaces [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/15/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1.9s) 2023-06-20T09:42:16 "[sig-network] IngressClass API should support creating IngressClass API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/16/67 "[sig-network] Services should provide secure master service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1.2s) 2023-06-20T09:42:17 "[sig-network] Services should provide secure master service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/17/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (17.6s) 2023-06-20T09:42:30 "[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/18/67 "[sig-network] Services should test the lifecycle of an Endpoint [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1.3s) 2023-06-20T09:42:31 "[sig-network] Services should test the lifecycle of an Endpoint [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/19/67 "[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (36.4s) 2023-06-20T09:42:48 "[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/20/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (38.5s) 2023-06-20T09:42:51 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/21/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (42.2s) 2023-06-20T09:42:56 "[sig-network] DNS should provide DNS for ExternalName services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/22/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy 
[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (50.5s) 2023-06-20T09:43:21 "[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/23/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1m31s) 2023-06-20T09:43:43 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/24/67 "[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] [Serial:Self] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1m34s) 2023-06-20T09:43:48 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/25/67 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1m36s) 2023-06-20T09:43:50 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/26/67 "[sig-network] Services should serve a basic endpoint from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1m35s) 2023-06-20T09:43:50 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/27/67 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (3.7s) 2023-06-20T09:43:54 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/28/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1m37s) 2023-06-20T09:43:54 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] 
[Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/29/67 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (5.6s) 2023-06-20T09:44:00 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/30/67 "[sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (18.8s) 2023-06-20T09:44:01 "[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] [Serial:Self] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/31/67 "[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1.4s) 2023-06-20T09:44:03 "[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/32/67 "[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (15.2s) 2023-06-20T09:44:05 "[sig-network] Services should serve a basic endpoint from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/33/67 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (8.6s) 2023-06-20T09:44:11 "[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/34/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (25.9s) 2023-06-20T09:44:14 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/35/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (15.5s) 2023-06-20T09:44:16 "[sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/36/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (25.8s) 2023-06-20T09:44:31 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/37/67 "[sig-network] 
NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" Jun 20 09:44:12.060: INFO: Enabling in-tree volume drivers [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/test.go:58 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] set up framework | framework.go:178 STEP: Creating a kubernetes client 06/20/23 09:44:12.881 STEP: Building a namespace api object, basename network-policy 06/20/23 09:44:12.883 Jun 20 09:44:12.942: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace 06/20/23 09:44:13.107 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/20/23 09:44:13.111 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:72 [BeforeEach] NetworkPolicy between server and client k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:78 STEP: Creating a simple server that serves on port 80 and 81. 06/20/23 09:44:13.115 STEP: Creating a server pod server in namespace e2e-network-policy-9921 06/20/23 09:44:13.115 W0620 09:44:13.140812 2033 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "server-container-80", "server-container-81" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "server-container-80", "server-container-81" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "server-container-80", "server-container-81" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "server-container-80", "server-container-81" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:44:13.141: INFO: Created pod server-427dh STEP: Creating a service svc-server for pod server in namespace e2e-network-policy-9921 06/20/23 09:44:13.141 Jun 20 09:44:13.185: INFO: Created service svc-server STEP: Waiting for pod ready 06/20/23 09:44:13.185 Jun 20 09:44:13.185: INFO: Waiting up to 5m0s for pod "server-427dh" in namespace "e2e-network-policy-9921" to be "running and ready" Jun 20 09:44:13.192: INFO: Pod "server-427dh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.180472ms Jun 20 09:44:13.192: INFO: The phase of Pod server-427dh is Pending, waiting for it to be Running (with Ready = true) Jun 20 09:44:15.199: INFO: Pod "server-427dh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014095342s Jun 20 09:44:15.199: INFO: The phase of Pod server-427dh is Pending, waiting for it to be Running (with Ready = true) Jun 20 09:44:17.213: INFO: Pod "server-427dh": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.027187366s Jun 20 09:44:17.213: INFO: The phase of Pod server-427dh is Running (Ready = true) Jun 20 09:44:17.213: INFO: Pod "server-427dh" satisfied condition "running and ready" STEP: Testing pods can connect to both ports when no policy is present. 06/20/23 09:44:17.213 STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. 06/20/23 09:44:17.213 W0620 09:44:17.227368 2033 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:44:17.227: INFO: Waiting for client-can-connect-80-2785c to complete. Jun 20 09:44:17.227: INFO: Waiting up to 3m0s for pod "client-can-connect-80-2785c" in namespace "e2e-network-policy-9921" to be "completed" Jun 20 09:44:17.239: INFO: Pod "client-can-connect-80-2785c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.195459ms Jun 20 09:44:19.252: INFO: Pod "client-can-connect-80-2785c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02526747s Jun 20 09:44:21.255: INFO: Pod "client-can-connect-80-2785c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027708008s Jun 20 09:44:21.255: INFO: Pod "client-can-connect-80-2785c" satisfied condition "completed" Jun 20 09:44:21.255: INFO: Waiting for client-can-connect-80-2785c to complete. Jun 20 09:44:21.255: INFO: Waiting up to 5m0s for pod "client-can-connect-80-2785c" in namespace "e2e-network-policy-9921" to be "Succeeded or Failed" Jun 20 09:44:21.267: INFO: Pod "client-can-connect-80-2785c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.451931ms STEP: Saw pod success 06/20/23 09:44:21.267 Jun 20 09:44:21.267: INFO: Pod "client-can-connect-80-2785c" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-can-connect-80-2785c 06/20/23 09:44:21.267 STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. 06/20/23 09:44:21.301 W0620 09:44:21.311197 2033 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:44:21.311: INFO: Waiting for client-can-connect-81-88f8h to complete. Jun 20 09:44:21.311: INFO: Waiting up to 3m0s for pod "client-can-connect-81-88f8h" in namespace "e2e-network-policy-9921" to be "completed" Jun 20 09:44:21.473: INFO: Pod "client-can-connect-81-88f8h": Phase="Pending", Reason="", readiness=false. Elapsed: 162.431034ms Jun 20 09:44:23.480: INFO: Pod "client-can-connect-81-88f8h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16932466s Jun 20 09:44:25.485: INFO: Pod "client-can-connect-81-88f8h": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.174021818s Jun 20 09:44:25.485: INFO: Pod "client-can-connect-81-88f8h" satisfied condition "completed" Jun 20 09:44:25.485: INFO: Waiting for client-can-connect-81-88f8h to complete. Jun 20 09:44:25.485: INFO: Waiting up to 5m0s for pod "client-can-connect-81-88f8h" in namespace "e2e-network-policy-9921" to be "Succeeded or Failed" Jun 20 09:44:25.490: INFO: Pod "client-can-connect-81-88f8h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.867525ms STEP: Saw pod success 06/20/23 09:44:25.49 Jun 20 09:44:25.490: INFO: Pod "client-can-connect-81-88f8h" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-can-connect-81-88f8h 06/20/23 09:44:25.49 [It] should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1689 STEP: getting the state of the sctp module on nodes 06/20/23 09:44:25.513 Jun 20 09:44:25.521: INFO: Executing cmd "lsmod | grep sctp" on node worker01 W0620 09:44:25.532748 2033 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), privileged (container "agnhost-container" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "agnhost-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "agnhost-container" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "rootfs" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "agnhost-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "agnhost-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:44:25.532: INFO: Waiting up to 5m0s for pod "hostexec-worker01-nzrhl" in namespace "e2e-network-policy-9921" to be "running" Jun 20 09:44:25.539: INFO: Pod "hostexec-worker01-nzrhl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.269556ms Jun 20 09:44:27.547: INFO: Pod "hostexec-worker01-nzrhl": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.014478339s Jun 20 09:44:27.547: INFO: Pod "hostexec-worker01-nzrhl" satisfied condition "running" Jun 20 09:44:27.547: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c lsmod | grep sctp] Namespace:e2e-network-policy-9921 PodName:hostexec-worker01-nzrhl ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 20 09:44:27.548: INFO: ExecWithOptions: Clientset creation Jun 20 09:44:27.548: INFO: ExecWithOptions: execute(POST https://api.ocp1.k8s.work:6443/api/v1/namespaces/e2e-network-policy-9921/pods/hostexec-worker01-nzrhl/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jun 20 09:44:27.683: INFO: exec worker01: command: lsmod | grep sctp Jun 20 09:44:27.683: INFO: exec worker01: stdout: "" Jun 20 09:44:27.683: INFO: exec worker01: stderr: "" Jun 20 09:44:27.683: INFO: exec worker01: exit code: 0 Jun 20 09:44:27.683: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1 Jun 20 09:44:27.683: INFO: the sctp module is not loaded on node: worker01 Jun 20 09:44:27.683: INFO: Executing cmd "lsmod | grep sctp" on node worker02 W0620 09:44:27.696955 2033 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), privileged (container "agnhost-container" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "agnhost-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "agnhost-container" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "rootfs" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "agnhost-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "agnhost-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:44:27.697: INFO: Waiting up to 5m0s for pod "hostexec-worker02-5m8xj" in namespace "e2e-network-policy-9921" to be "running" Jun 20 09:44:27.702: INFO: Pod "hostexec-worker02-5m8xj": Phase="Pending", Reason="", readiness=false. Elapsed: 5.628676ms Jun 20 09:44:29.713: INFO: Pod "hostexec-worker02-5m8xj": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.016737408s Jun 20 09:44:29.713: INFO: Pod "hostexec-worker02-5m8xj" satisfied condition "running" Jun 20 09:44:29.713: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c lsmod | grep sctp] Namespace:e2e-network-policy-9921 PodName:hostexec-worker02-5m8xj ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 20 09:44:29.714: INFO: ExecWithOptions: Clientset creation Jun 20 09:44:29.714: INFO: ExecWithOptions: execute(POST https://api.ocp1.k8s.work:6443/api/v1/namespaces/e2e-network-policy-9921/pods/hostexec-worker02-5m8xj/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jun 20 09:44:29.848: INFO: exec worker02: command: lsmod | grep sctp Jun 20 09:44:29.848: INFO: exec worker02: stdout: "" Jun 20 09:44:29.848: INFO: exec worker02: stderr: "" Jun 20 09:44:29.848: INFO: exec worker02: exit code: 0 Jun 20 09:44:29.848: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1 Jun 20 09:44:29.848: INFO: the sctp module is not loaded on node: worker02 Jun 20 09:44:29.848: INFO: Executing cmd "lsmod | grep sctp" on node worker03 W0620 09:44:29.862832 2033 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), privileged (container "agnhost-container" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "agnhost-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "agnhost-container" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "rootfs" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "agnhost-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "agnhost-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:44:29.862: INFO: Waiting up to 5m0s for pod "hostexec-worker03-lwgf8" in namespace "e2e-network-policy-9921" to be "running" Jun 20 09:44:29.878: INFO: Pod "hostexec-worker03-lwgf8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.75215ms Jun 20 09:44:31.885: INFO: Pod "hostexec-worker03-lwgf8": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.022193074s Jun 20 09:44:31.885: INFO: Pod "hostexec-worker03-lwgf8" satisfied condition "running" Jun 20 09:44:31.885: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c lsmod | grep sctp] Namespace:e2e-network-policy-9921 PodName:hostexec-worker03-lwgf8 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Jun 20 09:44:31.886: INFO: ExecWithOptions: Clientset creation Jun 20 09:44:31.886: INFO: ExecWithOptions: execute(POST https://api.ocp1.k8s.work:6443/api/v1/namespaces/e2e-network-policy-9921/pods/hostexec-worker03-lwgf8/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) Jun 20 09:44:31.998: INFO: exec worker03: command: lsmod | grep sctp Jun 20 09:44:31.998: INFO: exec worker03: stdout: "" Jun 20 09:44:31.998: INFO: exec worker03: stderr: "" Jun 20 09:44:31.998: INFO: exec worker03: exit code: 0 Jun 20 09:44:31.998: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1 Jun 20 09:44:31.998: INFO: the sctp module is not loaded on node: worker03 STEP: Deleting pod hostexec-worker01-nzrhl in namespace e2e-network-policy-9921 06/20/23 09:44:31.998 STEP: Deleting pod hostexec-worker02-5m8xj in namespace e2e-network-policy-9921 06/20/23 09:44:32.019 STEP: Deleting pod hostexec-worker03-lwgf8 in namespace e2e-network-policy-9921 06/20/23 09:44:32.04 STEP: Creating a network policy for the server which allows traffic only via SCTP on port 80. 06/20/23 09:44:32.063 STEP: Testing pods cannot connect on port 80 anymore when not using SCTP as protocol. 06/20/23 09:44:32.074 STEP: Creating client pod client-a that should not be able to connect to svc-server. 06/20/23 09:44:32.074 W0620 09:44:32.086478 2033 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:44:32.086: INFO: Waiting for client-a-v4c66 to complete. Jun 20 09:44:32.086: INFO: Waiting up to 5m0s for pod "client-a-v4c66" in namespace "e2e-network-policy-9921" to be "Succeeded or Failed" Jun 20 09:44:32.095: INFO: Pod "client-a-v4c66": Phase="Pending", Reason="", readiness=false. Elapsed: 8.438477ms Jun 20 09:44:34.102: INFO: Pod "client-a-v4c66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015285833s Jun 20 09:44:36.101: INFO: Pod "client-a-v4c66": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015217299s STEP: Saw pod success 06/20/23 09:44:36.101 Jun 20 09:44:36.102: INFO: Pod "client-a-v4c66" satisfied condition "Succeeded or Failed" Jun 20 09:44:36.107: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-9921 describe po client-a-v4c66' Jun 20 09:44:36.246: INFO: stderr: "" Jun 20 09:44:36.246: INFO: stdout: "Name: client-a-v4c66\nNamespace: e2e-network-policy-9921\nPriority: 0\nService Account: default\nNode: worker03/192.168.200.33\nStart Time: Tue, 20 Jun 2023 09:44:32 +0000\nLabels: pod-name=client-a\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::442\",\n \"10.128.9.61\"\n ],\n \"mac\": \"fe:cc:e8:79:fd:16\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::442\",\n \"10.128.9.61\"\n ],\n \"mac\": \"fe:cc:e8:79:fd:16\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Succeeded\nIP: 10.128.9.61\nIPs:\n IP: 10.128.9.61\n IP: fd00::442\nContainers:\n client:\n Container ID: cri-o://672cf8ec11236032854b934b83668a42fc2b3c26132820102d16b8c0e5466ebf\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: \n Host Port: \n Command:\n /bin/sh\n Args:\n -c\n for i in $(seq 1 5); do /agnhost connect 172.30.137.72:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1\n State: Terminated\n Reason: Completed\n Exit Code: 0\n Started: Tue, 20 Jun 2023 09:44:33 +0000\n Finished: Tue, 20 Jun 2023 09:44:33 +0000\n Ready: False\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ccrbl (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n kube-api-access-ccrbl:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned e2e-network-policy-9921/client-a-v4c66 to worker03 by cp02\n Normal AddedInterface 3s multus Add eth0 [fd00::442/128 10.128.9.61/32] from cilium\n Normal Pulled 3s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 3s kubelet Created container client\n Normal Started 3s kubelet Started container client\n" Jun 20 09:44:36.246: INFO: Output of kubectl describe client-a-v4c66: Name: client-a-v4c66 Namespace: e2e-network-policy-9921 Priority: 0 Service Account: default Node: worker03/192.168.200.33 Start Time: Tue, 20 Jun 2023 09:44:32 +0000 Labels: pod-name=client-a Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::442", "10.128.9.61" ], "mac": "fe:cc:e8:79:fd:16", "default": true, "dns": {} }] 
k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::442", "10.128.9.61" ], "mac": "fe:cc:e8:79:fd:16", "default": true, "dns": {} }] Status: Succeeded IP: 10.128.9.61 IPs: IP: 10.128.9.61 IP: fd00::442 Containers: client: Container ID: cri-o://672cf8ec11236032854b934b83668a42fc2b3c26132820102d16b8c0e5466ebf Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: Host Port: Command: /bin/sh Args: -c for i in $(seq 1 5); do /agnhost connect 172.30.137.72:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1 State: Terminated Reason: Completed Exit Code: 0 Started: Tue, 20 Jun 2023 09:44:33 +0000 Finished: Tue, 20 Jun 2023 09:44:33 +0000 Ready: False Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ccrbl (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-ccrbl: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4s default-scheduler Successfully assigned e2e-network-policy-9921/client-a-v4c66 to worker03 by cp02 Normal AddedInterface 3s multus Add eth0 [fd00::442/128 10.128.9.61/32] from cilium Normal Pulled 3s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 3s kubelet Created container client Normal Started 3s kubelet Started container client Jun 20 09:44:36.246: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-9921 logs client-a-v4c66 --tail=100' Jun 20 09:44:36.388: INFO: stderr: "" Jun 20 09:44:36.388: INFO: stdout: "" Jun 20 09:44:36.388: INFO: Last 100 log lines of client-a-v4c66: Jun 20 09:44:36.388: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-9921 describe po server-427dh' Jun 20 09:44:36.530: INFO: stderr: "" Jun 20 09:44:36.530: INFO: stdout: "Name: server-427dh\nNamespace: e2e-network-policy-9921\nPriority: 0\nService Account: default\nNode: worker01/192.168.200.31\nStart Time: Tue, 20 Jun 2023 09:44:13 +0000\nLabels: pod-name=server\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::3ae\",\n \"10.128.7.117\"\n ],\n \"mac\": \"7e:d1:c0:05:97:c8\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::3ae\",\n \"10.128.7.117\"\n ],\n \"mac\": \"7e:d1:c0:05:97:c8\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Running\nIP: 10.128.7.117\nIPs:\n IP: 10.128.7.117\n IP: fd00::3ae\nContainers:\n server-container-80:\n Container ID: cri-o://ce120f212e834cd9327bd182865cbc1c38dde4eed43a17e7dbba8925afc66fbe\n Image: 
quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 80/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Tue, 20 Jun 2023 09:44:14 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_80: foo\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p9tbn (ro)\n server-container-81:\n Container ID: cri-o://bfadc35ac1ec6a0fb78ba4fccd237f00e5e0c7f05a19675906c4d74dd7489e6f\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 81/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Tue, 20 Jun 2023 09:44:14 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_81: foo\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p9tbn (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-p9tbn:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 23s default-scheduler Successfully assigned e2e-network-policy-9921/server-427dh to worker01 by cp02\n Normal AddedInterface 22s multus Add eth0 [fd00::3ae/128 10.128.7.117/32] from cilium\n Normal Pulled 22s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 22s kubelet Created container server-container-80\n Normal Started 22s kubelet Started container server-container-80\n Normal Pulled 22s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 22s kubelet Created container server-container-81\n Normal Started 22s kubelet Started container server-container-81\n" Jun 20 09:44:36.530: INFO: Output of kubectl describe server-427dh: Name: server-427dh Namespace: e2e-network-policy-9921 Priority: 0 Service Account: default Node: worker01/192.168.200.31 Start Time: Tue, 20 Jun 2023 09:44:13 +0000 Labels: pod-name=server Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::3ae", "10.128.7.117" ], "mac": "7e:d1:c0:05:97:c8", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::3ae", "10.128.7.117" ], "mac": "7e:d1:c0:05:97:c8", "default": true, "dns": {} }] Status: Running IP: 10.128.7.117 IPs: IP: 10.128.7.117 IP: fd00::3ae 
Containers: server-container-80: Container ID: cri-o://ce120f212e834cd9327bd182865cbc1c38dde4eed43a17e7dbba8925afc66fbe Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 80/TCP Host Port: 0/TCP Args: porter State: Running Started: Tue, 20 Jun 2023 09:44:14 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: SERVE_PORT_80: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p9tbn (ro) server-container-81: Container ID: cri-o://bfadc35ac1ec6a0fb78ba4fccd237f00e5e0c7f05a19675906c4d74dd7489e6f Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 81/TCP Host Port: 0/TCP Args: porter State: Running Started: Tue, 20 Jun 2023 09:44:14 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: SERVE_PORT_81: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p9tbn (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-p9tbn: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 23s default-scheduler Successfully assigned e2e-network-policy-9921/server-427dh to worker01 by cp02 Normal AddedInterface 22s multus Add eth0 [fd00::3ae/128 10.128.7.117/32] from cilium Normal Pulled 22s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 22s kubelet Created container server-container-80 Normal Started 22s kubelet Started container server-container-80 Normal Pulled 22s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 22s kubelet Created container server-container-81 Normal Started 22s kubelet Started container server-container-81 Jun 20 09:44:36.530: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-9921 logs server-427dh --tail=100' Jun 20 09:44:36.685: INFO: stderr: "Defaulted container \"server-container-80\" out of: server-container-80, server-container-81\n" Jun 20 09:44:36.685: INFO: stdout: "" Jun 20 09:44:36.685: INFO: Last 100 log lines of server-427dh: Jun 20 09:44:36.706: FAIL: Pod client-a-v4c66 should not be able to connect to service svc-server, but was able to connect. 
Pod logs: Current NetworkPolicies: [{{ } {allow-only-sctp-ingress-on-port-80 e2e-network-policy-9921 37430590-57f0-4b98-b85a-4315848cde4a 42379 1 2023-06-20 09:44:32 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-20 09:44:32 +0000 UTC FieldsV1 {"f:spec":{"f:ingress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:server] []} [{[{0xc0023843a0 80 }] []}] [] [Ingress]} {[]}}] Pods: [Pod: client-a-v4c66, Status: &PodStatus{Phase:Succeeded,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:32 +0000 UTC,Reason:PodCompleted,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:32 +0000 UTC,Reason:PodCompleted,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:32 +0000 UTC,Reason:PodCompleted,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.9.61,StartTime:2023-06-20 09:44:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-20 09:44:33 +0000 UTC,FinishedAt:2023-06-20 09:44:33 +0000 UTC,ContainerID:cri-o://672cf8ec11236032854b934b83668a42fc2b3c26132820102d16b8c0e5466ebf,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://672cf8ec11236032854b934b83668a42fc2b3c26132820102d16b8c0e5466ebf,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.9.61,},PodIP{IP:fd00::442,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: server-427dh, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.31,PodIP:10.128.7.117,StartTime:2023-06-20 09:44:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-20 09:44:14 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://ce120f212e834cd9327bd182865cbc1c38dde4eed43a17e7dbba8925afc66fbe,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-20 09:44:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://bfadc35ac1ec6a0fb78ba4fccd237f00e5e0c7f05a19675906c4d74dd7489e6f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.7.117,},PodIP{IP:fd00::3ae,},},EphemeralContainerStatuses:[]ContainerStatus{},} ] Full Stack Trace k8s.io/kubernetes/test/e2e/network/netpol.checkNoConnectivity(0xc000feca50, 0xc001ab0580, 0xc00774a480, 0xc0072a6500) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1957 +0x25a k8s.io/kubernetes/test/e2e/network/netpol.testCannotConnectProtocol(0xc000feca50, 0xc001ab0580, {0x8a33123, 0x8}, 0xc0072a6500, 0x0?, {0x8a24aec, 0x3}) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1926 +0x1be k8s.io/kubernetes/test/e2e/network/netpol.testCannotConnect(...) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1901 k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.31() k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1721 +0x3d3 github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2e8b77e, 0xc001d32a80}) github.com/onsi/ginkgo/v2@v2.4.0/internal/node.go:449 +0x1b github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:757 +0x98 created by github.com/onsi/ginkgo/v2/internal.(*Suite).runNode github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:745 +0xe3d STEP: Cleaning up the pod client-a-v4c66 06/20/23 09:44:36.706 STEP: Cleaning up the policy. 06/20/23 09:44:36.753 [AfterEach] NetworkPolicy between server and client k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:96 STEP: Cleaning up the server. 06/20/23 09:44:36.775 STEP: Cleaning up the server's service. 06/20/23 09:44:36.795 [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] dump namespaces | framework.go:196 STEP: dump namespace information after failure 06/20/23 09:44:36.859 STEP: Collecting events from namespace "e2e-network-policy-9921". 06/20/23 09:44:36.859 STEP: Found 40 events. 
06/20/23 09:44:36.867 Jun 20 09:44:36.867: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-v4c66: { } Scheduled: Successfully assigned e2e-network-policy-9921/client-a-v4c66 to worker03 by cp02 Jun 20 09:44:36.867: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-2785c: { } Scheduled: Successfully assigned e2e-network-policy-9921/client-can-connect-80-2785c to worker02 by cp02 Jun 20 09:44:36.867: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-88f8h: { } Scheduled: Successfully assigned e2e-network-policy-9921/client-can-connect-81-88f8h to worker02 by cp02 Jun 20 09:44:36.867: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hostexec-worker01-nzrhl: { } Scheduled: Successfully assigned e2e-network-policy-9921/hostexec-worker01-nzrhl to worker01 by cp02 Jun 20 09:44:36.867: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hostexec-worker02-5m8xj: { } Scheduled: Successfully assigned e2e-network-policy-9921/hostexec-worker02-5m8xj to worker02 by cp02 Jun 20 09:44:36.867: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hostexec-worker03-lwgf8: { } Scheduled: Successfully assigned e2e-network-policy-9921/hostexec-worker03-lwgf8 to worker03 by cp02 Jun 20 09:44:36.867: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-427dh: { } Scheduled: Successfully assigned e2e-network-policy-9921/server-427dh to worker01 by cp02 Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:14 +0000 UTC - event for server-427dh: {multus } AddedInterface: Add eth0 [fd00::3ae/128 10.128.7.117/32] from cilium Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:14 +0000 UTC - event for server-427dh: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:14 +0000 UTC - event for server-427dh: {kubelet worker01} Started: Started container server-container-80 Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:14 +0000 UTC - event for server-427dh: {kubelet worker01} Created: Created container server-container-80 Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:14 +0000 UTC - event for server-427dh: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:14 +0000 UTC - event for server-427dh: {kubelet worker01} Created: Created container server-container-81 Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:14 +0000 UTC - event for server-427dh: {kubelet worker01} Started: Started container server-container-81 Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:18 +0000 UTC - event for client-can-connect-80-2785c: {kubelet worker02} Created: Created container client Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:18 +0000 UTC - event for client-can-connect-80-2785c: {kubelet worker02} Started: Started container client Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:18 +0000 UTC - event for client-can-connect-80-2785c: {multus } AddedInterface: Add eth0 [fd00::524/128 10.128.11.10/32] from cilium Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:18 +0000 UTC - event for client-can-connect-80-2785c: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:22 +0000 UTC - event for 
client-can-connect-81-88f8h: {kubelet worker02} Created: Created container client Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:22 +0000 UTC - event for client-can-connect-81-88f8h: {kubelet worker02} Started: Started container client Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:22 +0000 UTC - event for client-can-connect-81-88f8h: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:22 +0000 UTC - event for client-can-connect-81-88f8h: {multus } AddedInterface: Add eth0 [fd00::5e3/128 10.128.10.207/32] from cilium Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:25 +0000 UTC - event for hostexec-worker01-nzrhl: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:26 +0000 UTC - event for hostexec-worker01-nzrhl: {kubelet worker01} Started: Started container agnhost-container Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:26 +0000 UTC - event for hostexec-worker01-nzrhl: {kubelet worker01} Created: Created container agnhost-container Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:28 +0000 UTC - event for hostexec-worker02-5m8xj: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:28 +0000 UTC - event for hostexec-worker02-5m8xj: {kubelet worker02} Started: Started container agnhost-container Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:28 +0000 UTC - event for hostexec-worker02-5m8xj: {kubelet worker02} Created: Created container agnhost-container Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:30 +0000 UTC - event for hostexec-worker03-lwgf8: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:30 +0000 UTC - event for hostexec-worker03-lwgf8: {kubelet worker03} Created: Created container agnhost-container Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:30 +0000 UTC - event for hostexec-worker03-lwgf8: {kubelet worker03} Started: Started container agnhost-container Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:32 +0000 UTC - event for hostexec-worker01-nzrhl: {kubelet worker01} Killing: Stopping container agnhost-container Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:32 +0000 UTC - event for hostexec-worker02-5m8xj: {kubelet worker02} Killing: Stopping container agnhost-container Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:32 +0000 UTC - event for hostexec-worker03-lwgf8: {kubelet worker03} Killing: Stopping container agnhost-container Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:33 +0000 UTC - event for client-a-v4c66: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:33 +0000 UTC - event for client-a-v4c66: {kubelet worker03} Created: Created container client Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:33 +0000 UTC - event for client-a-v4c66: {kubelet worker03} Started: Started container client Jun 20 09:44:36.867: INFO: 
At 2023-06-20 09:44:33 +0000 UTC - event for client-a-v4c66: {multus } AddedInterface: Add eth0 [fd00::442/128 10.128.9.61/32] from cilium Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:36 +0000 UTC - event for server-427dh: {kubelet worker01} Killing: Stopping container server-container-80 Jun 20 09:44:36.867: INFO: At 2023-06-20 09:44:36 +0000 UTC - event for server-427dh: {kubelet worker01} Killing: Stopping container server-container-81 Jun 20 09:44:36.873: INFO: POD NODE PHASE GRACE CONDITIONS Jun 20 09:44:36.873: INFO: server-427dh worker01 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-20 09:44:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-06-20 09:44:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-06-20 09:44:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-20 09:44:13 +0000 UTC }] Jun 20 09:44:36.873: INFO: Jun 20 09:44:36.879: INFO: skipping dumping cluster info - cluster too large [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] tear down framework | framework.go:193 STEP: Destroying namespace "e2e-network-policy-9921" for this suite. 06/20/23 09:44:36.879 fail [k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1957]: Jun 20 09:44:36.706: Pod client-a-v4c66 should not be able to connect to service svc-server, but was able to connect. Pod logs: Current NetworkPolicies: [{{ } {allow-only-sctp-ingress-on-port-80 e2e-network-policy-9921 37430590-57f0-4b98-b85a-4315848cde4a 42379 1 2023-06-20 09:44:32 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-20 09:44:32 +0000 UTC FieldsV1 {"f:spec":{"f:ingress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:server] []} [{[{0xc0023843a0 80 }] []}] [] [Ingress]} {[]}}] Pods: [Pod: client-a-v4c66, Status: &PodStatus{Phase:Succeeded,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:32 +0000 UTC,Reason:PodCompleted,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:32 +0000 UTC,Reason:PodCompleted,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:32 +0000 UTC,Reason:PodCompleted,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.9.61,StartTime:2023-06-20 09:44:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-06-20 09:44:33 +0000 UTC,FinishedAt:2023-06-20 09:44:33 +0000 
UTC,ContainerID:cri-o://672cf8ec11236032854b934b83668a42fc2b3c26132820102d16b8c0e5466ebf,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://672cf8ec11236032854b934b83668a42fc2b3c26132820102d16b8c0e5466ebf,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.9.61,},PodIP{IP:fd00::442,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: server-427dh, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.31,PodIP:10.128.7.117,StartTime:2023-06-20 09:44:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-20 09:44:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://ce120f212e834cd9327bd182865cbc1c38dde4eed43a17e7dbba8925afc66fbe,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-20 09:44:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://bfadc35ac1ec6a0fb78ba4fccd237f00e5e0c7f05a19675906c4d74dd7489e6f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.7.117,},PodIP{IP:fd00::3ae,},},EphemeralContainerStatuses:[]ContainerStatus{},} ] Ginkgo exit error 1: exit with code 1 failed: (25s) 2023-06-20T09:44:36 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 1/38/67 "[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (2m30s) 2023-06-20T09:44:42 "[sig-network] NetworkPolicyLegacy 
[LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 1/39/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" Jun 20 09:42:51.189: INFO: Enabling in-tree volume drivers [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/test.go:58 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] set up framework | framework.go:178 STEP: Creating a kubernetes client 06/20/23 09:42:52.02 STEP: Building a namespace api object, basename network-policy 06/20/23 09:42:52.021 Jun 20 09:42:52.073: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace 06/20/23 09:42:52.19 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/20/23 09:42:52.195 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:72 [BeforeEach] NetworkPolicy between server and client k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:78 STEP: Creating a simple server that serves on port 80 and 81. 06/20/23 09:42:52.199 STEP: Creating a server pod server in namespace e2e-network-policy-9382 06/20/23 09:42:52.2 W0620 09:42:52.223900 1129 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "server-container-80", "server-container-81" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "server-container-80", "server-container-81" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "server-container-80", "server-container-81" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "server-container-80", "server-container-81" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:42:52.224: INFO: Created pod server-qvbwf STEP: Creating a service svc-server for pod server in namespace e2e-network-policy-9382 06/20/23 09:42:52.224 Jun 20 09:42:52.274: INFO: Created service svc-server STEP: Waiting for pod ready 06/20/23 09:42:52.274 Jun 20 09:42:52.274: INFO: Waiting up to 5m0s for pod "server-qvbwf" in namespace "e2e-network-policy-9382" to be "running and ready" Jun 20 09:42:52.281: INFO: Pod "server-qvbwf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.05954ms Jun 20 09:42:52.281: INFO: The phase of Pod server-qvbwf is Pending, waiting for it to be Running (with Ready = true) Jun 20 09:42:54.288: INFO: Pod "server-qvbwf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014226893s Jun 20 09:42:54.288: INFO: The phase of Pod server-qvbwf is Pending, waiting for it to be Running (with Ready = true) Jun 20 09:42:56.287: INFO: Pod "server-qvbwf": Phase="Running", Reason="", readiness=true. Elapsed: 4.012798s Jun 20 09:42:56.287: INFO: The phase of Pod server-qvbwf is Running (Ready = true) Jun 20 09:42:56.287: INFO: Pod "server-qvbwf" satisfied condition "running and ready" STEP: Testing pods can connect to both ports when no policy is present. 06/20/23 09:42:56.287 STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. 06/20/23 09:42:56.287 W0620 09:42:56.297965 1129 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:42:56.298: INFO: Waiting for client-can-connect-80-mw48z to complete. Jun 20 09:42:56.298: INFO: Waiting up to 3m0s for pod "client-can-connect-80-mw48z" in namespace "e2e-network-policy-9382" to be "completed" Jun 20 09:42:56.304: INFO: Pod "client-can-connect-80-mw48z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035611ms Jun 20 09:42:58.310: INFO: Pod "client-can-connect-80-mw48z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012688836s Jun 20 09:43:00.312: INFO: Pod "client-can-connect-80-mw48z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01419775s Jun 20 09:43:02.310: INFO: Pod "client-can-connect-80-mw48z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0122174s Jun 20 09:43:02.310: INFO: Pod "client-can-connect-80-mw48z" satisfied condition "completed" Jun 20 09:43:02.310: INFO: Waiting for client-can-connect-80-mw48z to complete. Jun 20 09:43:02.310: INFO: Waiting up to 5m0s for pod "client-can-connect-80-mw48z" in namespace "e2e-network-policy-9382" to be "Succeeded or Failed" Jun 20 09:43:02.316: INFO: Pod "client-can-connect-80-mw48z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.989912ms STEP: Saw pod success 06/20/23 09:43:02.316 Jun 20 09:43:02.316: INFO: Pod "client-can-connect-80-mw48z" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-can-connect-80-mw48z 06/20/23 09:43:02.316 STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. 06/20/23 09:43:02.335 W0620 09:43:02.348396 1129 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:43:02.348: INFO: Waiting for client-can-connect-81-jxvl4 to complete. 
Jun 20 09:43:02.348: INFO: Waiting up to 3m0s for pod "client-can-connect-81-jxvl4" in namespace "e2e-network-policy-9382" to be "completed" Jun 20 09:43:02.353: INFO: Pod "client-can-connect-81-jxvl4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.915753ms Jun 20 09:43:04.362: INFO: Pod "client-can-connect-81-jxvl4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014419207s Jun 20 09:43:06.358: INFO: Pod "client-can-connect-81-jxvl4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009681464s Jun 20 09:43:06.358: INFO: Pod "client-can-connect-81-jxvl4" satisfied condition "completed" Jun 20 09:43:06.358: INFO: Waiting for client-can-connect-81-jxvl4 to complete. Jun 20 09:43:06.358: INFO: Waiting up to 5m0s for pod "client-can-connect-81-jxvl4" in namespace "e2e-network-policy-9382" to be "Succeeded or Failed" Jun 20 09:43:06.367: INFO: Pod "client-can-connect-81-jxvl4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.70556ms STEP: Saw pod success 06/20/23 09:43:06.368 Jun 20 09:43:06.368: INFO: Pod "client-can-connect-81-jxvl4" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-can-connect-81-jxvl4 06/20/23 09:43:06.368 [It] should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1477 STEP: Creating client-a which should not be able to contact the server. 06/20/23 09:43:06.409 STEP: Creating client pod client-a that should not be able to connect to svc-server. 06/20/23 09:43:06.409 W0620 09:43:06.423642 1129 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:43:06.423: INFO: Waiting for client-a-2mlnx to complete. Jun 20 09:43:06.423: INFO: Waiting up to 5m0s for pod "client-a-2mlnx" in namespace "e2e-network-policy-9382" to be "Succeeded or Failed" Jun 20 09:43:06.428: INFO: Pod "client-a-2mlnx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.699921ms Jun 20 09:43:08.435: INFO: Pod "client-a-2mlnx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011770249s Jun 20 09:43:10.436: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 4.012737402s Jun 20 09:43:12.434: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 6.011087992s Jun 20 09:43:14.435: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 8.01193403s Jun 20 09:43:16.434: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 10.010327837s Jun 20 09:43:18.434: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 12.011106827s Jun 20 09:43:20.434: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 14.010828214s Jun 20 09:43:22.434: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.01053815s Jun 20 09:43:24.439: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 18.015404972s Jun 20 09:43:26.433: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 20.010118861s Jun 20 09:43:28.434: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 22.010280396s Jun 20 09:43:30.435: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 24.011285597s Jun 20 09:43:32.435: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 26.011277928s Jun 20 09:43:34.437: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 28.013746884s Jun 20 09:43:36.436: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 30.013128729s Jun 20 09:43:38.435: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 32.01158183s Jun 20 09:43:40.439: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 34.015503004s Jun 20 09:43:42.434: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 36.010405901s Jun 20 09:43:44.437: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 38.01377871s Jun 20 09:43:46.433: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 40.009988772s Jun 20 09:43:48.435: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 42.011487416s Jun 20 09:43:50.436: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 44.012343373s Jun 20 09:43:52.437: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=true. Elapsed: 46.014123989s Jun 20 09:43:54.435: INFO: Pod "client-a-2mlnx": Phase="Running", Reason="", readiness=false. Elapsed: 48.011800171s Jun 20 09:43:56.434: INFO: Pod "client-a-2mlnx": Phase="Failed", Reason="", readiness=false. Elapsed: 50.010999178s STEP: Cleaning up the pod client-a-2mlnx 06/20/23 09:43:56.435 STEP: Creating client-a which should now be able to contact the server. 06/20/23 09:43:56.491 STEP: Creating client pod client-a that should successfully connect to svc-server. 06/20/23 09:43:56.491 W0620 09:43:56.512492 1129 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:43:56.512: INFO: Waiting for client-a-8s68c to complete. Jun 20 09:43:56.512: INFO: Waiting up to 3m0s for pod "client-a-8s68c" in namespace "e2e-network-policy-9382" to be "completed" Jun 20 09:43:56.526: INFO: Pod "client-a-8s68c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.234896ms Jun 20 09:43:58.533: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 2.021285209s Jun 20 09:44:00.537: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 4.024700319s Jun 20 09:44:02.533: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 6.020638274s Jun 20 09:44:04.534: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.021456243s Jun 20 09:44:06.533: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 10.021250237s Jun 20 09:44:08.535: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 12.022525734s Jun 20 09:44:10.537: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 14.024768896s Jun 20 09:44:12.534: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 16.021685438s Jun 20 09:44:14.533: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 18.02058996s Jun 20 09:44:16.533: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 20.020884684s Jun 20 09:44:18.533: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 22.021241041s Jun 20 09:44:20.538: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 24.026026145s Jun 20 09:44:22.533: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 26.021132753s Jun 20 09:44:24.551: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 28.038919697s Jun 20 09:44:26.532: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 30.019516013s Jun 20 09:44:28.532: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 32.020189849s Jun 20 09:44:30.534: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 34.022011102s Jun 20 09:44:32.533: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 36.020867129s Jun 20 09:44:34.537: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 38.024477128s Jun 20 09:44:36.531: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 40.018921018s Jun 20 09:44:38.541: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 42.028600569s Jun 20 09:44:40.545: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 44.032684281s Jun 20 09:44:42.534: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=true. Elapsed: 46.021565076s Jun 20 09:44:44.534: INFO: Pod "client-a-8s68c": Phase="Running", Reason="", readiness=false. Elapsed: 48.021819084s Jun 20 09:44:46.535: INFO: Pod "client-a-8s68c": Phase="Failed", Reason="", readiness=false. Elapsed: 50.022561851s Jun 20 09:44:46.535: INFO: Pod "client-a-8s68c" satisfied condition "completed" Jun 20 09:44:46.535: INFO: Waiting for client-a-8s68c to complete. Jun 20 09:44:46.535: INFO: Waiting up to 5m0s for pod "client-a-8s68c" in namespace "e2e-network-policy-9382" to be "Succeeded or Failed" Jun 20 09:44:46.541: INFO: Pod "client-a-8s68c": Phase="Failed", Reason="", readiness=false. 
Elapsed: 6.522579ms Jun 20 09:44:46.546: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-9382 describe po client-a-8s68c' Jun 20 09:44:46.714: INFO: stderr: "" Jun 20 09:44:46.714: INFO: stdout: "Name: client-a-8s68c\nNamespace: e2e-network-policy-9382\nPriority: 0\nService Account: default\nNode: worker03/192.168.200.33\nStart Time: Tue, 20 Jun 2023 09:43:56 +0000\nLabels: pod-name=client-a\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::4a3\",\n \"10.128.8.24\"\n ],\n \"mac\": \"7e:b2:eb:07:78:88\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::4a3\",\n \"10.128.8.24\"\n ],\n \"mac\": \"7e:b2:eb:07:78:88\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Failed\nIP: 10.128.8.24\nIPs:\n IP: 10.128.8.24\n IP: fd00::4a3\nContainers:\n client:\n Container ID: cri-o://ef65a57d2b3421e1de08b9c05d5e52eefc2893f23fca8a02139686da3c700e1b\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: \n Host Port: \n Command:\n /bin/sh\n Args:\n -c\n for i in $(seq 1 5); do /agnhost connect 172.30.78.144:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1\n State: Terminated\n Reason: Error\n Exit Code: 1\n Started: Tue, 20 Jun 2023 09:43:57 +0000\n Finished: Tue, 20 Jun 2023 09:44:43 +0000\n Ready: False\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w8d6j (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n kube-api-access-w8d6j:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 50s default-scheduler Successfully assigned e2e-network-policy-9382/client-a-8s68c to worker03 by cp02\n Normal AddedInterface 49s multus Add eth0 [fd00::4a3/128 10.128.8.24/32] from cilium\n Normal Pulled 49s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 49s kubelet Created container client\n Normal Started 49s kubelet Started container client\n" Jun 20 09:44:46.714: INFO: Output of kubectl describe client-a-8s68c: Name: client-a-8s68c Namespace: e2e-network-policy-9382 Priority: 0 Service Account: default Node: worker03/192.168.200.33 Start Time: Tue, 20 Jun 2023 09:43:56 +0000 Labels: pod-name=client-a Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::4a3", "10.128.8.24" ], "mac": "7e:b2:eb:07:78:88", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::4a3", "10.128.8.24" ], "mac": 
"7e:b2:eb:07:78:88", "default": true, "dns": {} }] Status: Failed IP: 10.128.8.24 IPs: IP: 10.128.8.24 IP: fd00::4a3 Containers: client: Container ID: cri-o://ef65a57d2b3421e1de08b9c05d5e52eefc2893f23fca8a02139686da3c700e1b Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: Host Port: Command: /bin/sh Args: -c for i in $(seq 1 5); do /agnhost connect 172.30.78.144:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1 State: Terminated Reason: Error Exit Code: 1 Started: Tue, 20 Jun 2023 09:43:57 +0000 Finished: Tue, 20 Jun 2023 09:44:43 +0000 Ready: False Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w8d6j (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-w8d6j: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 50s default-scheduler Successfully assigned e2e-network-policy-9382/client-a-8s68c to worker03 by cp02 Normal AddedInterface 49s multus Add eth0 [fd00::4a3/128 10.128.8.24/32] from cilium Normal Pulled 49s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 49s kubelet Created container client Normal Started 49s kubelet Started container client Jun 20 09:44:46.714: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-9382 logs client-a-8s68c --tail=100' Jun 20 09:44:46.884: INFO: stderr: "" Jun 20 09:44:46.884: INFO: stdout: "TIMEOUT\nTIMEOUT\nTIMEOUT\nTIMEOUT\nTIMEOUT\n" Jun 20 09:44:46.884: INFO: Last 100 log lines of client-a-8s68c: TIMEOUT TIMEOUT TIMEOUT TIMEOUT TIMEOUT Jun 20 09:44:46.884: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-9382 describe po server-qvbwf' Jun 20 09:44:47.048: INFO: stderr: "" Jun 20 09:44:47.048: INFO: stdout: "Name: server-qvbwf\nNamespace: e2e-network-policy-9382\nPriority: 0\nService Account: default\nNode: worker03/192.168.200.33\nStart Time: Tue, 20 Jun 2023 09:42:52 +0000\nLabels: pod-name=server\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::405\",\n \"10.128.8.222\"\n ],\n \"mac\": \"ba:f5:5e:3a:07:d7\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::405\",\n \"10.128.8.222\"\n ],\n \"mac\": \"ba:f5:5e:3a:07:d7\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Running\nIP: 10.128.8.222\nIPs:\n IP: 10.128.8.222\n IP: fd00::405\nContainers:\n server-container-80:\n Container ID: cri-o://945572033aa21e456d8816e3842fc529f9cfe82bfd6696116f83e9850b29c2f5\n Image: 
quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 80/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Tue, 20 Jun 2023 09:42:53 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_80: foo\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2sctv (ro)\n server-container-81:\n Container ID: cri-o://5b863fd6602918f601a6e1c732b4ba1e4e8ada7afed42377f928f2d096b15409\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 81/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Tue, 20 Jun 2023 09:42:54 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_81: foo\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2sctv (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-2sctv:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 114s default-scheduler Successfully assigned e2e-network-policy-9382/server-qvbwf to worker03 by cp02\n Normal AddedInterface 114s multus Add eth0 [fd00::405/128 10.128.8.222/32] from cilium\n Normal Pulled 114s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 114s kubelet Created container server-container-80\n Normal Started 114s kubelet Started container server-container-80\n Normal Pulled 114s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 113s kubelet Created container server-container-81\n Normal Started 113s kubelet Started container server-container-81\n" Jun 20 09:44:47.048: INFO: Output of kubectl describe server-qvbwf: Name: server-qvbwf Namespace: e2e-network-policy-9382 Priority: 0 Service Account: default Node: worker03/192.168.200.33 Start Time: Tue, 20 Jun 2023 09:42:52 +0000 Labels: pod-name=server Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::405", "10.128.8.222" ], "mac": "ba:f5:5e:3a:07:d7", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::405", "10.128.8.222" ], "mac": "ba:f5:5e:3a:07:d7", "default": true, "dns": {} }] Status: Running IP: 10.128.8.222 IPs: IP: 10.128.8.222 IP: 
fd00::405 Containers: server-container-80: Container ID: cri-o://945572033aa21e456d8816e3842fc529f9cfe82bfd6696116f83e9850b29c2f5 Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 80/TCP Host Port: 0/TCP Args: porter State: Running Started: Tue, 20 Jun 2023 09:42:53 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: SERVE_PORT_80: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2sctv (ro) server-container-81: Container ID: cri-o://5b863fd6602918f601a6e1c732b4ba1e4e8ada7afed42377f928f2d096b15409 Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 81/TCP Host Port: 0/TCP Args: porter State: Running Started: Tue, 20 Jun 2023 09:42:54 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: SERVE_PORT_81: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2sctv (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-2sctv: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 114s default-scheduler Successfully assigned e2e-network-policy-9382/server-qvbwf to worker03 by cp02 Normal AddedInterface 114s multus Add eth0 [fd00::405/128 10.128.8.222/32] from cilium Normal Pulled 114s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 114s kubelet Created container server-container-80 Normal Started 114s kubelet Started container server-container-80 Normal Pulled 114s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 113s kubelet Created container server-container-81 Normal Started 113s kubelet Started container server-container-81 Jun 20 09:44:47.048: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-9382 logs server-qvbwf --tail=100' Jun 20 09:44:47.207: INFO: stderr: "Defaulted container \"server-container-80\" out of: server-container-80, server-container-81\n" Jun 20 09:44:47.207: INFO: stdout: "" Jun 20 09:44:47.207: INFO: Last 100 log lines of server-qvbwf: Jun 20 09:44:47.232: FAIL: Pod client-a-8s68c should be able to connect to service svc-server, but was not able to connect. 
Pod logs: TIMEOUT TIMEOUT TIMEOUT TIMEOUT TIMEOUT Current NetworkPolicies: [{{ } {allow-client-a-via-cidr-egress-rule e2e-network-policy-9382 1a16a246-cc95-4d4c-b1de-0ab7cf7223a3 40807 1 2023-06-20 09:43:56 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-20 09:43:56 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.8.222/32,Except:[],}}]}] [Egress]} {[]}} {{ } {deny-client-a-via-except-cidr-egress-rule e2e-network-policy-9382 767db56d-4453-4fc3-8e2b-092543b8602d 39343 1 2023-06-20 09:43:06 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-20 09:43:06 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.8.0/24,Except:[10.128.8.222/32],}}]}] [Egress]} {[]}}] Pods: [Pod: client-a-8s68c, Status: &PodStatus{Phase:Failed,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:43:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:43 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:43 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:43:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.8.24,StartTime:2023-06-20 09:43:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-06-20 09:43:57 +0000 UTC,FinishedAt:2023-06-20 09:44:43 +0000 UTC,ContainerID:cri-o://ef65a57d2b3421e1de08b9c05d5e52eefc2893f23fca8a02139686da3c700e1b,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://ef65a57d2b3421e1de08b9c05d5e52eefc2893f23fca8a02139686da3c700e1b,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.8.24,},PodIP{IP:fd00::4a3,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: server-qvbwf, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:42:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:42:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:42:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:42:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.8.222,StartTime:2023-06-20 09:42:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-20 09:42:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://945572033aa21e456d8816e3842fc529f9cfe82bfd6696116f83e9850b29c2f5,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-20 09:42:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://5b863fd6602918f601a6e1c732b4ba1e4e8ada7afed42377f928f2d096b15409,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.8.222,},PodIP{IP:fd00::405,},},EphemeralContainerStatuses:[]ContainerStatus{},} ] Full Stack Trace k8s.io/kubernetes/test/e2e/network/netpol.checkConnectivity(0xc001f7ab40, 0xc0017d31e0, 0xc0030ead80, 0xc0021dea00) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941 +0x355 k8s.io/kubernetes/test/e2e/network/netpol.testCanConnectProtocol(0xc001f7ab40, 0xc0017d31e0, {0x8a33123, 0x8}, 0xc0021dea00, 0xc001c67e60?, {0x8a24aec, 0x3}) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1913 +0x1be k8s.io/kubernetes/test/e2e/network/netpol.testCanConnect(...) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1897 k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.29.2() k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1569 +0x47 github.com/onsi/ginkgo/v2.By({0x8c200aa, 0x41}, {0xc002693e50, 0x1, 0x0?}) github.com/onsi/ginkgo/v2@v2.4.0/core_dsl.go:535 +0x525 k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.29() k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1568 +0xb5b github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc0021d7a40, 0x0}) github.com/onsi/ginkgo/v2@v2.4.0/internal/node.go:449 +0x1b github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:757 +0x98 created by github.com/onsi/ginkgo/v2/internal.(*Suite).runNode github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:745 +0xe3d STEP: Cleaning up the pod client-a-8s68c 06/20/23 09:44:47.232 STEP: Cleaning up the policy. 06/20/23 09:44:47.258 [AfterEach] NetworkPolicy between server and client k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:96 STEP: Cleaning up the server. 06/20/23 09:44:47.268 STEP: Cleaning up the server's service. 06/20/23 09:44:47.288 [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] dump namespaces | framework.go:196 STEP: dump namespace information after failure 06/20/23 09:44:47.344 STEP: Collecting events from namespace "e2e-network-policy-9382". 
06/20/23 09:44:47.345 STEP: Found 30 events. 06/20/23 09:44:47.35 Jun 20 09:44:47.350: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-2mlnx: { } Scheduled: Successfully assigned e2e-network-policy-9382/client-a-2mlnx to worker01 by cp02 Jun 20 09:44:47.350: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-8s68c: { } Scheduled: Successfully assigned e2e-network-policy-9382/client-a-8s68c to worker03 by cp02 Jun 20 09:44:47.350: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-mw48z: { } Scheduled: Successfully assigned e2e-network-policy-9382/client-can-connect-80-mw48z to worker03 by cp02 Jun 20 09:44:47.350: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-jxvl4: { } Scheduled: Successfully assigned e2e-network-policy-9382/client-can-connect-81-jxvl4 to worker03 by cp02 Jun 20 09:44:47.350: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-qvbwf: { } Scheduled: Successfully assigned e2e-network-policy-9382/server-qvbwf to worker03 by cp02 Jun 20 09:44:47.350: INFO: At 2023-06-20 09:42:53 +0000 UTC - event for server-qvbwf: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:44:47.350: INFO: At 2023-06-20 09:42:53 +0000 UTC - event for server-qvbwf: {kubelet worker03} Created: Created container server-container-80 Jun 20 09:44:47.350: INFO: At 2023-06-20 09:42:53 +0000 UTC - event for server-qvbwf: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:44:47.350: INFO: At 2023-06-20 09:42:53 +0000 UTC - event for server-qvbwf: {multus } AddedInterface: Add eth0 [fd00::405/128 10.128.8.222/32] from cilium Jun 20 09:44:47.350: INFO: At 2023-06-20 09:42:53 +0000 UTC - event for server-qvbwf: {kubelet worker03} Started: Started container server-container-80 Jun 20 09:44:47.350: INFO: At 2023-06-20 09:42:54 +0000 UTC - event for server-qvbwf: {kubelet worker03} Created: Created container server-container-81 Jun 20 09:44:47.350: INFO: At 2023-06-20 09:42:54 +0000 UTC - event for server-qvbwf: {kubelet worker03} Started: Started container server-container-81 Jun 20 09:44:47.350: INFO: At 2023-06-20 09:42:57 +0000 UTC - event for client-can-connect-80-mw48z: {multus } AddedInterface: Add eth0 [fd00::495/128 10.128.9.72/32] from cilium Jun 20 09:44:47.350: INFO: At 2023-06-20 09:42:57 +0000 UTC - event for client-can-connect-80-mw48z: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:44:47.350: INFO: At 2023-06-20 09:42:58 +0000 UTC - event for client-can-connect-80-mw48z: {kubelet worker03} Started: Started container client Jun 20 09:44:47.350: INFO: At 2023-06-20 09:42:58 +0000 UTC - event for client-can-connect-80-mw48z: {kubelet worker03} Created: Created container client Jun 20 09:44:47.350: INFO: At 2023-06-20 09:43:03 +0000 UTC - event for client-can-connect-81-jxvl4: {multus } AddedInterface: Add eth0 [fd00::4ca/128 10.128.8.186/32] from cilium Jun 20 09:44:47.350: INFO: At 2023-06-20 09:43:03 +0000 UTC - event for client-can-connect-81-jxvl4: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present 
on machine Jun 20 09:44:47.350: INFO: At 2023-06-20 09:43:03 +0000 UTC - event for client-can-connect-81-jxvl4: {kubelet worker03} Created: Created container client Jun 20 09:44:47.350: INFO: At 2023-06-20 09:43:03 +0000 UTC - event for client-can-connect-81-jxvl4: {kubelet worker03} Started: Started container client Jun 20 09:44:47.350: INFO: At 2023-06-20 09:43:07 +0000 UTC - event for client-a-2mlnx: {kubelet worker01} Started: Started container client Jun 20 09:44:47.350: INFO: At 2023-06-20 09:43:07 +0000 UTC - event for client-a-2mlnx: {kubelet worker01} Created: Created container client Jun 20 09:44:47.350: INFO: At 2023-06-20 09:43:07 +0000 UTC - event for client-a-2mlnx: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:44:47.350: INFO: At 2023-06-20 09:43:07 +0000 UTC - event for client-a-2mlnx: {multus } AddedInterface: Add eth0 [fd00::3fa/128 10.128.7.177/32] from cilium Jun 20 09:44:47.350: INFO: At 2023-06-20 09:43:57 +0000 UTC - event for client-a-8s68c: {kubelet worker03} Created: Created container client Jun 20 09:44:47.350: INFO: At 2023-06-20 09:43:57 +0000 UTC - event for client-a-8s68c: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:44:47.350: INFO: At 2023-06-20 09:43:57 +0000 UTC - event for client-a-8s68c: {multus } AddedInterface: Add eth0 [fd00::4a3/128 10.128.8.24/32] from cilium Jun 20 09:44:47.350: INFO: At 2023-06-20 09:43:57 +0000 UTC - event for client-a-8s68c: {kubelet worker03} Started: Started container client Jun 20 09:44:47.350: INFO: At 2023-06-20 09:44:47 +0000 UTC - event for server-qvbwf: {kubelet worker03} Killing: Stopping container server-container-80 Jun 20 09:44:47.351: INFO: At 2023-06-20 09:44:47 +0000 UTC - event for server-qvbwf: {kubelet worker03} Killing: Stopping container server-container-81 Jun 20 09:44:47.355: INFO: POD NODE PHASE GRACE CONDITIONS Jun 20 09:44:47.355: INFO: server-qvbwf worker03 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-20 09:42:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-06-20 09:42:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-06-20 09:42:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-20 09:42:52 +0000 UTC }] Jun 20 09:44:47.355: INFO: Jun 20 09:44:47.364: INFO: skipping dumping cluster info - cluster too large [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] tear down framework | framework.go:193 STEP: Destroying namespace "e2e-network-policy-9382" for this suite. 06/20/23 09:44:47.364 fail [k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941]: Jun 20 09:44:47.232: Pod client-a-8s68c should be able to connect to service svc-server, but was not able to connect. 
Pod logs: TIMEOUT TIMEOUT TIMEOUT TIMEOUT TIMEOUT Current NetworkPolicies: [{{ } {allow-client-a-via-cidr-egress-rule e2e-network-policy-9382 1a16a246-cc95-4d4c-b1de-0ab7cf7223a3 40807 1 2023-06-20 09:43:56 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-20 09:43:56 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.8.222/32,Except:[],}}]}] [Egress]} {[]}} {{ } {deny-client-a-via-except-cidr-egress-rule e2e-network-policy-9382 767db56d-4453-4fc3-8e2b-092543b8602d 39343 1 2023-06-20 09:43:06 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-20 09:43:06 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.8.0/24,Except:[10.128.8.222/32],}}]}] [Egress]} {[]}}] Pods: [Pod: client-a-8s68c, Status: &PodStatus{Phase:Failed,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:43:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:43 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:44:43 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:43:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.8.24,StartTime:2023-06-20 09:43:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-06-20 09:43:57 +0000 UTC,FinishedAt:2023-06-20 09:44:43 +0000 UTC,ContainerID:cri-o://ef65a57d2b3421e1de08b9c05d5e52eefc2893f23fca8a02139686da3c700e1b,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://ef65a57d2b3421e1de08b9c05d5e52eefc2893f23fca8a02139686da3c700e1b,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.8.24,},PodIP{IP:fd00::4a3,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: server-qvbwf, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:42:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:42:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:42:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:42:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.8.222,StartTime:2023-06-20 09:42:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-20 09:42:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://945572033aa21e456d8816e3842fc529f9cfe82bfd6696116f83e9850b29c2f5,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-20 09:42:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://5b863fd6602918f601a6e1c732b4ba1e4e8ada7afed42377f928f2d096b15409,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.8.222,},PodIP{IP:fd00::405,},},EphemeralContainerStatuses:[]ContainerStatus{},} ] Ginkgo exit error 1: exit with code 1 failed: (1m56s) 2023-06-20T09:44:47 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 2/40/67 "[sig-network] DNS should provide DNS for pods for Subdomain [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (22.5s) 2023-06-20T09:44:54 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 2/41/67 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (2m6s) 2023-06-20T09:44:54 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 2/42/67 "[sig-network] DNS should support configurable pod DNS nameservers [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (3.6s) 2023-06-20T09:44:58 "[sig-network] DNS should support configurable pod DNS nameservers [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 2/43/67 "[sig-network] DNS should provide DNS for pods for Hostname [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (2m3s) 2023-06-20T09:44:59 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] 
[Suite:k8s]" started: 2/44/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (5.4s) 2023-06-20T09:45:03 "[sig-network] DNS should provide DNS for pods for Hostname [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 2/45/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (18.6s) 2023-06-20T09:45:05 "[sig-network] DNS should provide DNS for pods for Subdomain [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 2/46/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (45.2s) 2023-06-20T09:45:22 "[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 2/47/67 "[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1m8s) 2023-06-20T09:45:22 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 2/48/67 "[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (31.1s) 2023-06-20T09:45:25 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 2/49/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1m10s) 2023-06-20T09:45:25 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 2/50/67 "[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (8.6s) 2023-06-20T09:45:31 "[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 2/51/67 "[sig-network] Services should be able to create a functioning NodePort service [Conformance] 
[Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (9.3s) 2023-06-20T09:45:35 "[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 2/52/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (13.5s) 2023-06-20T09:45:35 "[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 2/53/67 "[sig-network] Services should delete a collection of services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1.5s) 2023-06-20T09:45:37 "[sig-network] Services should delete a collection of services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 2/54/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (8.8s) 2023-06-20T09:45:39 "[sig-network] Services should be able to create a functioning NodePort service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 2/55/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2m8s) 2023-06-20T09:46:02 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 2/56/67 "[sig-network] DNS should provide DNS for services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1m8s) 2023-06-20T09:46:07 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 2/57/67 "[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (13.7s) 2023-06-20T09:46:16 "[sig-network] DNS should provide DNS for services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 2/58/67 "[sig-network] DNS should provide DNS for the cluster [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (11.3s) 2023-06-20T09:46:19 "[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 2/59/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] 
[Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1m38s) 2023-06-20T09:46:20 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 2/60/67 "[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (5.4s) 2023-06-20T09:46:21 "[sig-network] DNS should provide DNS for the cluster [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 2/61/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (11s) 2023-06-20T09:46:31 "[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 2/62/67 "[sig-network] Services should serve multiport endpoints from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (3m10s) 2023-06-20T09:46:32 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 2/63/67 "[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1m8s) 2023-06-20T09:46:32 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 2/64/67 "[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1.3s) 2023-06-20T09:46:33 "[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 2/65/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1m33s) 2023-06-20T09:46:38 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 2/66/67 "[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] 
[Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (11.6s) 2023-06-20T09:46:44 "[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (13.9s) 2023-06-20T09:46:45 "[sig-network] Services should serve multiport endpoints from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (13.9s) 2023-06-20T09:46:52 "[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (29.8s) 2023-06-20T09:47:03 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2m19s) 2023-06-20T09:47:22 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2m2s) 2023-06-20T09:47:37 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1m16s) 2023-06-20T09:47:37 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2m0s) 2023-06-20T09:47:39 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" Jun 20 09:46:19.207: INFO: Enabling in-tree volume drivers [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/test.go:58 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] set up framework | framework.go:178 STEP: Creating a kubernetes client 06/20/23 09:46:19.961 STEP: Building a namespace api object, basename network-policy 06/20/23 09:46:19.962 Jun 20 09:46:20.022: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace 06/20/23 09:46:20.232 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/20/23 09:46:20.238 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] 
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:72 [BeforeEach] NetworkPolicy between server and client k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:78 STEP: Creating a simple server that serves on port 80 and 81. 06/20/23 09:46:20.244 STEP: Creating a server pod server in namespace e2e-network-policy-4162 06/20/23 09:46:20.244 W0620 09:46:20.280859 3898 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "server-container-80", "server-container-81" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "server-container-80", "server-container-81" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "server-container-80", "server-container-81" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "server-container-80", "server-container-81" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:46:20.280: INFO: Created pod server-z9hdt STEP: Creating a service svc-server for pod server in namespace e2e-network-policy-4162 06/20/23 09:46:20.28 Jun 20 09:46:20.327: INFO: Created service svc-server STEP: Waiting for pod ready 06/20/23 09:46:20.327 Jun 20 09:46:20.327: INFO: Waiting up to 5m0s for pod "server-z9hdt" in namespace "e2e-network-policy-4162" to be "running and ready" Jun 20 09:46:20.335: INFO: Pod "server-z9hdt": Phase="Pending", Reason="", readiness=false. Elapsed: 7.362756ms Jun 20 09:46:20.335: INFO: The phase of Pod server-z9hdt is Pending, waiting for it to be Running (with Ready = true) Jun 20 09:46:22.342: INFO: Pod "server-z9hdt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014363345s Jun 20 09:46:22.342: INFO: The phase of Pod server-z9hdt is Pending, waiting for it to be Running (with Ready = true) Jun 20 09:46:24.342: INFO: Pod "server-z9hdt": Phase="Running", Reason="", readiness=true. Elapsed: 4.014781917s Jun 20 09:46:24.342: INFO: The phase of Pod server-z9hdt is Running (Ready = true) Jun 20 09:46:24.342: INFO: Pod "server-z9hdt" satisfied condition "running and ready" STEP: Testing pods can connect to both ports when no policy is present. 06/20/23 09:46:24.342 STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. 06/20/23 09:46:24.342 W0620 09:46:24.353184 3898 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:46:24.353: INFO: Waiting for client-can-connect-80-txk6h to complete. Jun 20 09:46:24.353: INFO: Waiting up to 3m0s for pod "client-can-connect-80-txk6h" in namespace "e2e-network-policy-4162" to be "completed" Jun 20 09:46:24.360: INFO: Pod "client-can-connect-80-txk6h": Phase="Pending", Reason="", readiness=false. Elapsed: 7.069063ms Jun 20 09:46:26.369: INFO: Pod "client-can-connect-80-txk6h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01623131s Jun 20 09:46:28.366: INFO: Pod "client-can-connect-80-txk6h": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013354463s Jun 20 09:46:28.366: INFO: Pod "client-can-connect-80-txk6h" satisfied condition "completed" Jun 20 09:46:28.366: INFO: Waiting for client-can-connect-80-txk6h to complete. Jun 20 09:46:28.366: INFO: Waiting up to 5m0s for pod "client-can-connect-80-txk6h" in namespace "e2e-network-policy-4162" to be "Succeeded or Failed" Jun 20 09:46:28.376: INFO: Pod "client-can-connect-80-txk6h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.739368ms STEP: Saw pod success 06/20/23 09:46:28.376 Jun 20 09:46:28.376: INFO: Pod "client-can-connect-80-txk6h" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-can-connect-80-txk6h 06/20/23 09:46:28.376 STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. 06/20/23 09:46:28.4 W0620 09:46:28.410422 3898 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:46:28.410: INFO: Waiting for client-can-connect-81-2tcx9 to complete. Jun 20 09:46:28.410: INFO: Waiting up to 3m0s for pod "client-can-connect-81-2tcx9" in namespace "e2e-network-policy-4162" to be "completed" Jun 20 09:46:28.428: INFO: Pod "client-can-connect-81-2tcx9": Phase="Pending", Reason="", readiness=false. Elapsed: 17.763569ms Jun 20 09:46:30.434: INFO: Pod "client-can-connect-81-2tcx9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024083515s Jun 20 09:46:32.440: INFO: Pod "client-can-connect-81-2tcx9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029582082s Jun 20 09:46:32.440: INFO: Pod "client-can-connect-81-2tcx9" satisfied condition "completed" Jun 20 09:46:32.440: INFO: Waiting for client-can-connect-81-2tcx9 to complete. Jun 20 09:46:32.440: INFO: Waiting up to 5m0s for pod "client-can-connect-81-2tcx9" in namespace "e2e-network-policy-4162" to be "Succeeded or Failed" Jun 20 09:46:32.451: INFO: Pod "client-can-connect-81-2tcx9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.396199ms STEP: Saw pod success 06/20/23 09:46:32.451 Jun 20 09:46:32.451: INFO: Pod "client-can-connect-81-2tcx9" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-can-connect-81-2tcx9 06/20/23 09:46:32.451 [It] should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1343 STEP: Creating a server pod pod-b in namespace e2e-network-policy-4162 06/20/23 09:46:32.503 W0620 09:46:32.515531 3898 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "pod-b-container-80" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pod-b-container-80" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pod-b-container-80" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pod-b-container-80" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:46:32.515: INFO: Created pod pod-b-hrmmv STEP: Creating a service svc-pod-b for pod pod-b in namespace e2e-network-policy-4162 06/20/23 09:46:32.515 Jun 20 09:46:32.564: INFO: Created service svc-pod-b STEP: Waiting for pod-b to be ready 06/20/23 09:46:32.564 Jun 20 09:46:32.564: INFO: Waiting up to 5m0s for pod "pod-b-hrmmv" in namespace "e2e-network-policy-4162" to be "running and ready" Jun 20 09:46:32.581: INFO: Pod "pod-b-hrmmv": Phase="Pending", Reason="", readiness=false. Elapsed: 16.856541ms Jun 20 09:46:32.581: INFO: The phase of Pod pod-b-hrmmv is Pending, waiting for it to be Running (with Ready = true) Jun 20 09:46:34.588: INFO: Pod "pod-b-hrmmv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023410644s Jun 20 09:46:34.588: INFO: The phase of Pod pod-b-hrmmv is Pending, waiting for it to be Running (with Ready = true) Jun 20 09:46:36.587: INFO: Pod "pod-b-hrmmv": Phase="Running", Reason="", readiness=true. Elapsed: 4.022858246s Jun 20 09:46:36.587: INFO: The phase of Pod pod-b-hrmmv is Running (Ready = true) Jun 20 09:46:36.587: INFO: Pod "pod-b-hrmmv" satisfied condition "running and ready" Jun 20 09:46:36.587: INFO: Waiting up to 5m0s for pod "pod-b-hrmmv" in namespace "e2e-network-policy-4162" to be "running" Jun 20 09:46:36.594: INFO: Pod "pod-b-hrmmv": Phase="Running", Reason="", readiness=true. Elapsed: 6.432058ms Jun 20 09:46:36.594: INFO: Pod "pod-b-hrmmv" satisfied condition "running" STEP: Creating client-a which should be able to contact the server-b. 06/20/23 09:46:36.594 STEP: Creating client pod client-a that should successfully connect to svc-pod-b. 06/20/23 09:46:36.594 W0620 09:46:36.603627 3898 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:46:36.603: INFO: Waiting for client-a-9n8nv to complete. 
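#
# --- editor's note (not part of the captured test output) ---------------------
# Each client-* pod created in these steps is a one-shot connectivity probe.
# A sketch of what the framework creates here, based on the describe output
# printed further down for client-a-8r566; generateName, restartPolicy and the
# target address (172.30.39.150:80 is svc-server; this step targets svc-pod-b
# instead) are illustrative assumptions:
#
apiVersion: v1
kind: Pod
metadata:
  generateName: client-a-
  namespace: e2e-network-policy-4162
  labels:
    pod-name: client-a
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-
    command: ["/bin/sh"]
    args:
    - -c
    - for i in $(seq 1 5); do /agnhost connect 172.30.39.150:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1
#
# A client that never connects exits 1 after five attempts and lands in
# Phase="Failed"; one that connects exits 0 and lands in "Succeeded", which is
# how the expected-blocked and expected-allowed cases in this test are told
# apart.
# ------------------------------------------------------------------------------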
Jun 20 09:46:36.603: INFO: Waiting up to 3m0s for pod "client-a-9n8nv" in namespace "e2e-network-policy-4162" to be "completed" Jun 20 09:46:36.611: INFO: Pod "client-a-9n8nv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.229026ms Jun 20 09:46:38.620: INFO: Pod "client-a-9n8nv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016611749s Jun 20 09:46:40.620: INFO: Pod "client-a-9n8nv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017003635s Jun 20 09:46:42.619: INFO: Pod "client-a-9n8nv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015591841s Jun 20 09:46:42.619: INFO: Pod "client-a-9n8nv" satisfied condition "completed" Jun 20 09:46:42.619: INFO: Waiting for client-a-9n8nv to complete. Jun 20 09:46:42.619: INFO: Waiting up to 5m0s for pod "client-a-9n8nv" in namespace "e2e-network-policy-4162" to be "Succeeded or Failed" Jun 20 09:46:42.624: INFO: Pod "client-a-9n8nv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.890712ms STEP: Saw pod success 06/20/23 09:46:42.624 Jun 20 09:46:42.624: INFO: Pod "client-a-9n8nv" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-a-9n8nv 06/20/23 09:46:42.624 STEP: Creating client-a which should not be able to contact the server-b. 06/20/23 09:46:42.654 STEP: Creating client pod client-a that should not be able to connect to svc-pod-b. 06/20/23 09:46:42.654 W0620 09:46:42.666878 3898 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:46:42.667: INFO: Waiting for client-a-s92nn to complete. Jun 20 09:46:42.667: INFO: Waiting up to 5m0s for pod "client-a-s92nn" in namespace "e2e-network-policy-4162" to be "Succeeded or Failed" Jun 20 09:46:42.672: INFO: Pod "client-a-s92nn": Phase="Pending", Reason="", readiness=false. Elapsed: 5.456958ms Jun 20 09:46:44.680: INFO: Pod "client-a-s92nn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013708866s Jun 20 09:46:46.683: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 4.016462111s Jun 20 09:46:48.680: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 6.013873355s Jun 20 09:46:50.681: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 8.014073491s Jun 20 09:46:52.679: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 10.01231814s Jun 20 09:46:54.679: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 12.012913398s Jun 20 09:46:56.683: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 14.016160586s Jun 20 09:46:58.679: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 16.012358167s Jun 20 09:47:00.682: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 18.015281794s Jun 20 09:47:02.679: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 20.012368357s Jun 20 09:47:04.678: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.011345482s Jun 20 09:47:06.678: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 24.011574982s Jun 20 09:47:08.677: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 26.010814638s Jun 20 09:47:10.681: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 28.014009791s Jun 20 09:47:12.679: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 30.012311339s Jun 20 09:47:14.679: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 32.012313651s Jun 20 09:47:16.678: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 34.01156237s Jun 20 09:47:18.679: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 36.012130084s Jun 20 09:47:20.678: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 38.011835933s Jun 20 09:47:22.687: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 40.020331766s Jun 20 09:47:24.679: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 42.012019035s Jun 20 09:47:26.680: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 44.013156156s Jun 20 09:47:28.679: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=true. Elapsed: 46.011969044s Jun 20 09:47:30.679: INFO: Pod "client-a-s92nn": Phase="Running", Reason="", readiness=false. Elapsed: 48.012490645s Jun 20 09:47:32.679: INFO: Pod "client-a-s92nn": Phase="Failed", Reason="", readiness=false. Elapsed: 50.011946397s STEP: Cleaning up the pod client-a-s92nn 06/20/23 09:47:32.679 STEP: Creating client-a which should be able to contact the server. 06/20/23 09:47:32.702 STEP: Creating client pod client-a that should successfully connect to svc-server. 06/20/23 09:47:32.702 W0620 09:47:32.722439 3898 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 20 09:47:32.722: INFO: Waiting for client-a-8r566 to complete. Jun 20 09:47:32.722: INFO: Waiting up to 3m0s for pod "client-a-8r566" in namespace "e2e-network-policy-4162" to be "completed" Jun 20 09:47:32.728: INFO: Pod "client-a-8r566": Phase="Pending", Reason="", readiness=false. Elapsed: 6.284568ms Jun 20 09:47:34.733: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 2.011324042s Jun 20 09:47:36.736: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 4.01399248s Jun 20 09:47:38.741: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 6.018631041s Jun 20 09:47:40.738: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 8.016330614s Jun 20 09:47:42.738: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 10.01599119s Jun 20 09:47:44.734: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 12.011839974s Jun 20 09:47:46.736: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.013750703s Jun 20 09:47:48.734: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 16.012096618s Jun 20 09:47:50.739: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 18.016855456s Jun 20 09:47:52.735: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 20.012981815s Jun 20 09:47:54.735: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 22.013276086s Jun 20 09:47:56.736: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 24.013801664s Jun 20 09:47:58.735: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 26.012628619s Jun 20 09:48:00.740: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 28.017448367s Jun 20 09:48:02.735: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 30.012761611s Jun 20 09:48:04.734: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 32.011810667s Jun 20 09:48:06.735: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 34.0131196s Jun 20 09:48:08.735: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 36.01318888s Jun 20 09:48:10.735: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 38.012482579s Jun 20 09:48:12.735: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 40.013152513s Jun 20 09:48:14.734: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 42.012245276s Jun 20 09:48:16.735: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 44.013222009s Jun 20 09:48:18.737: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=true. Elapsed: 46.014971376s Jun 20 09:48:20.737: INFO: Pod "client-a-8r566": Phase="Running", Reason="", readiness=false. Elapsed: 48.015014136s Jun 20 09:48:22.736: INFO: Pod "client-a-8r566": Phase="Failed", Reason="", readiness=false. Elapsed: 50.013558897s Jun 20 09:48:22.736: INFO: Pod "client-a-8r566" satisfied condition "completed" Jun 20 09:48:22.736: INFO: Waiting for client-a-8r566 to complete. Jun 20 09:48:22.736: INFO: Waiting up to 5m0s for pod "client-a-8r566" in namespace "e2e-network-policy-4162" to be "Succeeded or Failed" Jun 20 09:48:22.740: INFO: Pod "client-a-8r566": Phase="Failed", Reason="", readiness=false. 
Elapsed: 4.380444ms Jun 20 09:48:22.745: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-4162 describe po client-a-8r566' Jun 20 09:48:22.893: INFO: stderr: "" Jun 20 09:48:22.893: INFO: stdout: "Name: client-a-8r566\nNamespace: e2e-network-policy-4162\nPriority: 0\nService Account: default\nNode: worker02/192.168.200.32\nStart Time: Tue, 20 Jun 2023 09:47:32 +0000\nLabels: pod-name=client-a\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::54a\",\n \"10.128.10.140\"\n ],\n \"mac\": \"f2:cf:bf:78:b4:93\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::54a\",\n \"10.128.10.140\"\n ],\n \"mac\": \"f2:cf:bf:78:b4:93\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Failed\nIP: 10.128.10.140\nIPs:\n IP: 10.128.10.140\n IP: fd00::54a\nContainers:\n client:\n Container ID: cri-o://84754807bda715e063a68b82f630473d0fd8755f20c2fb1591386212f6aa7f2a\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: \n Host Port: \n Command:\n /bin/sh\n Args:\n -c\n for i in $(seq 1 5); do /agnhost connect 172.30.39.150:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1\n State: Terminated\n Reason: Error\n Exit Code: 1\n Started: Tue, 20 Jun 2023 09:47:33 +0000\n Finished: Tue, 20 Jun 2023 09:48:19 +0000\n Ready: False\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pcqwt (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n kube-api-access-pcqwt:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 50s default-scheduler Successfully assigned e2e-network-policy-4162/client-a-8r566 to worker02 by cp02\n Normal AddedInterface 49s multus Add eth0 [fd00::54a/128 10.128.10.140/32] from cilium\n Normal Pulled 49s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 49s kubelet Created container client\n Normal Started 49s kubelet Started container client\n" Jun 20 09:48:22.893: INFO: Output of kubectl describe client-a-8r566: Name: client-a-8r566 Namespace: e2e-network-policy-4162 Priority: 0 Service Account: default Node: worker02/192.168.200.32 Start Time: Tue, 20 Jun 2023 09:47:32 +0000 Labels: pod-name=client-a Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::54a", "10.128.10.140" ], "mac": "f2:cf:bf:78:b4:93", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::54a", "10.128.10.140" ], "mac": 
"f2:cf:bf:78:b4:93", "default": true, "dns": {} }] Status: Failed IP: 10.128.10.140 IPs: IP: 10.128.10.140 IP: fd00::54a Containers: client: Container ID: cri-o://84754807bda715e063a68b82f630473d0fd8755f20c2fb1591386212f6aa7f2a Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: Host Port: Command: /bin/sh Args: -c for i in $(seq 1 5); do /agnhost connect 172.30.39.150:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1 State: Terminated Reason: Error Exit Code: 1 Started: Tue, 20 Jun 2023 09:47:33 +0000 Finished: Tue, 20 Jun 2023 09:48:19 +0000 Ready: False Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pcqwt (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-pcqwt: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 50s default-scheduler Successfully assigned e2e-network-policy-4162/client-a-8r566 to worker02 by cp02 Normal AddedInterface 49s multus Add eth0 [fd00::54a/128 10.128.10.140/32] from cilium Normal Pulled 49s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 49s kubelet Created container client Normal Started 49s kubelet Started container client Jun 20 09:48:22.893: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-4162 logs client-a-8r566 --tail=100' Jun 20 09:48:23.056: INFO: stderr: "" Jun 20 09:48:23.056: INFO: stdout: "TIMEOUT\nTIMEOUT\nTIMEOUT\nTIMEOUT\nTIMEOUT\n" Jun 20 09:48:23.056: INFO: Last 100 log lines of client-a-8r566: TIMEOUT TIMEOUT TIMEOUT TIMEOUT TIMEOUT Jun 20 09:48:23.056: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-4162 describe po pod-b-hrmmv' Jun 20 09:48:23.198: INFO: stderr: "" Jun 20 09:48:23.198: INFO: stdout: "Name: pod-b-hrmmv\nNamespace: e2e-network-policy-4162\nPriority: 0\nService Account: default\nNode: worker01/192.168.200.31\nStart Time: Tue, 20 Jun 2023 09:46:32 +0000\nLabels: pod-name=pod-b\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::32f\",\n \"10.128.7.201\"\n ],\n \"mac\": \"e2:13:5c:71:f8:71\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::32f\",\n \"10.128.7.201\"\n ],\n \"mac\": \"e2:13:5c:71:f8:71\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Running\nIP: 10.128.7.201\nIPs:\n IP: 10.128.7.201\n IP: fd00::32f\nContainers:\n pod-b-container-80:\n Container ID: cri-o://499da937195ab905cdfd843b0acc73ca73b0acd91ee79e30ef87a1489404b9f2\n Image: 
quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 80/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Tue, 20 Jun 2023 09:46:33 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_80: foo\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vd6td (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-vd6td:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 110s default-scheduler Successfully assigned e2e-network-policy-4162/pod-b-hrmmv to worker01 by cp02\n Normal AddedInterface 110s multus Add eth0 [fd00::32f/128 10.128.7.201/32] from cilium\n Normal Pulled 110s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 110s kubelet Created container pod-b-container-80\n Normal Started 110s kubelet Started container pod-b-container-80\n" Jun 20 09:48:23.198: INFO: Output of kubectl describe pod-b-hrmmv: Name: pod-b-hrmmv Namespace: e2e-network-policy-4162 Priority: 0 Service Account: default Node: worker01/192.168.200.31 Start Time: Tue, 20 Jun 2023 09:46:32 +0000 Labels: pod-name=pod-b Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::32f", "10.128.7.201" ], "mac": "e2:13:5c:71:f8:71", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::32f", "10.128.7.201" ], "mac": "e2:13:5c:71:f8:71", "default": true, "dns": {} }] Status: Running IP: 10.128.7.201 IPs: IP: 10.128.7.201 IP: fd00::32f Containers: pod-b-container-80: Container ID: cri-o://499da937195ab905cdfd843b0acc73ca73b0acd91ee79e30ef87a1489404b9f2 Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 80/TCP Host Port: 0/TCP Args: porter State: Running Started: Tue, 20 Jun 2023 09:46:33 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: SERVE_PORT_80: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vd6td (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-vd6td: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt 
ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 110s default-scheduler Successfully assigned e2e-network-policy-4162/pod-b-hrmmv to worker01 by cp02 Normal AddedInterface 110s multus Add eth0 [fd00::32f/128 10.128.7.201/32] from cilium Normal Pulled 110s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 110s kubelet Created container pod-b-container-80 Normal Started 110s kubelet Started container pod-b-container-80 Jun 20 09:48:23.198: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-4162 logs pod-b-hrmmv --tail=100' Jun 20 09:48:23.332: INFO: stderr: "" Jun 20 09:48:23.332: INFO: stdout: "" Jun 20 09:48:23.332: INFO: Last 100 log lines of pod-b-hrmmv: Jun 20 09:48:23.332: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-4162 describe po server-z9hdt' Jun 20 09:48:23.466: INFO: stderr: "" Jun 20 09:48:23.466: INFO: stdout: "Name: server-z9hdt\nNamespace: e2e-network-policy-4162\nPriority: 0\nService Account: default\nNode: worker01/192.168.200.31\nStart Time: Tue, 20 Jun 2023 09:46:20 +0000\nLabels: pod-name=server\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::381\",\n \"10.128.6.191\"\n ],\n \"mac\": \"c6:76:f8:f4:87:98\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::381\",\n \"10.128.6.191\"\n ],\n \"mac\": \"c6:76:f8:f4:87:98\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Running\nIP: 10.128.6.191\nIPs:\n IP: 10.128.6.191\n IP: fd00::381\nContainers:\n server-container-80:\n Container ID: cri-o://7f145f0a9893e294a37c170c83ae3b2dc7ae9df72b5adc2befdc97d728941755\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 80/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Tue, 20 Jun 2023 09:46:21 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_80: foo\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pxh72 (ro)\n server-container-81:\n Container ID: cri-o://92c3d258244d882161da77f1bbf5f5cf2bf15ed7fcb9deb3fdaf533cc8771956\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 81/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Tue, 20 Jun 2023 09:46:22 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_81: foo\n Mounts:\n 
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pxh72 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-pxh72:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2m3s default-scheduler Successfully assigned e2e-network-policy-4162/server-z9hdt to worker01 by cp02\n Normal AddedInterface 2m2s multus Add eth0 [fd00::381/128 10.128.6.191/32] from cilium\n Normal Pulled 2m2s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 2m2s kubelet Created container server-container-80\n Normal Started 2m2s kubelet Started container server-container-80\n Normal Pulled 2m2s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 2m1s kubelet Created container server-container-81\n Normal Started 2m1s kubelet Started container server-container-81\n" Jun 20 09:48:23.466: INFO: Output of kubectl describe server-z9hdt: Name: server-z9hdt Namespace: e2e-network-policy-4162 Priority: 0 Service Account: default Node: worker01/192.168.200.31 Start Time: Tue, 20 Jun 2023 09:46:20 +0000 Labels: pod-name=server Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::381", "10.128.6.191" ], "mac": "c6:76:f8:f4:87:98", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::381", "10.128.6.191" ], "mac": "c6:76:f8:f4:87:98", "default": true, "dns": {} }] Status: Running IP: 10.128.6.191 IPs: IP: 10.128.6.191 IP: fd00::381 Containers: server-container-80: Container ID: cri-o://7f145f0a9893e294a37c170c83ae3b2dc7ae9df72b5adc2befdc97d728941755 Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 80/TCP Host Port: 0/TCP Args: porter State: Running Started: Tue, 20 Jun 2023 09:46:21 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: SERVE_PORT_80: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pxh72 (ro) server-container-81: Container ID: cri-o://92c3d258244d882161da77f1bbf5f5cf2bf15ed7fcb9deb3fdaf533cc8771956 Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 81/TCP Host Port: 0/TCP Args: porter State: Running Started: Tue, 20 Jun 2023 09:46:22 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s 
period=10s #success=1 #failure=3 Environment: SERVE_PORT_81: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pxh72 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-pxh72: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m3s default-scheduler Successfully assigned e2e-network-policy-4162/server-z9hdt to worker01 by cp02 Normal AddedInterface 2m2s multus Add eth0 [fd00::381/128 10.128.6.191/32] from cilium Normal Pulled 2m2s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 2m2s kubelet Created container server-container-80 Normal Started 2m2s kubelet Started container server-container-80 Normal Pulled 2m2s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 2m1s kubelet Created container server-container-81 Normal Started 2m1s kubelet Started container server-container-81 Jun 20 09:48:23.466: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-4162 logs server-z9hdt --tail=100' Jun 20 09:48:23.612: INFO: stderr: "Defaulted container \"server-container-80\" out of: server-container-80, server-container-81\n" Jun 20 09:48:23.612: INFO: stdout: "" Jun 20 09:48:23.612: INFO: Last 100 log lines of server-z9hdt: Jun 20 09:48:23.635: FAIL: Pod client-a-8r566 should be able to connect to service svc-server, but was not able to connect. 
Pod logs: TIMEOUT TIMEOUT TIMEOUT TIMEOUT TIMEOUT Current NetworkPolicies: [{{ } {allow-client-a-via-cidr-egress-rule e2e-network-policy-4162 ed25da6c-8671-428a-a7b8-86084b9d8688 48525 1 2023-06-20 09:46:42 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-20 09:46:42 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.6.191/32,Except:[],}}]}] [Egress]} {[]}}] Pods: [Pod: client-a-8r566, Status: &PodStatus{Phase:Failed,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:47:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:48:19 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:48:19 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:47:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.32,PodIP:10.128.10.140,StartTime:2023-06-20 09:47:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-06-20 09:47:33 +0000 UTC,FinishedAt:2023-06-20 09:48:19 +0000 UTC,ContainerID:cri-o://84754807bda715e063a68b82f630473d0fd8755f20c2fb1591386212f6aa7f2a,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://84754807bda715e063a68b82f630473d0fd8755f20c2fb1591386212f6aa7f2a,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.10.140,},PodIP{IP:fd00::54a,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: pod-b-hrmmv, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:46:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:46:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:46:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:46:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.31,PodIP:10.128.7.201,StartTime:2023-06-20 09:46:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:pod-b-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-20 09:46:33 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://499da937195ab905cdfd843b0acc73ca73b0acd91ee79e30ef87a1489404b9f2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.7.201,},PodIP{IP:fd00::32f,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: server-z9hdt, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:46:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:46:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:46:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:46:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.31,PodIP:10.128.6.191,StartTime:2023-06-20 09:46:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-20 09:46:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://7f145f0a9893e294a37c170c83ae3b2dc7ae9df72b5adc2befdc97d728941755,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-20 09:46:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://92c3d258244d882161da77f1bbf5f5cf2bf15ed7fcb9deb3fdaf533cc8771956,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.6.191,},PodIP{IP:fd00::381,},},EphemeralContainerStatuses:[]ContainerStatus{},} ] Full Stack Trace k8s.io/kubernetes/test/e2e/network/netpol.checkConnectivity(0xc000e3d950, 0xc0016f09a0, 0xc006e0f200, 0xc0068d4c80) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941 +0x355 k8s.io/kubernetes/test/e2e/network/netpol.testCanConnectProtocol(0xc000e3d950, 0xc0016f09a0, {0x8a33123, 0x8}, 0xc0068d4c80, 0xc001d62c70?, {0x8a24aec, 0x3}) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1913 +0x1be k8s.io/kubernetes/test/e2e/network/netpol.testCanConnect(...) 
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1897 k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.27.4() k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1410 +0x47 github.com/onsi/ginkgo/v2.By({0x8c00310, 0x3d}, {0xc0066a7e50, 0x1, 0x0?}) github.com/onsi/ginkgo/v2@v2.4.0/core_dsl.go:535 +0x525 k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.27() k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1409 +0x8fc github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2e8b77e, 0xc000dc2900}) github.com/onsi/ginkgo/v2@v2.4.0/internal/node.go:449 +0x1b github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:757 +0x98 created by github.com/onsi/ginkgo/v2/internal.(*Suite).runNode github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:745 +0xe3d STEP: Cleaning up the pod client-a-8r566 06/20/23 09:48:23.635 STEP: Cleaning up the policy. 06/20/23 09:48:23.654 STEP: Cleaning up the server. 06/20/23 09:48:23.665 STEP: Cleaning up the server's service. 06/20/23 09:48:23.681 [AfterEach] NetworkPolicy between server and client k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:96 STEP: Cleaning up the server. 06/20/23 09:48:23.732 STEP: Cleaning up the server's service. 06/20/23 09:48:23.754 [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] dump namespaces | framework.go:196 STEP: dump namespace information after failure 06/20/23 09:48:23.811 STEP: Collecting events from namespace "e2e-network-policy-4162". 06/20/23 09:48:23.811 STEP: Found 41 events. 06/20/23 09:48:23.817 Jun 20 09:48:23.817: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-8r566: { } Scheduled: Successfully assigned e2e-network-policy-4162/client-a-8r566 to worker02 by cp02 Jun 20 09:48:23.817: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-9n8nv: { } Scheduled: Successfully assigned e2e-network-policy-4162/client-a-9n8nv to worker01 by cp02 Jun 20 09:48:23.817: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-s92nn: { } Scheduled: Successfully assigned e2e-network-policy-4162/client-a-s92nn to worker03 by cp02 Jun 20 09:48:23.818: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-txk6h: { } Scheduled: Successfully assigned e2e-network-policy-4162/client-can-connect-80-txk6h to worker03 by cp02 Jun 20 09:48:23.818: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-2tcx9: { } Scheduled: Successfully assigned e2e-network-policy-4162/client-can-connect-81-2tcx9 to worker03 by cp02 Jun 20 09:48:23.818: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-b-hrmmv: { } Scheduled: Successfully assigned e2e-network-policy-4162/pod-b-hrmmv to worker01 by cp02 Jun 20 09:48:23.818: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-z9hdt: { } Scheduled: Successfully assigned e2e-network-policy-4162/server-z9hdt to worker01 by cp02 Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:21 +0000 UTC - event for server-z9hdt: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:21 +0000 UTC - event for server-z9hdt: {kubelet worker01} Created: Created container server-container-80 Jun 20 
09:48:23.818: INFO: At 2023-06-20 09:46:21 +0000 UTC - event for server-z9hdt: {kubelet worker01} Started: Started container server-container-80 Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:21 +0000 UTC - event for server-z9hdt: {multus } AddedInterface: Add eth0 [fd00::381/128 10.128.6.191/32] from cilium Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:21 +0000 UTC - event for server-z9hdt: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:22 +0000 UTC - event for server-z9hdt: {kubelet worker01} Created: Created container server-container-81 Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:22 +0000 UTC - event for server-z9hdt: {kubelet worker01} Started: Started container server-container-81 Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:25 +0000 UTC - event for client-can-connect-80-txk6h: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:25 +0000 UTC - event for client-can-connect-80-txk6h: {multus } AddedInterface: Add eth0 [fd00::461/128 10.128.8.172/32] from cilium Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:25 +0000 UTC - event for client-can-connect-80-txk6h: {kubelet worker03} Created: Created container client Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:25 +0000 UTC - event for client-can-connect-80-txk6h: {kubelet worker03} Started: Started container client Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:29 +0000 UTC - event for client-can-connect-81-2tcx9: {kubelet worker03} Started: Started container client Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:29 +0000 UTC - event for client-can-connect-81-2tcx9: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:29 +0000 UTC - event for client-can-connect-81-2tcx9: {kubelet worker03} Created: Created container client Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:29 +0000 UTC - event for client-can-connect-81-2tcx9: {multus } AddedInterface: Add eth0 [fd00::408/128 10.128.9.13/32] from cilium Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:33 +0000 UTC - event for pod-b-hrmmv: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:33 +0000 UTC - event for pod-b-hrmmv: {multus } AddedInterface: Add eth0 [fd00::32f/128 10.128.7.201/32] from cilium Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:33 +0000 UTC - event for pod-b-hrmmv: {kubelet worker01} Created: Created container pod-b-container-80 Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:33 +0000 UTC - event for pod-b-hrmmv: {kubelet worker01} Started: Started container pod-b-container-80 Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:37 +0000 UTC - event for client-a-9n8nv: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:37 +0000 UTC - event for client-a-9n8nv: {multus } AddedInterface: Add eth0 [fd00::37a/128 
10.128.7.197/32] from cilium Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:38 +0000 UTC - event for client-a-9n8nv: {kubelet worker01} Started: Started container client Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:38 +0000 UTC - event for client-a-9n8nv: {kubelet worker01} Created: Created container client Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:43 +0000 UTC - event for client-a-s92nn: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:43 +0000 UTC - event for client-a-s92nn: {multus } AddedInterface: Add eth0 [fd00::447/128 10.128.8.5/32] from cilium Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:44 +0000 UTC - event for client-a-s92nn: {kubelet worker03} Started: Started container client Jun 20 09:48:23.818: INFO: At 2023-06-20 09:46:44 +0000 UTC - event for client-a-s92nn: {kubelet worker03} Created: Created container client Jun 20 09:48:23.818: INFO: At 2023-06-20 09:47:33 +0000 UTC - event for client-a-8r566: {multus } AddedInterface: Add eth0 [fd00::54a/128 10.128.10.140/32] from cilium Jun 20 09:48:23.818: INFO: At 2023-06-20 09:47:33 +0000 UTC - event for client-a-8r566: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 20 09:48:23.818: INFO: At 2023-06-20 09:47:33 +0000 UTC - event for client-a-8r566: {kubelet worker02} Started: Started container client Jun 20 09:48:23.818: INFO: At 2023-06-20 09:47:33 +0000 UTC - event for client-a-8r566: {kubelet worker02} Created: Created container client Jun 20 09:48:23.818: INFO: At 2023-06-20 09:48:23 +0000 UTC - event for pod-b-hrmmv: {kubelet worker01} Killing: Stopping container pod-b-container-80 Jun 20 09:48:23.818: INFO: At 2023-06-20 09:48:23 +0000 UTC - event for server-z9hdt: {kubelet worker01} Killing: Stopping container server-container-80 Jun 20 09:48:23.818: INFO: At 2023-06-20 09:48:23 +0000 UTC - event for server-z9hdt: {kubelet worker01} Killing: Stopping container server-container-81 Jun 20 09:48:23.823: INFO: POD NODE PHASE GRACE CONDITIONS Jun 20 09:48:23.823: INFO: pod-b-hrmmv worker01 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-20 09:46:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-06-20 09:46:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-06-20 09:46:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-20 09:46:32 +0000 UTC }] Jun 20 09:48:23.823: INFO: server-z9hdt worker01 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-20 09:46:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-06-20 09:46:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-06-20 09:46:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-20 09:46:20 +0000 UTC }] Jun 20 09:48:23.823: INFO: Jun 20 09:48:23.832: INFO: skipping dumping cluster info - cluster too large [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] tear down framework | framework.go:193 STEP: Destroying namespace "e2e-network-policy-4162" for this suite. 06/20/23 09:48:23.832 fail [k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941]: Jun 20 09:48:23.635: Pod client-a-8r566 should be able to connect to service svc-server, but was not able to connect. 
Pod logs: TIMEOUT TIMEOUT TIMEOUT TIMEOUT TIMEOUT Current NetworkPolicies: [{{ } {allow-client-a-via-cidr-egress-rule e2e-network-policy-4162 ed25da6c-8671-428a-a7b8-86084b9d8688 48525 1 2023-06-20 09:46:42 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-20 09:46:42 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.6.191/32,Except:[],}}]}] [Egress]} {[]}}] Pods: [Pod: client-a-8r566, Status: &PodStatus{Phase:Failed,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:47:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:48:19 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:48:19 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:47:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.32,PodIP:10.128.10.140,StartTime:2023-06-20 09:47:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-06-20 09:47:33 +0000 UTC,FinishedAt:2023-06-20 09:48:19 +0000 UTC,ContainerID:cri-o://84754807bda715e063a68b82f630473d0fd8755f20c2fb1591386212f6aa7f2a,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://84754807bda715e063a68b82f630473d0fd8755f20c2fb1591386212f6aa7f2a,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.10.140,},PodIP{IP:fd00::54a,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: pod-b-hrmmv, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:46:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:46:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:46:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:46:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.31,PodIP:10.128.7.201,StartTime:2023-06-20 09:46:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:pod-b-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-20 09:46:33 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://499da937195ab905cdfd843b0acc73ca73b0acd91ee79e30ef87a1489404b9f2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.7.201,},PodIP{IP:fd00::32f,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: server-z9hdt, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:46:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:46:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:46:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-20 09:46:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.31,PodIP:10.128.6.191,StartTime:2023-06-20 09:46:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-20 09:46:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://7f145f0a9893e294a37c170c83ae3b2dc7ae9df72b5adc2befdc97d728941755,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-20 09:46:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://92c3d258244d882161da77f1bbf5f5cf2bf15ed7fcb9deb3fdaf533cc8771956,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.6.191,},PodIP{IP:fd00::381,},},EphemeralContainerStatuses:[]ContainerStatus{},} ] Ginkgo exit error 1: exit with code 1 failed: (2m5s) 2023-06-20T09:48:23 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (3m48s) 2023-06-20T09:49:25 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 3/67/67 "[sig-network] Service 
endpoints latency should not be very high [Conformance] [Serial] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]"
passed: (12s) 2023-06-20T09:49:37 "[sig-network] Service endpoints latency should not be very high [Conformance] [Serial] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]"

Failing tests:

[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]

error: 3 fail, 64 pass, 0 skip (7m25s)
```
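For anyone skimming the log above: the first failing test ("should allow egress access to server in CIDR block") applies an egress-only NetworkPolicy that restricts the client pod to the server pod's IPv4 address. Reconstructed as YAML from the policy object dumped in the log, this is only a sketch; the name, namespace, pod selector, policy type, and CIDR are copied from the log, while the manifest layout is ours, not the exact object the e2e framework creates:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-a-via-cidr-egress-rule
  namespace: e2e-network-policy-4162
spec:
  podSelector:
    matchLabels:
      pod-name: client-a
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            # 10.128.6.191 is the server pod's IPv4 address in this run (from the log)
            cidr: 10.128.6.191/32
```

With that policy in place, the client pod runs the connection loop shown in the describe output (`for i in $(seq 1 5); do /agnhost connect 172.30.39.150:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1`), which targets what appears to be the svc-server cluster IP rather than the pod IP, times out five times, and leaves the pod in the Failed phase reported above.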

Let's merge this.