Using Jarno's PR and the setup he deployed, we could observe the two expected failures; all other tests passed:
```
Collection of node logs and analysis took: 614.453294ms
I1019 15:37:41.543572 1 request.go:690] Waited for 1.155564766s due to client-side throttling, not priority and fairness, request: GET:https://api.ocp1.k8s.work:6443/api/v1/nodes/cp01/proxy/logs/kube-apiserver/audit-2023-10-19T13-46-57.158.log
```
Failing tests:

```
[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]
```

Suite run returned error: 2 fail, 65 pass, 0 skip (7m29s)
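For context, both failing cases exercise `NetworkPolicy.spec.egress[].to[].ipBlock`. A minimal sketch of the kind of policy the e2e suite builds is below — the name, namespace, and CIDRs here are illustrative placeholders, not the values the test actually generates (the suite derives the CIDR from the server pod's IP at runtime):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-server-cidr   # illustrative name
  namespace: e2e-network-policy-9251
spec:
  # select the client pods the test creates
  podSelector:
    matchLabels:
      pod-name: client-a
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            # allow egress to the server's range, with a carve-out;
            # the "overlapping" test checks the plugin's handling of
            # an IP that falls under both cidr and except
            cidr: 172.30.179.0/24       # illustrative CIDR
            except:
              - 172.30.179.1/32         # illustrative exception
```

Whether traffic to an address in `except` is dropped (and, in the overlap test, whether a second policy can still allow it) is exactly what the network plugin under test has to get right, which is why these two cases are the expected failures here.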
results.txt
```
started: 0/1/67 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/2/67 "[sig-network] DNS should provide DNS for services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/3/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/4/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/5/67 "[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/6/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/7/67 "[sig-network] DNS should provide DNS for pods for Hostname [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/8/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/9/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/10/67 "[sig-network] DNS should support configurable pod DNS nameservers [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (21.5s) 2023-10-19T15:30:32 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/11/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (22.1s) 2023-10-19T15:30:33 "[sig-network] DNS should support configurable pod DNS nameservers [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/12/67 "[sig-network] Services should complete a service status lifecycle [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.4s) 2023-10-19T15:30:34 "[sig-network] Services should complete a service status lifecycle [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/13/67 "[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (32.8s) 2023-10-19T15:30:43 "[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/14/67 "[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.3s) 2023-10-19T15:30:45 "[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/15/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (34.2s) 2023-10-19T15:30:45 "[sig-network] DNS should provide DNS for pods for Hostname [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/16/67 "[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (39.7s) 2023-10-19T15:30:50 "[sig-network] DNS should provide DNS for services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/17/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (36.5s) 2023-10-19T15:31:21 "[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/18/67 "[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (50s) 2023-10-19T15:31:24 "[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/19/67 "[sig-network] DNS should provide DNS for ExternalName services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (8.3s) 2023-10-19T15:31:30 "[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/20/67 "[sig-network] NetworkPolicy API should support creating NetworkPolicy API operations [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1.3s) 2023-10-19T15:31:31 "[sig-network] NetworkPolicy API should support creating NetworkPolicy API operations [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/21/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m29s) 2023-10-19T15:31:39 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/22/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (18.5s) 2023-10-19T15:31:43 "[sig-network] DNS should provide DNS for ExternalName services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/23/67 "[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1m35s) 2023-10-19T15:31:45 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/24/67 "[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (8.4s) 2023-10-19T15:31:51 "[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/25/67 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (8.9s) 2023-10-19T15:31:54 "[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/26/67 "[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (8.5s) 2023-10-19T15:32:03 "[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/27/67 "[sig-network] DNS should provide DNS for the cluster [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (15.6s) 2023-10-19T15:32:07 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/28/67 "[sig-network] Services should test the lifecycle of an Endpoint [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.1s) 2023-10-19T15:32:08 "[sig-network] Services should test the lifecycle of an Endpoint [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/29/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (5.2s) 2023-10-19T15:32:08 "[sig-network] DNS should provide DNS for the cluster [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/30/67 "[sig-network] Services should serve multiport endpoints from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (11.5s) 2023-10-19T15:32:19 "[sig-network] Services should serve multiport endpoints from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/31/67 "[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (2m15s) 2023-10-19T15:32:25 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/32/67 "[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (11.1s) 2023-10-19T15:32:30 "[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/33/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (7.8s) 2023-10-19T15:32:33 "[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/34/67 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (2m25s) 2023-10-19T15:32:35 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/35/67 "[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (3.3s) 2023-10-19T15:32:37 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/36/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m6s) 2023-10-19T15:32:37 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/37/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (2m29s) 2023-10-19T15:32:40 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/38/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (6s) 2023-10-19T15:32:41 "[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/39/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (2m14s) 2023-10-19T15:32:46 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/40/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m8s) 2023-10-19T15:32:47 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/41/67 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
Oct 19 15:30:50.972: INFO: Enabling in-tree volume drivers
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/test.go:58
[BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly]
set up framework | framework.go:178
STEP: Creating a kubernetes client 10/19/23 15:30:51.782
STEP: Building a namespace api object, basename network-policy 10/19/23 15:30:51.783
Oct 19 15:30:51.870: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace 10/19/23 15:30:52.027
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/19/23 15:30:52.031
[BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly]
k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly]
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:72
[BeforeEach] NetworkPolicy between server and client
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:78
STEP: Creating a simple server that serves on port 80 and 81. 10/19/23 15:30:52.036
STEP: Creating a server pod server in namespace e2e-network-policy-9251 10/19/23 15:30:52.036
W1019 15:30:52.065206 1003 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "server-container-80", "server-container-81" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "server-container-80", "server-container-81" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "server-container-80", "server-container-81" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "server-container-80", "server-container-81" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Oct 19 15:30:52.065: INFO: Created pod server-jvr65
STEP: Creating a service svc-server for pod server in namespace e2e-network-policy-9251 10/19/23 15:30:52.065
Oct 19 15:30:52.095: INFO: Created service svc-server
STEP: Waiting for pod ready 10/19/23 15:30:52.095
Oct 19 15:30:52.095: INFO: Waiting up to 5m0s for pod "server-jvr65" in namespace "e2e-network-policy-9251" to be "running and ready"
Oct 19 15:30:52.106: INFO: Pod "server-jvr65": Phase="Pending", Reason="", readiness=false. Elapsed: 10.36297ms
Oct 19 15:30:52.106: INFO: The phase of Pod server-jvr65 is Pending, waiting for it to be Running (with Ready = true)
Oct 19 15:30:54.111: INFO: Pod "server-jvr65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015326203s
Oct 19 15:30:54.111: INFO: The phase of Pod server-jvr65 is Pending, waiting for it to be Running (with Ready = true)
Oct 19 15:30:56.111: INFO: Pod "server-jvr65": Phase="Running", Reason="", readiness=true. Elapsed: 4.015578759s
Oct 19 15:30:56.111: INFO: The phase of Pod server-jvr65 is Running (Ready = true)
Oct 19 15:30:56.111: INFO: Pod "server-jvr65" satisfied condition "running and ready"
STEP: Testing pods can connect to both ports when no policy is present. 10/19/23 15:30:56.111
STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. 10/19/23 15:30:56.111
W1019 15:30:56.237329 1003 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Oct 19 15:30:56.237: INFO: Waiting for client-can-connect-80-gxfqz to complete.
Oct 19 15:30:56.237: INFO: Waiting up to 3m0s for pod "client-can-connect-80-gxfqz" in namespace "e2e-network-policy-9251" to be "completed"
Oct 19 15:30:56.438: INFO: Pod "client-can-connect-80-gxfqz": Phase="Pending", Reason="", readiness=false. Elapsed: 201.411818ms
Oct 19 15:30:58.444: INFO: Pod "client-can-connect-80-gxfqz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207461315s
Oct 19 15:31:00.446: INFO: Pod "client-can-connect-80-gxfqz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209467723s
Oct 19 15:31:02.446: INFO: Pod "client-can-connect-80-gxfqz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.209074138s
Oct 19 15:31:02.446: INFO: Pod "client-can-connect-80-gxfqz" satisfied condition "completed"
Oct 19 15:31:02.446: INFO: Waiting for client-can-connect-80-gxfqz to complete.
Oct 19 15:31:02.446: INFO: Waiting up to 5m0s for pod "client-can-connect-80-gxfqz" in namespace "e2e-network-policy-9251" to be "Succeeded or Failed"
Oct 19 15:31:02.453: INFO: Pod "client-can-connect-80-gxfqz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.93149ms
STEP: Saw pod success 10/19/23 15:31:02.453
Oct 19 15:31:02.453: INFO: Pod "client-can-connect-80-gxfqz" satisfied condition "Succeeded or Failed"
STEP: Cleaning up the pod client-can-connect-80-gxfqz 10/19/23 15:31:02.453
STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. 10/19/23 15:31:02.469
W1019 15:31:02.477504 1003 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Oct 19 15:31:02.477: INFO: Waiting for client-can-connect-81-86hs8 to complete.
Oct 19 15:31:02.477: INFO: Waiting up to 3m0s for pod "client-can-connect-81-86hs8" in namespace "e2e-network-policy-9251" to be "completed"
Oct 19 15:31:02.482: INFO: Pod "client-can-connect-81-86hs8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432086ms
Oct 19 15:31:04.488: INFO: Pod "client-can-connect-81-86hs8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010707917s
Oct 19 15:31:06.488: INFO: Pod "client-can-connect-81-86hs8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010430141s
Oct 19 15:31:06.488: INFO: Pod "client-can-connect-81-86hs8" satisfied condition "completed"
Oct 19 15:31:06.488: INFO: Waiting for client-can-connect-81-86hs8 to complete.
Oct 19 15:31:06.488: INFO: Waiting up to 5m0s for pod "client-can-connect-81-86hs8" in namespace "e2e-network-policy-9251" to be "Succeeded or Failed"
Oct 19 15:31:06.493: INFO: Pod "client-can-connect-81-86hs8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.96631ms
STEP: Saw pod success 10/19/23 15:31:06.493
Oct 19 15:31:06.493: INFO: Pod "client-can-connect-81-86hs8" satisfied condition "Succeeded or Failed"
STEP: Cleaning up the pod client-can-connect-81-86hs8 10/19/23 15:31:06.493
[It] should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1477
STEP: Creating client-a which should not be able to contact the server. 10/19/23 15:31:06.524
STEP: Creating client pod client-a that should not be able to connect to svc-server. 10/19/23 15:31:06.524
W1019 15:31:06.533687 1003 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Oct 19 15:31:06.533: INFO: Waiting for client-a-2krxp to complete.
Oct 19 15:31:06.533: INFO: Waiting up to 5m0s for pod "client-a-2krxp" in namespace "e2e-network-policy-9251" to be "Succeeded or Failed"
Oct 19 15:31:06.538: INFO: Pod "client-a-2krxp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.868111ms
Oct 19 15:31:08.544: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 2.010977012s
Oct 19 15:31:10.546: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 4.012221183s
Oct 19 15:31:12.544: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 6.01032889s
Oct 19 15:31:14.544: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 8.010548099s
Oct 19 15:31:16.543: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 10.010059935s
Oct 19 15:31:18.544: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 12.010629186s
Oct 19 15:31:20.726: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 14.192780641s
Oct 19 15:31:22.548: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 16.014559411s
Oct 19 15:31:24.543: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 18.009944112s
Oct 19 15:31:26.544: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 20.010641123s
Oct 19 15:31:28.545: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 22.011447235s
Oct 19 15:31:30.545: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 24.012089791s
Oct 19 15:31:32.549: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 26.015487785s
Oct 19 15:31:34.544: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 28.010508057s
Oct 19 15:31:36.546: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 30.01228255s
Oct 19 15:31:38.544: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 32.010414791s
Oct 19 15:31:40.545: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 34.01182312s
Oct 19 15:31:42.548: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 36.014894484s
Oct 19 15:31:44.545: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 38.011381785s
Oct 19 15:31:46.549: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 40.015748477s
Oct 19 15:31:48.545: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 42.01176868s
Oct 19 15:31:50.544: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 44.011173169s
Oct 19 15:31:52.548: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 46.015026207s
Oct 19 15:31:54.544: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=false. Elapsed: 48.010917573s
Oct 19 15:31:56.543: INFO: Pod "client-a-2krxp": Phase="Failed", Reason="", readiness=false. Elapsed: 50.009832587s
STEP: Cleaning up the pod client-a-2krxp 10/19/23 15:31:56.543
STEP: Creating client-a which should now be able to contact the server. 10/19/23 15:31:56.577
STEP: Creating client pod client-a that should successfully connect to svc-server. 10/19/23 15:31:56.577
W1019 15:31:56.590709 1003 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Oct 19 15:31:56.590: INFO: Waiting for client-a-bn9h5 to complete.
Oct 19 15:31:56.590: INFO: Waiting up to 3m0s for pod "client-a-bn9h5" in namespace "e2e-network-policy-9251" to be "completed"
Oct 19 15:31:56.596: INFO: Pod "client-a-bn9h5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.401813ms
Oct 19 15:31:58.601: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 2.010880863s
Oct 19 15:32:00.603: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 4.012742406s
Oct 19 15:32:02.603: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 6.012510975s
Oct 19 15:32:04.603: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 8.012453461s
Oct 19 15:32:06.601: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 10.010864297s
Oct 19 15:32:08.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 12.011208398s
Oct 19 15:32:10.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 14.011648218s
Oct 19 15:32:12.605: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 16.014618848s
Oct 19 15:32:14.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 18.011482845s
Oct 19 15:32:16.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 20.011476571s
Oct 19 15:32:18.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 22.011281889s
Oct 19 15:32:20.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 24.011593907s
Oct 19 15:32:22.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 26.011161648s
Oct 19 15:32:24.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 28.011463763s
Oct 19 15:32:26.600: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 30.009771048s
Oct 19 15:32:28.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 32.011942956s
Oct 19 15:32:30.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 34.011450174s
Oct 19 15:32:32.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 36.011454271s
Oct 19 15:32:34.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 38.0119004s
Oct 19 15:32:36.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 40.011646434s
Oct 19 15:32:38.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 42.01163537s
Oct 19 15:32:40.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 44.011685131s
Oct 19 15:32:42.601: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 46.01106771s
Oct 19 15:32:44.604: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=false. Elapsed: 48.013484903s
Oct 19 15:32:46.609: INFO: Pod "client-a-bn9h5": Phase="Failed", Reason="", readiness=false. Elapsed: 50.018335382s
Oct 19 15:32:46.609: INFO: Pod "client-a-bn9h5" satisfied condition "completed"
Oct 19 15:32:46.609: INFO: Waiting for client-a-bn9h5 to complete.
Oct 19 15:32:46.609: INFO: Waiting up to 5m0s for pod "client-a-bn9h5" in namespace "e2e-network-policy-9251" to be "Succeeded or Failed"
Oct 19 15:32:46.615: INFO: Pod "client-a-bn9h5": Phase="Failed", Reason="", readiness=false. Elapsed: 6.235765ms
Oct 19 15:32:46.620: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-9251 describe po client-a-bn9h5'
Oct 19 15:32:46.805: INFO: stderr: ""
Oct 19 15:32:46.805: INFO:
Output of kubectl describe client-a-bn9h5:
Name: client-a-bn9h5
Namespace: e2e-network-policy-9251
Priority: 0
Service Account: default
Node: worker03/192.168.200.33
Start Time: Thu, 19 Oct 2023 15:31:56 +0000
Labels: pod-name=client-a
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "portmap",
"interface": "eth0",
"ips": [
"fd00::50a",
"10.128.10.171"
],
"mac": "0a:44:4b:68:32:63",
"default": true,
"dns": {},
"gateway": [
"fd00::562",
"10.128.10.226"
]
}]
Status: Failed
IP: 10.128.10.171
IPs:
IP: 10.128.10.171
IP: fd00::50a
Containers:
client:
Container ID: cri-o://0f1612eeff4d6c474b6587dd8090733d0ba68204ccdab4c87da3a9a5c1101971
Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-
Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e
Port:
Host Port:
Command:
/bin/sh
Args:
-c
for i in $(seq 1 5); do /agnhost connect 172.30.179.227:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1
State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 19 Oct 2023 15:31:57 +0000
Finished: Thu, 19 Oct 2023 15:32:42 +0000
Ready: False
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qh8ns (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-qh8ns:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
ConfigMapName: openshift-service-ca.crt
ConfigMapOptional:
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 50s default-scheduler Successfully assigned e2e-network-policy-9251/client-a-bn9h5 to worker03
Normal AddedInterface 49s multus Add eth0 [fd00::50a/128 10.128.10.171/32] from portmap
Normal Pulled 49s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Normal Created 49s kubelet Created container client
Normal Started 49s kubelet Started container client
Oct 19 15:32:46.806: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-9251 logs client-a-bn9h5 --tail=100'
Oct 19 15:32:46.967: INFO: stderr: ""
Oct 19 15:32:46.967: INFO: stdout: "TIMEOUT\nTIMEOUT\nTIMEOUT\nTIMEOUT\nTIMEOUT\n"
Oct 19 15:32:46.967: INFO:
Last 100 log lines of client-a-bn9h5:
TIMEOUT
TIMEOUT
TIMEOUT
TIMEOUT
TIMEOUT
Oct 19 15:32:46.967: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-9251 describe po server-jvr65'
Oct 19 15:32:47.117: INFO: stderr: ""
Oct 19 15:32:47.117: INFO:
Output of kubectl describe server-jvr65:
Name: server-jvr65
Namespace: e2e-network-policy-9251
Priority: 0
Service Account: default
Node: worker02/192.168.200.32
Start Time: Thu, 19 Oct 2023 15:30:52 +0000
Labels: pod-name=server
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "portmap",
"interface": "eth0",
"ips": [
"fd00::4ec",
"10.128.8.234"
],
"mac": "1a:0d:3e:73:b0:5d",
"default": true,
"dns": {},
"gateway": [
"fd00::415",
"10.128.9.247"
]
}]
Status: Running
IP: 10.128.8.234
IPs:
IP: 10.128.8.234
IP: fd00::4ec
Containers:
server-container-80:
Container ID: cri-o://1d6736cf938e5ea755abb6b894032556e1fa3216fe01815f49254b57ae49a5c9
Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-
Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e
Port: 80/TCP
Host Port: 0/TCP
Args:
porter
State: Running
Started: Thu, 19 Oct 2023 15:30:52 +0000
Ready: True
Restart Count: 0
Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SERVE_PORT_80: foo
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j48sq (ro)
server-container-81:
Container ID: cri-o://0fd26427ae7c3ea0780dcd3665871f4f026a0de1d74672a7f596c0c21f3ebe7c
Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-
Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e
Port: 81/TCP
Host Port: 0/TCP
Args:
porter
State: Running
Started: Thu, 19 Oct 2023 15:30:53 +0000
Ready: True
Restart Count: 0
Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SERVE_PORT_81: foo
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j48sq (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-j48sq:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
ConfigMapName: openshift-service-ca.crt
ConfigMapOptional:
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 115s default-scheduler Successfully assigned e2e-network-policy-9251/server-jvr65 to worker02
Normal AddedInterface 115s multus Add eth0 [fd00::4ec/128 10.128.8.234/32] from portmap
Normal Pulled 115s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Normal Created 115s kubelet Created container server-container-80
Normal Started 115s kubelet Started container server-container-80
Normal Pulled 115s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Normal Created 114s kubelet Created container server-container-81
Normal Started 114s kubelet Started container server-container-81
Oct 19 15:32:47.117: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-9251 logs server-jvr65 --tail=100'
Oct 19 15:32:47.350: INFO: stderr: "Defaulted container \"server-container-80\" out of: server-container-80, server-container-81\n"
Oct 19 15:32:47.350: INFO: stdout: ""
Oct 19 15:32:47.350: INFO:
Last 100 log lines of server-jvr65:
Oct 19 15:32:47.377: FAIL: Pod client-a-bn9h5 should be able to connect to service svc-server, but was not able to connect.
Pod logs:
TIMEOUT
TIMEOUT
TIMEOUT
TIMEOUT
TIMEOUT
Current NetworkPolicies:
[{{ } {allow-client-a-via-cidr-egress-rule e2e-network-policy-9251 08208328-6fc9-440c-a036-163c34001cb9 144630 1 2023-10-19 15:31:56 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-10-19 15:31:56 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.8.234/32,Except:[],}}]}] [Egress]} {[]}} {{ } {deny-client-a-via-except-cidr-egress-rule e2e-network-policy-9251 d688b447-b3eb-4420-9fa3-8acd2bcd22d8 142871 1 2023-10-19 15:31:06 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-10-19 15:31:06 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.8.0/24,Except:[10.128.8.234/32],}}]}] [Egress]} {[]}}]
Pods:
[Pod: client-a-bn9h5, Status: &PodStatus{Phase:Failed,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:31:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:32:43 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:32:43 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:31:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.10.171,StartTime:2023-10-19 15:31:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-10-19 15:31:57 +0000 UTC,FinishedAt:2023-10-19 15:32:42 +0000 UTC,ContainerID:cri-o://0f1612eeff4d6c474b6587dd8090733d0ba68204ccdab4c87da3a9a5c1101971,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://0f1612eeff4d6c474b6587dd8090733d0ba68204ccdab4c87da3a9a5c1101971,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.10.171,},PodIP{IP:fd00::50a,},},EphemeralContainerStatuses:[]ContainerStatus{},}
Pod: server-jvr65, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:30:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:30:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:30:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:30:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.32,PodIP:10.128.8.234,StartTime:2023-10-19 15:30:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-19 15:30:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://1d6736cf938e5ea755abb6b894032556e1fa3216fe01815f49254b57ae49a5c9,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-19 15:30:53 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://0fd26427ae7c3ea0780dcd3665871f4f026a0de1d74672a7f596c0c21f3ebe7c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.8.234,},PodIP{IP:fd00::4ec,},},EphemeralContainerStatuses:[]ContainerStatus{},}
]
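For readability, the flattened Go-struct dump under "Current NetworkPolicies:" above corresponds to the following two NetworkPolicy manifests. These are reconstructed by hand from the dump (names, selectors, CIDRs, and policyTypes are taken verbatim from it; the YAML layout is not captured from the cluster):

```yaml
# Reconstructed from the test's policy dump, not exported from the cluster.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-a-via-cidr-egress-rule
  namespace: e2e-network-policy-9251
spec:
  podSelector:
    matchLabels:
      pod-name: client-a
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            # server-jvr65's pod IP
            cidr: 10.128.8.234/32
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-client-a-via-except-cidr-egress-rule
  namespace: e2e-network-policy-9251
spec:
  podSelector:
    matchLabels:
      pod-name: client-a
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.128.8.0/24
            except:
              # the server IP is carved out of the allowed /24
              - 10.128.8.234/32
```

Per the test's intent, the /32 allow rule should win for an IP that overlaps both the second policy's CIDR and its except list, so client-a is expected to reach the server.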
Full Stack Trace
k8s.io/kubernetes/test/e2e/network/netpol.checkConnectivity(0xc001b58b40, 0xc0017aa9a0, 0xc005183680, 0xc0042d9180)
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941 +0x355
k8s.io/kubernetes/test/e2e/network/netpol.testCanConnectProtocol(0xc001b58b40, 0xc0017aa9a0, {0x8a3ce56, 0x8}, 0xc0042d9180, 0xc001bd2e00?, {0x8a2e80e, 0x3})
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1913 +0x1be
k8s.io/kubernetes/test/e2e/network/netpol.testCanConnect(...)
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1897
k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.29.2()
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1569 +0x47
github.com/onsi/ginkgo/v2.By({0x8c2a02f, 0x41}, {0xc005417e50, 0x1, 0x0?})
github.com/onsi/ginkgo/v2@v2.4.0/core_dsl.go:535 +0x525
k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.29()
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1568 +0xb5b
github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2e8fa1e, 0xc000e92f00})
github.com/onsi/ginkgo/v2@v2.4.0/internal/node.go:449 +0x1b
github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:757 +0x98
created by github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:745 +0xe3d
STEP: Cleaning up the pod client-a-bn9h5 10/19/23 15:32:47.377
STEP: Cleaning up the policy. 10/19/23 15:32:47.401
[AfterEach] NetworkPolicy between server and client
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:96
STEP: Cleaning up the server. 10/19/23 15:32:47.421
STEP: Cleaning up the server's service. 10/19/23 15:32:47.444
[DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly]
k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly]
dump namespaces | framework.go:196
STEP: dump namespace information after failure 10/19/23 15:32:47.485
STEP: Collecting events from namespace "e2e-network-policy-9251". 10/19/23 15:32:47.485
STEP: Found 30 events. 10/19/23 15:32:47.511
Oct 19 15:32:47.512: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-2krxp: { } Scheduled: Successfully assigned e2e-network-policy-9251/client-a-2krxp to worker03
Oct 19 15:32:47.512: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-bn9h5: { } Scheduled: Successfully assigned e2e-network-policy-9251/client-a-bn9h5 to worker03
Oct 19 15:32:47.512: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-gxfqz: { } Scheduled: Successfully assigned e2e-network-policy-9251/client-can-connect-80-gxfqz to worker03
Oct 19 15:32:47.512: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-86hs8: { } Scheduled: Successfully assigned e2e-network-policy-9251/client-can-connect-81-86hs8 to worker03
Oct 19 15:32:47.512: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-jvr65: { } Scheduled: Successfully assigned e2e-network-policy-9251/server-jvr65 to worker02
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:30:52 +0000 UTC - event for server-jvr65: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:30:52 +0000 UTC - event for server-jvr65: {kubelet worker02} Created: Created container server-container-80
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:30:52 +0000 UTC - event for server-jvr65: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:30:52 +0000 UTC - event for server-jvr65: {multus } AddedInterface: Add eth0 [fd00::4ec/128 10.128.8.234/32] from portmap
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:30:52 +0000 UTC - event for server-jvr65: {kubelet worker02} Started: Started container server-container-80
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:30:53 +0000 UTC - event for server-jvr65: {kubelet worker02} Created: Created container server-container-81
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:30:53 +0000 UTC - event for server-jvr65: {kubelet worker02} Started: Started container server-container-81
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:30:57 +0000 UTC - event for client-can-connect-80-gxfqz: {multus } AddedInterface: Add eth0 [fd00::550/128 10.128.11.149/32] from portmap
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:30:58 +0000 UTC - event for client-can-connect-80-gxfqz: {kubelet worker03} Started: Started container client
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:30:58 +0000 UTC - event for client-can-connect-80-gxfqz: {kubelet worker03} Created: Created container client
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:30:58 +0000 UTC - event for client-can-connect-80-gxfqz: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:31:03 +0000 UTC - event for client-can-connect-81-86hs8: {multus } AddedInterface: Add eth0 [fd00::5fc/128 10.128.11.215/32] from portmap
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:31:03 +0000 UTC - event for client-can-connect-81-86hs8: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:31:03 +0000 UTC - event for client-can-connect-81-86hs8: {kubelet worker03} Created: Created container client
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:31:03 +0000 UTC - event for client-can-connect-81-86hs8: {kubelet worker03} Started: Started container client
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:31:07 +0000 UTC - event for client-a-2krxp: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:31:07 +0000 UTC - event for client-a-2krxp: {kubelet worker03} Started: Started container client
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:31:07 +0000 UTC - event for client-a-2krxp: {kubelet worker03} Created: Created container client
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:31:07 +0000 UTC - event for client-a-2krxp: {multus } AddedInterface: Add eth0 [fd00::5f8/128 10.128.11.138/32] from portmap
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:31:57 +0000 UTC - event for client-a-bn9h5: {kubelet worker03} Created: Created container client
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:31:57 +0000 UTC - event for client-a-bn9h5: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:31:57 +0000 UTC - event for client-a-bn9h5: {multus } AddedInterface: Add eth0 [fd00::50a/128 10.128.10.171/32] from portmap
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:31:57 +0000 UTC - event for client-a-bn9h5: {kubelet worker03} Started: Started container client
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:32:47 +0000 UTC - event for server-jvr65: {kubelet worker02} Killing: Stopping container server-container-80
Oct 19 15:32:47.512: INFO: At 2023-10-19 15:32:47 +0000 UTC - event for server-jvr65: {kubelet worker02} Killing: Stopping container server-container-81
Oct 19 15:32:47.518: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 19 15:32:47.518: INFO: server-jvr65 worker02 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-10-19 15:30:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-10-19 15:30:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-10-19 15:30:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-19 15:30:52 +0000 UTC }]
Oct 19 15:32:47.518: INFO:
Oct 19 15:32:47.526: INFO: skipping dumping cluster info - cluster too large
[DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly]
tear down framework | framework.go:193
STEP: Destroying namespace "e2e-network-policy-9251" for this suite. 10/19/23 15:32:47.526
fail [k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941]: Oct 19 15:32:47.377: Pod client-a-bn9h5 should be able to connect to service svc-server, but was not able to connect.
Ginkgo exit error 1: exit with code 1
failed: (1m57s) 2023-10-19T15:32:47 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 1/42/67 "[sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (2m4s) 2023-10-19T15:32:48 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 1/43/67 "[sig-network] Ingress API should support creating Ingress API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.3s) 2023-10-19T15:32:50 "[sig-network] Ingress API should support creating Ingress API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 1/44/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (3.4s) 2023-10-19T15:32:50 "[sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 1/45/67 "[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (19.6s) 2023-10-19T15:32:56 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 1/46/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (7.3s) 2023-10-19T15:32:58 "[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 1/47/67 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (15.9s) 2023-10-19T15:33:03 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 1/48/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (25.6s) 2023-10-19T15:33:23 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 1/49/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m6s) 2023-10-19T15:33:47 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 1/50/67 "[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1m10s) 2023-10-19T15:33:49 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 1/51/67 "[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (3.2s) 2023-10-19T15:33:50 "[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 1/52/67 "[sig-network] IngressClass API should support creating IngressClass API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.4s) 2023-10-19T15:33:51 "[sig-network] IngressClass API should support creating IngressClass API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 1/53/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m22s) 2023-10-19T15:33:59 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 1/54/67 "[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] [Serial:Self] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (11.1s) 2023-10-19T15:34:00 "[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 1/55/67 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1m18s) 2023-10-19T15:34:04 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 1/56/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
Oct 19 15:32:08.396: INFO: Enabling in-tree volume drivers
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/test.go:58
[BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly]
set up framework | framework.go:178
STEP: Creating a kubernetes client 10/19/23 15:32:09.165
STEP: Building a namespace api object, basename network-policy 10/19/23 15:32:09.166
Oct 19 15:32:09.212: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace 10/19/23 15:32:09.425
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/19/23 15:32:09.43
[BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly]
k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly]
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:72
[BeforeEach] NetworkPolicy between server and client
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:78
STEP: Creating a simple server that serves on port 80 and 81. 10/19/23 15:32:09.435
STEP: Creating a server pod server in namespace e2e-network-policy-397 10/19/23 15:32:09.435
W1019 15:32:09.456312 1888 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "server-container-80", "server-container-81" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "server-container-80", "server-container-81" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "server-container-80", "server-container-81" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "server-container-80", "server-container-81" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Oct 19 15:32:09.456: INFO: Created pod server-d9pg9
STEP: Creating a service svc-server for pod server in namespace e2e-network-policy-397 10/19/23 15:32:09.456
Oct 19 15:32:09.488: INFO: Created service svc-server
STEP: Waiting for pod ready 10/19/23 15:32:09.488
Oct 19 15:32:09.488: INFO: Waiting up to 5m0s for pod "server-d9pg9" in namespace "e2e-network-policy-397" to be "running and ready"
Oct 19 15:32:09.511: INFO: Pod "server-d9pg9": Phase="Pending", Reason="", readiness=false. Elapsed: 23.048269ms
Oct 19 15:32:09.511: INFO: The phase of Pod server-d9pg9 is Pending, waiting for it to be Running (with Ready = true)
Oct 19 15:32:11.521: INFO: Pod "server-d9pg9": Phase="Running", Reason="", readiness=false. Elapsed: 2.033017588s
Oct 19 15:32:11.521: INFO: The phase of Pod server-d9pg9 is Running (Ready = false)
Oct 19 15:32:13.520: INFO: Pod "server-d9pg9": Phase="Running", Reason="", readiness=true. Elapsed: 4.031897551s
Oct 19 15:32:13.520: INFO: The phase of Pod server-d9pg9 is Running (Ready = true)
Oct 19 15:32:13.520: INFO: Pod "server-d9pg9" satisfied condition "running and ready"
STEP: Testing pods can connect to both ports when no policy is present. 10/19/23 15:32:13.52
STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. 10/19/23 15:32:13.52
W1019 15:32:13.535224 1888 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Oct 19 15:32:13.535: INFO: Waiting for client-can-connect-80-z75l9 to complete.
Oct 19 15:32:13.535: INFO: Waiting up to 3m0s for pod "client-can-connect-80-z75l9" in namespace "e2e-network-policy-397" to be "completed"
Oct 19 15:32:13.540: INFO: Pod "client-can-connect-80-z75l9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.341753ms
Oct 19 15:32:15.546: INFO: Pod "client-can-connect-80-z75l9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01138622s
Oct 19 15:32:17.546: INFO: Pod "client-can-connect-80-z75l9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010773307s
Oct 19 15:32:17.546: INFO: Pod "client-can-connect-80-z75l9" satisfied condition "completed"
Oct 19 15:32:17.546: INFO: Waiting for client-can-connect-80-z75l9 to complete.
Oct 19 15:32:17.546: INFO: Waiting up to 5m0s for pod "client-can-connect-80-z75l9" in namespace "e2e-network-policy-397" to be "Succeeded or Failed"
Oct 19 15:32:17.551: INFO: Pod "client-can-connect-80-z75l9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.904419ms
STEP: Saw pod success 10/19/23 15:32:17.551
Oct 19 15:32:17.551: INFO: Pod "client-can-connect-80-z75l9" satisfied condition "Succeeded or Failed"
STEP: Cleaning up the pod client-can-connect-80-z75l9 10/19/23 15:32:17.551
STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. 10/19/23 15:32:17.567
W1019 15:32:17.578278 1888 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Oct 19 15:32:17.578: INFO: Waiting for client-can-connect-81-p7k82 to complete.
Oct 19 15:32:17.578: INFO: Waiting up to 3m0s for pod "client-can-connect-81-p7k82" in namespace "e2e-network-policy-397" to be "completed"
Oct 19 15:32:17.585: INFO: Pod "client-can-connect-81-p7k82": Phase="Pending", Reason="", readiness=false. Elapsed: 7.425394ms
Oct 19 15:32:19.590: INFO: Pod "client-can-connect-81-p7k82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011986985s
Oct 19 15:32:21.591: INFO: Pod "client-can-connect-81-p7k82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01289904s
Oct 19 15:32:21.591: INFO: Pod "client-can-connect-81-p7k82" satisfied condition "completed"
Oct 19 15:32:21.591: INFO: Waiting for client-can-connect-81-p7k82 to complete.
Oct 19 15:32:21.591: INFO: Waiting up to 5m0s for pod "client-can-connect-81-p7k82" in namespace "e2e-network-policy-397" to be "Succeeded or Failed"
Oct 19 15:32:21.595: INFO: Pod "client-can-connect-81-p7k82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.654663ms
STEP: Saw pod success 10/19/23 15:32:21.595
Oct 19 15:32:21.595: INFO: Pod "client-can-connect-81-p7k82" satisfied condition "Succeeded or Failed"
STEP: Cleaning up the pod client-can-connect-81-p7k82 10/19/23 15:32:21.595
[It] should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1343
STEP: Creating a server pod pod-b in namespace e2e-network-policy-397 10/19/23 15:32:21.632
W1019 15:32:21.644448 1888 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "pod-b-container-80" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pod-b-container-80" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pod-b-container-80" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pod-b-container-80" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Oct 19 15:32:21.644: INFO: Created pod pod-b-5xrgl
STEP: Creating a service svc-pod-b for pod pod-b in namespace e2e-network-policy-397 10/19/23 15:32:21.644
Oct 19 15:32:21.668: INFO: Created service svc-pod-b
STEP: Waiting for pod-b to be ready 10/19/23 15:32:21.668
Oct 19 15:32:21.668: INFO: Waiting up to 5m0s for pod "pod-b-5xrgl" in namespace "e2e-network-policy-397" to be "running and ready"
Oct 19 15:32:21.674: INFO: Pod "pod-b-5xrgl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037625ms
Oct 19 15:32:21.674: INFO: The phase of Pod pod-b-5xrgl is Pending, waiting for it to be Running (with Ready = true)
Oct 19 15:32:23.680: INFO: Pod "pod-b-5xrgl": Phase="Running", Reason="", readiness=true. Elapsed: 2.012067696s
Oct 19 15:32:23.680: INFO: The phase of Pod pod-b-5xrgl is Running (Ready = true)
Oct 19 15:32:23.680: INFO: Pod "pod-b-5xrgl" satisfied condition "running and ready"
Oct 19 15:32:23.680: INFO: Waiting up to 5m0s for pod "pod-b-5xrgl" in namespace "e2e-network-policy-397" to be "running"
Oct 19 15:32:23.684: INFO: Pod "pod-b-5xrgl": Phase="Running", Reason="", readiness=true. Elapsed: 3.706866ms
Oct 19 15:32:23.684: INFO: Pod "pod-b-5xrgl" satisfied condition "running"
STEP: Creating client-a which should be able to contact the server-b. 10/19/23 15:32:23.684
STEP: Creating client pod client-a that should successfully connect to svc-pod-b. 10/19/23 15:32:23.684
W1019 15:32:23.692498 1888 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Oct 19 15:32:23.692: INFO: Waiting for client-a-jppc4 to complete.
Oct 19 15:32:23.692: INFO: Waiting up to 3m0s for pod "client-a-jppc4" in namespace "e2e-network-policy-397" to be "completed"
Oct 19 15:32:23.697: INFO: Pod "client-a-jppc4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.456681ms
Oct 19 15:32:25.701: INFO: Pod "client-a-jppc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008918889s
Oct 19 15:32:27.704: INFO: Pod "client-a-jppc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012355246s
Oct 19 15:32:27.704: INFO: Pod "client-a-jppc4" satisfied condition "completed"
Oct 19 15:32:27.704: INFO: Waiting for client-a-jppc4 to complete.
Oct 19 15:32:27.704: INFO: Waiting up to 5m0s for pod "client-a-jppc4" in namespace "e2e-network-policy-397" to be "Succeeded or Failed"
Oct 19 15:32:27.708: INFO: Pod "client-a-jppc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.842171ms
STEP: Saw pod success 10/19/23 15:32:27.708
Oct 19 15:32:27.708: INFO: Pod "client-a-jppc4" satisfied condition "Succeeded or Failed"
STEP: Cleaning up the pod client-a-jppc4 10/19/23 15:32:27.708
STEP: Creating client-a which should not be able to contact the server-b. 10/19/23 15:32:27.733
STEP: Creating client pod client-a that should not be able to connect to svc-pod-b. 10/19/23 15:32:27.733
W1019 15:32:27.743101 1888 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Oct 19 15:32:27.743: INFO: Waiting for client-a-47d7l to complete.
Oct 19 15:32:27.743: INFO: Waiting up to 5m0s for pod "client-a-47d7l" in namespace "e2e-network-policy-397" to be "Succeeded or Failed"
Oct 19 15:32:27.749: INFO: Pod "client-a-47d7l": Phase="Pending", Reason="", readiness=false. Elapsed: 5.847505ms
Oct 19 15:32:29.754: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 2.011609004s
Oct 19 15:32:31.756: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 4.012831272s
Oct 19 15:32:33.755: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 6.012008932s
Oct 19 15:32:35.758: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 8.015246773s
Oct 19 15:32:37.754: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 10.011630571s
Oct 19 15:32:39.753: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 12.010617213s
Oct 19 15:32:41.756: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 14.013444501s
Oct 19 15:32:43.755: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 16.012393233s
Oct 19 15:32:45.755: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 18.012355127s
Oct 19 15:32:47.764: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 20.021065613s
Oct 19 15:32:49.756: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 22.013044961s
Oct 19 15:32:51.754: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 24.011395491s
Oct 19 15:32:53.759: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 26.016144355s
Oct 19 15:32:55.754: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 28.011627744s
Oct 19 15:32:57.759: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 30.016098146s
Oct 19 15:32:59.755: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 32.012030261s
Oct 19 15:33:01.753: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 34.010552322s
Oct 19 15:33:03.755: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 36.012392851s
Oct 19 15:33:05.753: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 38.010726196s
Oct 19 15:33:07.753: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 40.010594656s
Oct 19 15:33:09.754: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 42.011155175s
Oct 19 15:33:11.755: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 44.012157666s
Oct 19 15:33:13.754: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=true. Elapsed: 46.011747925s
Oct 19 15:33:15.755: INFO: Pod "client-a-47d7l": Phase="Running", Reason="", readiness=false. Elapsed: 48.012428737s
Oct 19 15:33:17.756: INFO: Pod "client-a-47d7l": Phase="Failed", Reason="", readiness=false. Elapsed: 50.013732575s
STEP: Cleaning up the pod client-a-47d7l 10/19/23 15:33:17.757
STEP: Creating client-a which should be able to contact the server. 10/19/23 15:33:17.775
STEP: Creating client pod client-a that should successfully connect to svc-server. 10/19/23 15:33:17.775
W1019 15:33:17.795719 1888 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Oct 19 15:33:17.795: INFO: Waiting for client-a-8f5jg to complete.
Oct 19 15:33:17.795: INFO: Waiting up to 3m0s for pod "client-a-8f5jg" in namespace "e2e-network-policy-397" to be "completed"
Oct 19 15:33:17.800: INFO: Pod "client-a-8f5jg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.576871ms
Oct 19 15:33:19.805: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 2.010122037s
Oct 19 15:33:21.806: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 4.010502601s
Oct 19 15:33:23.804: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 6.008958398s
Oct 19 15:33:25.805: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 8.01012377s
Oct 19 15:33:27.805: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 10.010116817s
Oct 19 15:33:29.805: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 12.009767022s
Oct 19 15:33:31.806: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 14.010776478s
Oct 19 15:33:33.805: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 16.009860414s
Oct 19 15:33:35.806: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 18.010415161s
Oct 19 15:33:37.805: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 20.009393608s
Oct 19 15:33:39.807: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 22.011942717s
Oct 19 15:33:41.807: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 24.011317192s
Oct 19 15:33:43.805: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 26.009740815s
Oct 19 15:33:45.806: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 28.010422424s
Oct 19 15:33:47.806: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 30.010191229s
Oct 19 15:33:49.806: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 32.010698549s
Oct 19 15:33:51.805: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 34.009845908s
Oct 19 15:33:53.806: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 36.01055387s
Oct 19 15:33:55.807: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 38.011775724s
Oct 19 15:33:57.807: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 40.011524836s
Oct 19 15:33:59.806: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 42.010484337s
Oct 19 15:34:01.810: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 44.014221399s
Oct 19 15:34:03.806: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=true. Elapsed: 46.010322008s
Oct 19 15:34:05.808: INFO: Pod "client-a-8f5jg": Phase="Running", Reason="", readiness=false. Elapsed: 48.012172454s
Oct 19 15:34:07.807: INFO: Pod "client-a-8f5jg": Phase="Failed", Reason="", readiness=false. Elapsed: 50.011390669s
Oct 19 15:34:07.807: INFO: Pod "client-a-8f5jg" satisfied condition "completed"
Oct 19 15:34:07.807: INFO: Waiting for client-a-8f5jg to complete.
Oct 19 15:34:07.807: INFO: Waiting up to 5m0s for pod "client-a-8f5jg" in namespace "e2e-network-policy-397" to be "Succeeded or Failed"
Oct 19 15:34:07.812: INFO: Pod "client-a-8f5jg": Phase="Failed", Reason="", readiness=false. Elapsed: 5.013884ms
Oct 19 15:34:07.816: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-397 describe po client-a-8f5jg'
Oct 19 15:34:07.968: INFO: stderr: ""
Oct 19 15:34:07.968: INFO: stdout: "Name: client-a-8f5jg\nNamespace: e2e-network-policy-397\nPriority: 0\nService Account: default\nNode: worker01/192.168.200.31\nStart Time: Thu, 19 Oct 2023 15:33:17 +0000\nLabels: pod-name=client-a\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"portmap\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::356\",\n \"10.128.7.119\"\n ],\n \"mac\": \"2e:e6:ff:bc:21:3c\",\n \"default\": true,\n \"dns\": {},\n \"gateway\": [\n \"fd00::3f5\",\n \"10.128.7.75\"\n ]\n }]\nStatus: Failed\nIP: 10.128.7.119\nIPs:\n IP: 10.128.7.119\n IP: fd00::356\nContainers:\n client:\n Container ID: cri-o://1e36c832ecfb6329412a655eaf543561f249379227bdc731e7cfa19b58fab5c7\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: \n Host Port: \n Command:\n /bin/sh\n Args:\n -c\n for i in $(seq 1 5); do /agnhost connect 172.30.126.115:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1\n State: Terminated\n Reason: Error\n Exit Code: 1\n Started: Thu, 19 Oct 2023 15:33:18 +0000\n Finished: Thu, 19 Oct 2023 15:34:03 +0000\n Ready: False\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b6jxg (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n kube-api-access-b6jxg:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 50s default-scheduler Successfully assigned e2e-network-policy-397/client-a-8f5jg to worker01\n Normal AddedInterface 49s multus Add eth0 [fd00::356/128 10.128.7.119/32] from portmap\n Normal Pulled 49s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 49s kubelet Created container client\n Normal Started 49s kubelet Started container client\n"
Oct 19 15:34:07.968: INFO:
Output of kubectl describe client-a-8f5jg:
Name: client-a-8f5jg
Namespace: e2e-network-policy-397
Priority: 0
Service Account: default
Node: worker01/192.168.200.31
Start Time: Thu, 19 Oct 2023 15:33:17 +0000
Labels: pod-name=client-a
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "portmap",
"interface": "eth0",
"ips": [
"fd00::356",
"10.128.7.119"
],
"mac": "2e:e6:ff:bc:21:3c",
"default": true,
"dns": {},
"gateway": [
"fd00::3f5",
"10.128.7.75"
]
}]
Status: Failed
IP: 10.128.7.119
IPs:
IP: 10.128.7.119
IP: fd00::356
Containers:
client:
Container ID: cri-o://1e36c832ecfb6329412a655eaf543561f249379227bdc731e7cfa19b58fab5c7
Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-
Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e
Port:
Host Port:
Command:
/bin/sh
Args:
-c
for i in $(seq 1 5); do /agnhost connect 172.30.126.115:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1
State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 19 Oct 2023 15:33:18 +0000
Finished: Thu, 19 Oct 2023 15:34:03 +0000
Ready: False
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b6jxg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-b6jxg:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
ConfigMapName: openshift-service-ca.crt
ConfigMapOptional: <nil>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 50s default-scheduler Successfully assigned e2e-network-policy-397/client-a-8f5jg to worker01
Normal AddedInterface 49s multus Add eth0 [fd00::356/128 10.128.7.119/32] from portmap
Normal Pulled 49s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Normal Created 49s kubelet Created container client
Normal Started 49s kubelet Started container client
Oct 19 15:34:07.968: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-397 logs client-a-8f5jg --tail=100'
Oct 19 15:34:08.122: INFO: stderr: ""
Oct 19 15:34:08.122: INFO: stdout: "TIMEOUT\nTIMEOUT\nTIMEOUT\nTIMEOUT\nTIMEOUT\n"
Oct 19 15:34:08.122: INFO:
Last 100 log lines of client-a-8f5jg:
TIMEOUT
TIMEOUT
TIMEOUT
TIMEOUT
TIMEOUT
Oct 19 15:34:08.122: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-397 describe po pod-b-5xrgl'
Oct 19 15:34:08.271: INFO: stderr: ""
Oct 19 15:34:08.271: INFO:
Output of kubectl describe pod-b-5xrgl:
Name: pod-b-5xrgl
Namespace: e2e-network-policy-397
Priority: 0
Service Account: default
Node: worker02/192.168.200.32
Start Time: Thu, 19 Oct 2023 15:32:21 +0000
Labels: pod-name=pod-b
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "portmap",
"interface": "eth0",
"ips": [
"fd00::462",
"10.128.8.193"
],
"mac": "02:7f:38:13:18:57",
"default": true,
"dns": {},
"gateway": [
"fd00::415",
"10.128.9.247"
]
}]
Status: Running
IP: 10.128.8.193
IPs:
IP: 10.128.8.193
IP: fd00::462
Containers:
pod-b-container-80:
Container ID: cri-o://249a8f9a8d339927558693290c692eb48685d0f1e8092df5d2ffd1888609bc50
Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-
Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e
Port: 80/TCP
Host Port: 0/TCP
Args:
porter
State: Running
Started: Thu, 19 Oct 2023 15:32:22 +0000
Ready: True
Restart Count: 0
Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SERVE_PORT_80: foo
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xrk6z (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-xrk6z:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
ConfigMapName: openshift-service-ca.crt
ConfigMapOptional: <nil>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 106s default-scheduler Successfully assigned e2e-network-policy-397/pod-b-5xrgl to worker02
Normal AddedInterface 106s multus Add eth0 [fd00::462/128 10.128.8.193/32] from portmap
Normal Pulled 106s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Normal Created 106s kubelet Created container pod-b-container-80
Normal Started 106s kubelet Started container pod-b-container-80
Oct 19 15:34:08.271: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-397 logs pod-b-5xrgl --tail=100'
Oct 19 15:34:08.422: INFO: stderr: ""
Oct 19 15:34:08.422: INFO: stdout: ""
Oct 19 15:34:08.422: INFO:
Last 100 log lines of pod-b-5xrgl:
Oct 19 15:34:08.423: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-397 describe po server-d9pg9'
Oct 19 15:34:08.576: INFO: stderr: ""
Oct 19 15:34:08.576: INFO:
Output of kubectl describe server-d9pg9:
Name: server-d9pg9
Namespace: e2e-network-policy-397
Priority: 0
Service Account: default
Node: worker03/192.168.200.33
Start Time: Thu, 19 Oct 2023 15:32:09 +0000
Labels: pod-name=server
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "portmap",
"interface": "eth0",
"ips": [
"fd00::57c",
"10.128.10.137"
],
"mac": "26:90:d9:88:c4:c3",
"default": true,
"dns": {},
"gateway": [
"fd00::562",
"10.128.10.226"
]
}]
Status: Running
IP: 10.128.10.137
IPs:
IP: 10.128.10.137
IP: fd00::57c
Containers:
server-container-80:
Container ID: cri-o://3e7b1ab9a77190789fc4607e57a3407b1c663ef4f6e903451ca92b9c2d27d1b4
Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-
Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e
Port: 80/TCP
Host Port: 0/TCP
Args:
porter
State: Running
Started: Thu, 19 Oct 2023 15:32:10 +0000
Ready: True
Restart Count: 0
Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SERVE_PORT_80: foo
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hv9j4 (ro)
server-container-81:
Container ID: cri-o://5ee48434882b979ca339d858b80239f3a027c13d13622203bcf9cf225f87b977
Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-
Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e
Port: 81/TCP
Host Port: 0/TCP
Args:
porter
State: Running
Started: Thu, 19 Oct 2023 15:32:10 +0000
Ready: True
Restart Count: 0
Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SERVE_PORT_81: foo
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hv9j4 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-hv9j4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
ConfigMapName: openshift-service-ca.crt
ConfigMapOptional: <nil>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 119s default-scheduler Successfully assigned e2e-network-policy-397/server-d9pg9 to worker03
Normal AddedInterface 118s multus Add eth0 [fd00::57c/128 10.128.10.137/32] from portmap
Normal Pulled 118s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Normal Created 118s kubelet Created container server-container-80
Normal Started 118s kubelet Started container server-container-80
Normal Pulled 118s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Normal Created 118s kubelet Created container server-container-81
Normal Started 118s kubelet Started container server-container-81
Oct 19 15:34:08.576: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-397 logs server-d9pg9 --tail=100'
Oct 19 15:34:08.719: INFO: stderr: "Defaulted container \"server-container-80\" out of: server-container-80, server-container-81\n"
Oct 19 15:34:08.719: INFO: stdout: ""
Oct 19 15:34:08.719: INFO:
Last 100 log lines of server-d9pg9:
Oct 19 15:34:08.739: FAIL: Pod client-a-8f5jg should be able to connect to service svc-server, but was not able to connect.
Pod logs:
TIMEOUT
TIMEOUT
TIMEOUT
TIMEOUT
TIMEOUT
Current NetworkPolicies:
[{{ } {allow-client-a-via-cidr-egress-rule e2e-network-policy-397 610feef7-759b-4140-a57c-ef3ccb6b2ccd 145688 1 2023-10-19 15:32:27 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-10-19 15:32:27 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.10.137/32,Except:[],}}]}] [Egress]} {[]}}]
Pods:
[Pod: client-a-8f5jg, Status: &PodStatus{Phase:Failed,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:33:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:34:04 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:34:04 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:33:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.31,PodIP:10.128.7.119,StartTime:2023-10-19 15:33:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-10-19 15:33:18 +0000 UTC,FinishedAt:2023-10-19 15:34:03 +0000 UTC,ContainerID:cri-o://1e36c832ecfb6329412a655eaf543561f249379227bdc731e7cfa19b58fab5c7,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://1e36c832ecfb6329412a655eaf543561f249379227bdc731e7cfa19b58fab5c7,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.7.119,},PodIP{IP:fd00::356,},},EphemeralContainerStatuses:[]ContainerStatus{},}
Pod: pod-b-5xrgl, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:32:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:32:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:32:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:32:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.32,PodIP:10.128.8.193,StartTime:2023-10-19 15:32:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:pod-b-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-19 15:32:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://249a8f9a8d339927558693290c692eb48685d0f1e8092df5d2ffd1888609bc50,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.8.193,},PodIP{IP:fd00::462,},},EphemeralContainerStatuses:[]ContainerStatus{},}
Pod: server-d9pg9, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:32:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:32:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:32:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:32:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.10.137,StartTime:2023-10-19 15:32:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-19 15:32:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://3e7b1ab9a77190789fc4607e57a3407b1c663ef4f6e903451ca92b9c2d27d1b4,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-19 15:32:10 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://5ee48434882b979ca339d858b80239f3a027c13d13622203bcf9cf225f87b977,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.10.137,},PodIP{IP:fd00::57c,},},EphemeralContainerStatuses:[]ContainerStatus{},}
]
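For readability, the Go-struct dump of the active policy in the failure message above corresponds to roughly this manifest (reconstructed by hand from the dump; indentation and field order are mine):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-a-via-cidr-egress-rule
  namespace: e2e-network-policy-397
spec:
  podSelector:
    matchLabels:
      pod-name: client-a
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.128.10.137/32   # server-d9pg9's IPv4 pod IP; Except is empty
```

The client's probe targets the service VIP 172.30.126.115:80 (see the pod Args above), which the test expects to be DNATed to the allowed pod IP 10.128.10.137; the five TIMEOUTs mean that egress never reached the server.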
Full Stack Trace
k8s.io/kubernetes/test/e2e/network/netpol.checkConnectivity(0xc0016d03c0, 0xc00219c840, 0xc006279200, 0xc005a57400)
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941 +0x355
k8s.io/kubernetes/test/e2e/network/netpol.testCanConnectProtocol(0xc0016d03c0, 0xc00219c840, {0x8a3ce56, 0x8}, 0xc005a57400, 0xc001c83e50?, {0x8a2e80e, 0x3})
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1913 +0x1be
k8s.io/kubernetes/test/e2e/network/netpol.testCanConnect(...)
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1897
k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.27.4()
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1410 +0x47
github.com/onsi/ginkgo/v2.By({0x8c0a295, 0x3d}, {0xc006207e50, 0x1, 0x0?})
github.com/onsi/ginkgo/v2@v2.4.0/core_dsl.go:535 +0x525
k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.27()
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1409 +0x8fc
github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2e8fa1e, 0xc00032c600})
github.com/onsi/ginkgo/v2@v2.4.0/internal/node.go:449 +0x1b
github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:757 +0x98
created by github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:745 +0xe3d
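The client pod's connectivity probe (the /bin/sh loop in the Args above) amounts to the following logic; this is a Python sketch for illustration only, and the helper name `can_connect` is mine, not part of the e2e suite:

```python
import socket
import time

def can_connect(host, port, attempts=5, timeout=8.0):
    """Mirror the e2e client's shell loop:
    for i in $(seq 1 5); do /agnhost connect HOST:PORT --timeout 8s && exit 0 || sleep 1; done; exit 1
    """
    for attempt in range(attempts):
        try:
            # agnhost "connect" essentially opens a TCP connection and exits 0 on success
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except socket.timeout:
            print("TIMEOUT")  # matches the TIMEOUT lines in the pod logs above
        except OSError:
            print("REFUSED")  # label is mine; agnhost reports refusals differently
        if attempt < attempts - 1:
            time.sleep(1)     # back off between retries, like the shell loop
    return False
```

In the run above all five attempts hit the 8s dial timeout, which is why the pod log is exactly five TIMEOUT lines and the container exited 1.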
STEP: Cleaning up the pod client-a-8f5jg 10/19/23 15:34:08.739
STEP: Cleaning up the policy. 10/19/23 15:34:08.754
STEP: Cleaning up the server. 10/19/23 15:34:08.762
STEP: Cleaning up the server's service. 10/19/23 15:34:08.775
[AfterEach] NetworkPolicy between server and client
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:96
STEP: Cleaning up the server. 10/19/23 15:34:08.819
STEP: Cleaning up the server's service. 10/19/23 15:34:08.832
[DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly]
k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly]
dump namespaces | framework.go:196
STEP: dump namespace information after failure 10/19/23 15:34:08.875
STEP: Collecting events from namespace "e2e-network-policy-397". 10/19/23 15:34:08.875
STEP: Found 41 events. 10/19/23 15:34:08.885
Oct 19 15:34:08.885: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-47d7l: { } Scheduled: Successfully assigned e2e-network-policy-397/client-a-47d7l to worker02
Oct 19 15:34:08.885: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-8f5jg: { } Scheduled: Successfully assigned e2e-network-policy-397/client-a-8f5jg to worker01
Oct 19 15:34:08.885: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-jppc4: { } Scheduled: Successfully assigned e2e-network-policy-397/client-a-jppc4 to worker02
Oct 19 15:34:08.885: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-z75l9: { } Scheduled: Successfully assigned e2e-network-policy-397/client-can-connect-80-z75l9 to worker03
Oct 19 15:34:08.885: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-p7k82: { } Scheduled: Successfully assigned e2e-network-policy-397/client-can-connect-81-p7k82 to worker03
Oct 19 15:34:08.885: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-b-5xrgl: { } Scheduled: Successfully assigned e2e-network-policy-397/pod-b-5xrgl to worker02
Oct 19 15:34:08.885: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-d9pg9: { } Scheduled: Successfully assigned e2e-network-policy-397/server-d9pg9 to worker03
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:10 +0000 UTC - event for server-d9pg9: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:10 +0000 UTC - event for server-d9pg9: {multus } AddedInterface: Add eth0 [fd00::57c/128 10.128.10.137/32] from portmap
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:10 +0000 UTC - event for server-d9pg9: {kubelet worker03} Created: Created container server-container-81
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:10 +0000 UTC - event for server-d9pg9: {kubelet worker03} Started: Started container server-container-81
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:10 +0000 UTC - event for server-d9pg9: {kubelet worker03} Started: Started container server-container-80
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:10 +0000 UTC - event for server-d9pg9: {kubelet worker03} Created: Created container server-container-80
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:10 +0000 UTC - event for server-d9pg9: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:14 +0000 UTC - event for client-can-connect-80-z75l9: {multus } AddedInterface: Add eth0 [fd00::56d/128 10.128.11.92/32] from portmap
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:14 +0000 UTC - event for client-can-connect-80-z75l9: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:14 +0000 UTC - event for client-can-connect-80-z75l9: {kubelet worker03} Created: Created container client
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:14 +0000 UTC - event for client-can-connect-80-z75l9: {kubelet worker03} Started: Started container client
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:18 +0000 UTC - event for client-can-connect-81-p7k82: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:18 +0000 UTC - event for client-can-connect-81-p7k82: {multus } AddedInterface: Add eth0 [fd00::5e4/128 10.128.10.186/32] from portmap
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:18 +0000 UTC - event for client-can-connect-81-p7k82: {kubelet worker03} Created: Created container client
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:18 +0000 UTC - event for client-can-connect-81-p7k82: {kubelet worker03} Started: Started container client
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:22 +0000 UTC - event for pod-b-5xrgl: {multus } AddedInterface: Add eth0 [fd00::462/128 10.128.8.193/32] from portmap
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:22 +0000 UTC - event for pod-b-5xrgl: {kubelet worker02} Started: Started container pod-b-container-80
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:22 +0000 UTC - event for pod-b-5xrgl: {kubelet worker02} Created: Created container pod-b-container-80
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:22 +0000 UTC - event for pod-b-5xrgl: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:24 +0000 UTC - event for client-a-jppc4: {kubelet worker02} Started: Started container client
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:24 +0000 UTC - event for client-a-jppc4: {kubelet worker02} Created: Created container client
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:24 +0000 UTC - event for client-a-jppc4: {multus } AddedInterface: Add eth0 [fd00::474/128 10.128.9.66/32] from portmap
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:24 +0000 UTC - event for client-a-jppc4: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:28 +0000 UTC - event for client-a-47d7l: {kubelet worker02} Created: Created container client
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:28 +0000 UTC - event for client-a-47d7l: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:28 +0000 UTC - event for client-a-47d7l: {kubelet worker02} Started: Started container client
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:32:28 +0000 UTC - event for client-a-47d7l: {multus } AddedInterface: Add eth0 [fd00::496/128 10.128.9.6/32] from portmap
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:33:18 +0000 UTC - event for client-a-8f5jg: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:33:18 +0000 UTC - event for client-a-8f5jg: {kubelet worker01} Started: Started container client
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:33:18 +0000 UTC - event for client-a-8f5jg: {kubelet worker01} Created: Created container client
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:33:18 +0000 UTC - event for client-a-8f5jg: {multus } AddedInterface: Add eth0 [fd00::356/128 10.128.7.119/32] from portmap
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:34:08 +0000 UTC - event for pod-b-5xrgl: {kubelet worker02} Killing: Stopping container pod-b-container-80
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:34:08 +0000 UTC - event for server-d9pg9: {kubelet worker03} Killing: Stopping container server-container-80
Oct 19 15:34:08.885: INFO: At 2023-10-19 15:34:08 +0000 UTC - event for server-d9pg9: {kubelet worker03} Killing: Stopping container server-container-81
Oct 19 15:34:08.890: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 19 15:34:08.890: INFO: pod-b-5xrgl worker02 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-10-19 15:32:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-10-19 15:32:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-10-19 15:32:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-19 15:32:21 +0000 UTC }]
Oct 19 15:34:08.890: INFO: server-d9pg9 worker03 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-10-19 15:32:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-10-19 15:32:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-10-19 15:32:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-10-19 15:32:09 +0000 UTC }]
Oct 19 15:34:08.890: INFO:
Oct 19 15:34:08.897: INFO: skipping dumping cluster info - cluster too large
[DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly]
tear down framework | framework.go:193
STEP: Destroying namespace "e2e-network-policy-397" for this suite. 10/19/23 15:34:08.898
fail [k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941]: Oct 19 15:34:08.739: Pod client-a-8f5jg should be able to connect to service svc-server, but was not able to connect.
Pod logs:
TIMEOUT
TIMEOUT
TIMEOUT
TIMEOUT
TIMEOUT
Current NetworkPolicies:
[{{ } {allow-client-a-via-cidr-egress-rule e2e-network-policy-397 610feef7-759b-4140-a57c-ef3ccb6b2ccd 145688 1 2023-10-19 15:32:27 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-10-19 15:32:27 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.10.137/32,Except:[],}}]}] [Egress]} {[]}}]
Pods:
[Pod: client-a-8f5jg, Status: &PodStatus{Phase:Failed,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:33:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:34:04 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:34:04 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:33:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.31,PodIP:10.128.7.119,StartTime:2023-10-19 15:33:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-10-19 15:33:18 +0000 UTC,FinishedAt:2023-10-19 15:34:03 +0000 UTC,ContainerID:cri-o://1e36c832ecfb6329412a655eaf543561f249379227bdc731e7cfa19b58fab5c7,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://1e36c832ecfb6329412a655eaf543561f249379227bdc731e7cfa19b58fab5c7,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.7.119,},PodIP{IP:fd00::356,},},EphemeralContainerStatuses:[]ContainerStatus{},}
Pod: pod-b-5xrgl, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:32:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:32:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:32:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:32:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.32,PodIP:10.128.8.193,StartTime:2023-10-19 15:32:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:pod-b-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-19 15:32:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://249a8f9a8d339927558693290c692eb48685d0f1e8092df5d2ffd1888609bc50,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.8.193,},PodIP{IP:fd00::462,},},EphemeralContainerStatuses:[]ContainerStatus{},}
Pod: server-d9pg9, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:32:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:32:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:32:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-10-19 15:32:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.10.137,StartTime:2023-10-19 15:32:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-19 15:32:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://3e7b1ab9a77190789fc4607e57a3407b1c663ef4f6e903451ca92b9c2d27d1b4,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-10-19 15:32:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://5ee48434882b979ca339d858b80239f3a027c13d13622203bcf9cf225f87b977,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.10.137,},PodIP{IP:fd00::57c,},},EphemeralContainerStatuses:[]ContainerStatus{},}
]
Ginkgo exit error 1: exit with code 1
failed: (2m1s) 2023-10-19T15:34:08 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 2/57/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m16s) 2023-10-19T15:34:13 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 2/58/67 "[sig-network] Services should be able to create a functioning NodePort service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (16.6s) 2023-10-19T15:34:15 "[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] [Serial:Self] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/59/67 "[sig-network] DNS should provide DNS for pods for Subdomain [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (18.6s) 2023-10-19T15:34:19 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/60/67 "[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.2s) 2023-10-19T15:34:20 "[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/61/67 "[sig-network] Services should find a service from listing all namespaces [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (8.3s) 2023-10-19T15:34:21 "[sig-network] Services should be able to create a functioning NodePort service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/62/67 "[sig-network] Services should serve a basic endpoint from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.2s) 2023-10-19T15:34:21 "[sig-network] Services should find a service from listing all namespaces [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/63/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (8.3s) 2023-10-19T15:34:24 "[sig-network] DNS should provide DNS for pods for Subdomain [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/64/67 "[sig-network] Services should provide secure master service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1m1s) 2023-10-19T15:34:25 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 2/65/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1.2s) 2023-10-19T15:34:25 "[sig-network] Services should provide secure master service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/66/67 "[sig-network] Services should delete a collection of services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.5s) 2023-10-19T15:34:26 "[sig-network] Services should delete a collection of services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (19.5s) 2023-10-19T15:34:28 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (14s) 2023-10-19T15:34:35 "[sig-network] Services should serve a basic endpoint from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (2m0s) 2023-10-19T15:34:50 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m10s) 2023-10-19T15:35:01 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (2m0s) 2023-10-19T15:35:03 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m4s) 2023-10-19T15:35:25 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m22s) 2023-10-19T15:35:26 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (3m44s) 2023-10-19T15:36:14 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (3m4s) 2023-10-19T15:37:29 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 2/67/67 "[sig-network] Service endpoints latency should not be very high [Conformance] [Serial] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]"
passed: (10.4s) 2023-10-19T15:37:39 "[sig-network] Service endpoints latency should not be very high [Conformance] [Serial] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]"
Failing tests:
[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]
error: 2 fail, 65 pass, 0 skip (7m29s)
```
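For readability, the egress policy that was active during the failing CIDR test (shown above only as a Go-struct dump) corresponds to roughly this manifest. It is reconstructed by hand from the log output, so defaults and field ordering are approximate:

```yaml
# Reconstructed from the "Current NetworkPolicies" dump in the log;
# not the literal manifest applied by the test.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-a-via-cidr-egress-rule
  namespace: e2e-network-policy-397
spec:
  podSelector:
    matchLabels:
      pod-name: client-a
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.128.10.137/32
```

Per the log, 10.128.10.137/32 is the pod IP of server-d9pg9, and the failure message says client-a-8f5jg could not connect to svc-server while only this egress rule was in place.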
Using Jarno's PR and the setup he deployed we could observe the two expected failures, all other tests passed:
results.txt
```
started: 0/1/67 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/2/67 "[sig-network] DNS should provide DNS for services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/3/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/4/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/5/67 "[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/6/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/7/67 "[sig-network] DNS should provide DNS for pods for Hostname [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/8/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/9/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/10/67 "[sig-network] DNS should support configurable pod DNS nameservers [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (21.5s) 2023-10-19T15:30:32 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/11/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (22.1s) 2023-10-19T15:30:33 "[sig-network] DNS should support configurable pod DNS nameservers [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/12/67 "[sig-network] Services should complete a service status lifecycle [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.4s) 2023-10-19T15:30:34 "[sig-network] Services should complete a service status lifecycle [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/13/67 "[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (32.8s) 2023-10-19T15:30:43 "[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/14/67 "[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.3s) 2023-10-19T15:30:45 "[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/15/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (34.2s) 2023-10-19T15:30:45 "[sig-network] DNS should provide DNS for pods for Hostname [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/16/67 "[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (39.7s) 2023-10-19T15:30:50 "[sig-network] DNS should provide DNS for services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/17/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (36.5s) 2023-10-19T15:31:21 "[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/18/67 "[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (50s) 2023-10-19T15:31:24 "[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/19/67 "[sig-network] DNS should provide DNS for ExternalName services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (8.3s) 2023-10-19T15:31:30 "[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/20/67 "[sig-network] NetworkPolicy API should support creating NetworkPolicy API operations [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1.3s) 2023-10-19T15:31:31 "[sig-network] NetworkPolicy API should support creating NetworkPolicy API operations [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/21/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m29s) 2023-10-19T15:31:39 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/22/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (18.5s) 2023-10-19T15:31:43 "[sig-network] DNS should provide DNS for ExternalName services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/23/67 "[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1m35s) 2023-10-19T15:31:45 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/24/67 "[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (8.4s) 2023-10-19T15:31:51 "[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/25/67 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (8.9s) 2023-10-19T15:31:54 "[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/26/67 "[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (8.5s) 2023-10-19T15:32:03 "[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/27/67 "[sig-network] DNS should provide DNS for the cluster [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (15.6s) 2023-10-19T15:32:07 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/28/67 "[sig-network] Services should test the lifecycle of an Endpoint [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.1s) 2023-10-19T15:32:08 "[sig-network] Services should test the lifecycle of an Endpoint [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/29/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (5.2s) 2023-10-19T15:32:08 "[sig-network] DNS should provide DNS for the cluster [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/30/67 "[sig-network] Services should serve multiport endpoints from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (11.5s) 2023-10-19T15:32:19 "[sig-network] Services should serve multiport endpoints from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/31/67 "[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (2m15s) 2023-10-19T15:32:25 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/32/67 "[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (11.1s) 2023-10-19T15:32:30 "[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/33/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (7.8s) 2023-10-19T15:32:33 "[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/34/67 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (2m25s) 2023-10-19T15:32:35 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/35/67 "[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (3.3s) 2023-10-19T15:32:37 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/36/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m6s) 2023-10-19T15:32:37 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/37/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (2m29s) 2023-10-19T15:32:40 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/38/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (6s) 2023-10-19T15:32:41 "[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/39/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (2m14s) 2023-10-19T15:32:46 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/40/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m8s) 2023-10-19T15:32:47 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/41/67 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
Oct 19 15:30:50.972: INFO: Enabling in-tree volume drivers
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/test.go:58
[BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly]
set up framework | framework.go:178
STEP: Creating a kubernetes client 10/19/23 15:30:51.782
STEP: Building a namespace api object, basename network-policy 10/19/23 15:30:51.783
Oct 19 15:30:51.870: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace 10/19/23 15:30:52.027
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 10/19/23 15:30:52.031
[BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly]
k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly]
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:72
[BeforeEach] NetworkPolicy between server and client
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:78
STEP: Creating a simple server that serves on port 80 and 81. 10/19/23 15:30:52.036
STEP: Creating a server pod server in namespace e2e-network-policy-9251 10/19/23 15:30:52.036
W1019 15:30:52.065206 1003 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "server-container-80", "server-container-81" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "server-container-80", "server-container-81" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "server-container-80", "server-container-81" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "server-container-80", "server-container-81" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Oct 19 15:30:52.065: INFO: Created pod server-jvr65
STEP: Creating a service svc-server for pod server in namespace e2e-network-policy-9251 10/19/23 15:30:52.065
Oct 19 15:30:52.095: INFO: Created service svc-server
STEP: Waiting for pod ready 10/19/23 15:30:52.095
Oct 19 15:30:52.095: INFO: Waiting up to 5m0s for pod "server-jvr65" in namespace "e2e-network-policy-9251" to be "running and ready"
Oct 19 15:30:52.106: INFO: Pod "server-jvr65": Phase="Pending", Reason="", readiness=false. Elapsed: 10.36297ms
Oct 19 15:30:52.106: INFO: The phase of Pod server-jvr65 is Pending, waiting for it to be Running (with Ready = true)
Oct 19 15:30:54.111: INFO: Pod "server-jvr65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015326203s
Oct 19 15:30:54.111: INFO: The phase of Pod server-jvr65 is Pending, waiting for it to be Running (with Ready = true)
Oct 19 15:30:56.111: INFO: Pod "server-jvr65": Phase="Running", Reason="", readiness=true. Elapsed: 4.015578759s
Oct 19 15:30:56.111: INFO: The phase of Pod server-jvr65 is Running (Ready = true)
Oct 19 15:30:56.111: INFO: Pod "server-jvr65" satisfied condition "running and ready"
STEP: Testing pods can connect to both ports when no policy is present. 10/19/23 15:30:56.111
STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. 10/19/23 15:30:56.111
W1019 15:30:56.237329 1003 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Oct 19 15:30:56.237: INFO: Waiting for client-can-connect-80-gxfqz to complete.
Oct 19 15:30:56.237: INFO: Waiting up to 3m0s for pod "client-can-connect-80-gxfqz" in namespace "e2e-network-policy-9251" to be "completed"
Oct 19 15:30:56.438: INFO: Pod "client-can-connect-80-gxfqz": Phase="Pending", Reason="", readiness=false. Elapsed: 201.411818ms
Oct 19 15:30:58.444: INFO: Pod "client-can-connect-80-gxfqz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207461315s
Oct 19 15:31:00.446: INFO: Pod "client-can-connect-80-gxfqz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209467723s
Oct 19 15:31:02.446: INFO: Pod "client-can-connect-80-gxfqz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.209074138s
Oct 19 15:31:02.446: INFO: Pod "client-can-connect-80-gxfqz" satisfied condition "completed"
Oct 19 15:31:02.446: INFO: Waiting for client-can-connect-80-gxfqz to complete.
Oct 19 15:31:02.446: INFO: Waiting up to 5m0s for pod "client-can-connect-80-gxfqz" in namespace "e2e-network-policy-9251" to be "Succeeded or Failed"
Oct 19 15:31:02.453: INFO: Pod "client-can-connect-80-gxfqz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.93149ms STEP: Saw pod success 10/19/23 15:31:02.453 Oct 19 15:31:02.453: INFO: Pod "client-can-connect-80-gxfqz" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-can-connect-80-gxfqz 10/19/23 15:31:02.453 STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. 10/19/23 15:31:02.469 W1019 15:31:02.477504 1003 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Oct 19 15:31:02.477: INFO: Waiting for client-can-connect-81-86hs8 to complete. Oct 19 15:31:02.477: INFO: Waiting up to 3m0s for pod "client-can-connect-81-86hs8" in namespace "e2e-network-policy-9251" to be "completed" Oct 19 15:31:02.482: INFO: Pod "client-can-connect-81-86hs8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.432086ms Oct 19 15:31:04.488: INFO: Pod "client-can-connect-81-86hs8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010707917s Oct 19 15:31:06.488: INFO: Pod "client-can-connect-81-86hs8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010430141s Oct 19 15:31:06.488: INFO: Pod "client-can-connect-81-86hs8" satisfied condition "completed" Oct 19 15:31:06.488: INFO: Waiting for client-can-connect-81-86hs8 to complete. Oct 19 15:31:06.488: INFO: Waiting up to 5m0s for pod "client-can-connect-81-86hs8" in namespace "e2e-network-policy-9251" to be "Succeeded or Failed" Oct 19 15:31:06.493: INFO: Pod "client-can-connect-81-86hs8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.96631ms STEP: Saw pod success 10/19/23 15:31:06.493 Oct 19 15:31:06.493: INFO: Pod "client-can-connect-81-86hs8" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-can-connect-81-86hs8 10/19/23 15:31:06.493 [It] should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1477 STEP: Creating client-a which should not be able to contact the server. 10/19/23 15:31:06.524 STEP: Creating client pod client-a that should not be able to connect to svc-server. 10/19/23 15:31:06.524 W1019 15:31:06.533687 1003 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Oct 19 15:31:06.533: INFO: Waiting for client-a-2krxp to complete. Oct 19 15:31:06.533: INFO: Waiting up to 5m0s for pod "client-a-2krxp" in namespace "e2e-network-policy-9251" to be "Succeeded or Failed" Oct 19 15:31:06.538: INFO: Pod "client-a-2krxp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.868111ms Oct 19 15:31:08.544: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 2.010977012s Oct 19 15:31:10.546: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 4.012221183s Oct 19 15:31:12.544: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.01032889s Oct 19 15:31:14.544: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 8.010548099s Oct 19 15:31:16.543: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 10.010059935s Oct 19 15:31:18.544: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 12.010629186s Oct 19 15:31:20.726: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 14.192780641s Oct 19 15:31:22.548: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 16.014559411s Oct 19 15:31:24.543: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 18.009944112s Oct 19 15:31:26.544: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 20.010641123s Oct 19 15:31:28.545: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 22.011447235s Oct 19 15:31:30.545: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 24.012089791s Oct 19 15:31:32.549: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 26.015487785s Oct 19 15:31:34.544: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 28.010508057s Oct 19 15:31:36.546: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 30.01228255s Oct 19 15:31:38.544: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 32.010414791s Oct 19 15:31:40.545: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 34.01182312s Oct 19 15:31:42.548: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 36.014894484s Oct 19 15:31:44.545: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 38.011381785s Oct 19 15:31:46.549: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. 
Elapsed: 40.015748477s Oct 19 15:31:48.545: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 42.01176868s Oct 19 15:31:50.544: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 44.011173169s Oct 19 15:31:52.548: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=true. Elapsed: 46.015026207s Oct 19 15:31:54.544: INFO: Pod "client-a-2krxp": Phase="Running", Reason="", readiness=false. Elapsed: 48.010917573s Oct 19 15:31:56.543: INFO: Pod "client-a-2krxp": Phase="Failed", Reason="", readiness=false. Elapsed: 50.009832587s STEP: Cleaning up the pod client-a-2krxp 10/19/23 15:31:56.543 STEP: Creating client-a which should now be able to contact the server. 10/19/23 15:31:56.577 STEP: Creating client pod client-a that should successfully connect to svc-server. 10/19/23 15:31:56.577 W1019 15:31:56.590709 1003 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Oct 19 15:31:56.590: INFO: Waiting for client-a-bn9h5 to complete. Oct 19 15:31:56.590: INFO: Waiting up to 3m0s for pod "client-a-bn9h5" in namespace "e2e-network-policy-9251" to be "completed" Oct 19 15:31:56.596: INFO: Pod "client-a-bn9h5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.401813ms Oct 19 15:31:58.601: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 2.010880863s Oct 19 15:32:00.603: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.012742406s Oct 19 15:32:02.603: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 6.012510975s Oct 19 15:32:04.603: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 8.012453461s Oct 19 15:32:06.601: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 10.010864297s Oct 19 15:32:08.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 12.011208398s Oct 19 15:32:10.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 14.011648218s Oct 19 15:32:12.605: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 16.014618848s Oct 19 15:32:14.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 18.011482845s Oct 19 15:32:16.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 20.011476571s Oct 19 15:32:18.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 22.011281889s Oct 19 15:32:20.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 24.011593907s Oct 19 15:32:22.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 26.011161648s Oct 19 15:32:24.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 28.011463763s Oct 19 15:32:26.600: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 30.009771048s Oct 19 15:32:28.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 32.011942956s Oct 19 15:32:30.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 34.011450174s Oct 19 15:32:32.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 36.011454271s Oct 19 15:32:34.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. 
Elapsed: 38.0119004s Oct 19 15:32:36.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 40.011646434s Oct 19 15:32:38.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 42.01163537s Oct 19 15:32:40.602: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 44.011685131s Oct 19 15:32:42.601: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=true. Elapsed: 46.01106771s Oct 19 15:32:44.604: INFO: Pod "client-a-bn9h5": Phase="Running", Reason="", readiness=false. Elapsed: 48.013484903s Oct 19 15:32:46.609: INFO: Pod "client-a-bn9h5": Phase="Failed", Reason="", readiness=false. Elapsed: 50.018335382s Oct 19 15:32:46.609: INFO: Pod "client-a-bn9h5" satisfied condition "completed" Oct 19 15:32:46.609: INFO: Waiting for client-a-bn9h5 to complete. Oct 19 15:32:46.609: INFO: Waiting up to 5m0s for pod "client-a-bn9h5" in namespace "e2e-network-policy-9251" to be "Succeeded or Failed" Oct 19 15:32:46.615: INFO: Pod "client-a-bn9h5": Phase="Failed", Reason="", readiness=false. 
Elapsed: 6.235765ms Oct 19 15:32:46.620: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-9251 describe po client-a-bn9h5' Oct 19 15:32:46.805: INFO: stderr: "" Oct 19 15:32:46.805: INFO: stdout: "Name: client-a-bn9h5\nNamespace: e2e-network-policy-9251\nPriority: 0\nService Account: default\nNode: worker03/192.168.200.33\nStart Time: Thu, 19 Oct 2023 15:31:56 +0000\nLabels: pod-name=client-a\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"portmap\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::50a\",\n \"10.128.10.171\"\n ],\n \"mac\": \"0a:44:4b:68:32:63\",\n \"default\": true,\n \"dns\": {},\n \"gateway\": [\n \"fd00::562\",\n \"10.128.10.226\"\n ]\n }]\nStatus: Failed\nIP: 10.128.10.171\nIPs:\n IP: 10.128.10.171\n IP: fd00::50a\nContainers:\n client:\n Container ID: cri-o://0f1612eeff4d6c474b6587dd8090733d0ba68204ccdab4c87da3a9a5c1101971\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: