isovalent / olm-for-cilium

OpenShift Operator Lifecycle Manager for Cilium

Add Cilium v1.13.4 #9

Closed · qmonnet closed this 1 year ago

qmonnet commented 1 year ago

Generated with scripts/add-release.sh $RELEASE, following the steps at https://github.com/isovalent/cilium-ee-olm/issues/116.
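
For reference, a minimal sketch of that invocation, assuming `$RELEASE` is the version string from the PR title (the exact argument format is an assumption; the authoritative steps are in the linked issue):

```sh
# Hypothetical invocation; the argument format is an assumption, see the
# release steps in isovalent/cilium-ee-olm#116 for the authoritative procedure.
RELEASE=v1.13.4
./scripts/add-release.sh "$RELEASE"
```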

Cc @michi-covalent

qmonnet commented 1 year ago

Only the expected failures are failing; the rest is passing. All good, all good.

Failing tests:

[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]
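
Both failures exercise egress rules whose `ipBlock` CIDRs point at an in-cluster pod IP, most likely failing because Cilium's CIDR-based matching is aimed at traffic to destinations outside the cluster rather than at pod IPs, which is why they are treated as expected failures here. The NetworkPolicy dump near the end of `results.txt` shows the two policies the IPBlock.Except test applies; the sketch below reconstructs them from that dump (the `kubectl` invocation and the namespace are illustrative, taken from this particular run):

```sh
# Policies reconstructed from the NetworkPolicy dump in results.txt; the names,
# selectors and CIDRs are as reported there, and the namespace is the one
# created by this test run. The test expects the /32 allow rule to take
# precedence so that client-a can still reach the server at 10.128.11.25;
# the TIMEOUT lines in the log show that connection being dropped instead.
kubectl apply -n e2e-network-policy-4773 -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-a-via-cidr-egress-rule
spec:
  podSelector:
    matchLabels:
      pod-name: client-a
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.128.11.25/32
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-client-a-via-except-cidr-egress-rule
spec:
  podSelector:
    matchLabels:
      pod-name: client-a
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.128.11.0/24
            except:
              - 10.128.11.25/32
EOF
```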
results.txt ``` started: 0/1/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/2/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/3/67 "[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/4/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/5/67 "[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/6/67 "[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/7/67 "[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/8/67 "[sig-network] DNS should provide DNS for pods for Subdomain [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/9/67 "[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/10/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2.5s) 2023-06-19T13:25:32 "[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/11/67 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (25.5s) 2023-06-19T13:25:55 "[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/12/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (27.3s) 2023-06-19T13:25:57 "[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/13/67 "[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance] 
[Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1.3s) 2023-06-19T13:25:58 "[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/14/67 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (32.4s) 2023-06-19T13:26:02 "[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/15/67 "[sig-network] Services should provide secure master service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (2.1s) 2023-06-19T13:26:04 "[sig-network] Services should provide secure master service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/16/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (39.3s) 2023-06-19T13:26:11 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/17/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (41.9s) 2023-06-19T13:26:11 "[sig-network] DNS should provide DNS for pods for Subdomain [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/18/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (43.1s) 2023-06-19T13:26:12 "[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/19/67 "[sig-network] Ingress API should support creating Ingress API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1.7s) 2023-06-19T13:26:14 "[sig-network] Ingress API should support creating Ingress API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/20/67 "[sig-network] DNS should provide DNS for pods for Hostname [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (5.4s) 2023-06-19T13:26:20 "[sig-network] DNS should provide DNS for pods for Hostname [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/21/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (27.9s) 2023-06-19T13:26:26 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp 
[NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/22/67 "[sig-network] Services should be able to create a functioning NodePort service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (8.3s) 2023-06-19T13:26:34 "[sig-network] Services should be able to create a functioning NodePort service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/23/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1m32s) 2023-06-19T13:27:01 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/24/67 "[sig-network] Services should find a service from listing all namespaces [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1.4s) 2023-06-19T13:27:03 "[sig-network] Services should find a service from listing all namespaces [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/25/67 "[sig-network] Services should serve a basic endpoint from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1m16s) 2023-06-19T13:27:11 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/26/67 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1m44s) 2023-06-19T13:27:13 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/27/67 "[sig-network] Services should serve multiport endpoints from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (3.5s) 2023-06-19T13:27:15 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/28/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (14.1s) 2023-06-19T13:27:17 "[sig-network] Services should serve a basic endpoint from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/29/67 "[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1m16s) 2023-06-19T13:27:20 "[sig-network] NetworkPolicyLegacy 
[LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/30/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1m9s) 2023-06-19T13:27:20 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/31/67 "[sig-network] DNS should provide DNS for the cluster [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (3.6s) 2023-06-19T13:27:24 "[sig-network] DNS should provide DNS for the cluster [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/32/67 "[sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (10.7s) 2023-06-19T13:27:24 "[sig-network] Services should serve multiport endpoints from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/33/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (7.4s) 2023-06-19T13:27:24 "[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/34/67 "[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (5.5s) 2023-06-19T13:27:29 "[sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/35/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1m20s) 2023-06-19T13:27:31 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/36/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (9.8s) 2023-06-19T13:27:34 "[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/37/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and 
client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2m17s) 2023-06-19T13:27:46 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/38/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2m18s) 2023-06-19T13:27:47 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/39/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (23.8s) 2023-06-19T13:27:53 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/40/67 "[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (21.9s) 2023-06-19T13:27:53 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/41/67 "[sig-network] Services should complete a service status lifecycle [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1.6s) 2023-06-19T13:27:55 "[sig-network] Services should complete a service status lifecycle [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/42/67 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (11.2s) 2023-06-19T13:28:04 "[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/43/67 "[sig-network] Services should test the lifecycle of an Endpoint [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1.3s) 2023-06-19T13:28:05 "[sig-network] Services should test the lifecycle of an Endpoint [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/44/67 "[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1.4s) 2023-06-19T13:28:07 "[sig-network] EndpointSlice 
should support creating EndpointSlice API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/45/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (25.7s) 2023-06-19T13:28:20 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/46/67 "[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (2m6s) 2023-06-19T13:28:25 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/47/67 "[sig-network] Services should delete a collection of services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1.4s) 2023-06-19T13:28:27 "[sig-network] Services should delete a collection of services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/48/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1m4s) 2023-06-19T13:28:28 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/49/67 "[sig-network] DNS should support configurable pod DNS nameservers [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1m10s) 2023-06-19T13:28:30 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/50/67 "[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] [Serial:Self] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (10.5s) 2023-06-19T13:28:31 "[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/51/67 "[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (3.7s) 2023-06-19T13:28:32 "[sig-network] DNS should support configurable pod DNS nameservers [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/52/67 "[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] 
[Suite:k8s]" passed: (8.1s) 2023-06-19T13:28:39 "[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/53/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2m6s) 2023-06-19T13:28:40 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/54/67 "[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (14.8s) 2023-06-19T13:28:45 "[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] [Serial:Self] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/55/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1m8s) 2023-06-19T13:28:54 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/56/67 "[sig-network] NetworkPolicy API should support creating NetworkPolicy API operations [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1.5s) 2023-06-19T13:28:55 "[sig-network] NetworkPolicy API should support creating NetworkPolicy API operations [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/57/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1m10s) 2023-06-19T13:28:57 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 0/58/67 "[sig-network] IngressClass API should support creating IngressClass API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1.4s) 2023-06-19T13:28:59 "[sig-network] IngressClass API should support creating IngressClass API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/59/67 "[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (11s) 2023-06-19T13:29:10 "[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] 
[Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/60/67 "[sig-network] DNS should provide DNS for services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (42.8s) 2023-06-19T13:29:14 "[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/61/67 "[sig-network] DNS should provide DNS for ExternalName services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (36.6s) 2023-06-19T13:29:17 "[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 0/62/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" Jun 19 13:27:34.577: INFO: Enabling in-tree volume drivers [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/test.go:58 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] set up framework | framework.go:178 STEP: Creating a kubernetes client 06/19/23 13:27:35.411 STEP: Building a namespace api object, basename network-policy 06/19/23 13:27:35.412 Jun 19 13:27:35.467: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace 06/19/23 13:27:35.67 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/19/23 13:27:35.676 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:72 [BeforeEach] NetworkPolicy between server and client k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:78 STEP: Creating a simple server that serves on port 80 and 81. 
06/19/23 13:27:35.681 STEP: Creating a server pod server in namespace e2e-network-policy-4773 06/19/23 13:27:35.681 W0619 13:27:35.701363 2229 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "server-container-80", "server-container-81" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "server-container-80", "server-container-81" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "server-container-80", "server-container-81" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "server-container-80", "server-container-81" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 13:27:35.701: INFO: Created pod server-ds8ng STEP: Creating a service svc-server for pod server in namespace e2e-network-policy-4773 06/19/23 13:27:35.701 Jun 19 13:27:35.783: INFO: Created service svc-server STEP: Waiting for pod ready 06/19/23 13:27:35.783 Jun 19 13:27:35.783: INFO: Waiting up to 5m0s for pod "server-ds8ng" in namespace "e2e-network-policy-4773" to be "running and ready" Jun 19 13:27:35.796: INFO: Pod "server-ds8ng": Phase="Pending", Reason="", readiness=false. Elapsed: 12.851932ms Jun 19 13:27:35.796: INFO: The phase of Pod server-ds8ng is Pending, waiting for it to be Running (with Ready = true) Jun 19 13:27:37.808: INFO: Pod "server-ds8ng": Phase="Running", Reason="", readiness=true. Elapsed: 2.025134219s Jun 19 13:27:37.808: INFO: The phase of Pod server-ds8ng is Running (Ready = true) Jun 19 13:27:37.808: INFO: Pod "server-ds8ng" satisfied condition "running and ready" STEP: Testing pods can connect to both ports when no policy is present. 06/19/23 13:27:37.808 STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. 06/19/23 13:27:37.808 W0619 13:27:37.825374 2229 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 13:27:37.825: INFO: Waiting for client-can-connect-80-4g9vr to complete. Jun 19 13:27:37.825: INFO: Waiting up to 3m0s for pod "client-can-connect-80-4g9vr" in namespace "e2e-network-policy-4773" to be "completed" Jun 19 13:27:37.830: INFO: Pod "client-can-connect-80-4g9vr": Phase="Pending", Reason="", readiness=false. Elapsed: 5.440128ms Jun 19 13:27:39.838: INFO: Pod "client-can-connect-80-4g9vr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01345584s Jun 19 13:27:41.837: INFO: Pod "client-can-connect-80-4g9vr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012181482s Jun 19 13:27:41.837: INFO: Pod "client-can-connect-80-4g9vr" satisfied condition "completed" Jun 19 13:27:41.837: INFO: Waiting for client-can-connect-80-4g9vr to complete. Jun 19 13:27:41.837: INFO: Waiting up to 5m0s for pod "client-can-connect-80-4g9vr" in namespace "e2e-network-policy-4773" to be "Succeeded or Failed" Jun 19 13:27:41.844: INFO: Pod "client-can-connect-80-4g9vr": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.256583ms STEP: Saw pod success 06/19/23 13:27:41.844 Jun 19 13:27:41.844: INFO: Pod "client-can-connect-80-4g9vr" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-can-connect-80-4g9vr 06/19/23 13:27:41.844 STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. 06/19/23 13:27:41.878 W0619 13:27:41.892288 2229 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 13:27:41.892: INFO: Waiting for client-can-connect-81-dh2n9 to complete. Jun 19 13:27:41.892: INFO: Waiting up to 3m0s for pod "client-can-connect-81-dh2n9" in namespace "e2e-network-policy-4773" to be "completed" Jun 19 13:27:41.898: INFO: Pod "client-can-connect-81-dh2n9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.200433ms Jun 19 13:27:43.905: INFO: Pod "client-can-connect-81-dh2n9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012982581s Jun 19 13:27:45.906: INFO: Pod "client-can-connect-81-dh2n9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013737849s Jun 19 13:27:45.906: INFO: Pod "client-can-connect-81-dh2n9" satisfied condition "completed" Jun 19 13:27:45.906: INFO: Waiting for client-can-connect-81-dh2n9 to complete. Jun 19 13:27:45.906: INFO: Waiting up to 5m0s for pod "client-can-connect-81-dh2n9" in namespace "e2e-network-policy-4773" to be "Succeeded or Failed" Jun 19 13:27:45.912: INFO: Pod "client-can-connect-81-dh2n9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.976461ms STEP: Saw pod success 06/19/23 13:27:45.912 Jun 19 13:27:45.912: INFO: Pod "client-can-connect-81-dh2n9" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-can-connect-81-dh2n9 06/19/23 13:27:45.912 [It] should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1477 STEP: Creating client-a which should not be able to contact the server. 06/19/23 13:27:45.961 STEP: Creating client pod client-a that should not be able to connect to svc-server. 06/19/23 13:27:45.961 W0619 13:27:45.978501 2229 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 13:27:45.978: INFO: Waiting for client-a-bjkxp to complete. Jun 19 13:27:45.978: INFO: Waiting up to 5m0s for pod "client-a-bjkxp" in namespace "e2e-network-policy-4773" to be "Succeeded or Failed" Jun 19 13:27:45.990: INFO: Pod "client-a-bjkxp": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.729276ms Jun 19 13:27:48.014: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 2.035643472s Jun 19 13:27:49.998: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 4.019396391s Jun 19 13:27:51.996: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 6.018119979s Jun 19 13:27:53.997: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 8.01923332s Jun 19 13:27:55.996: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 10.018000786s Jun 19 13:27:57.996: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 12.01788829s Jun 19 13:27:59.996: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 14.018308027s Jun 19 13:28:01.998: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 16.019374884s Jun 19 13:28:03.997: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 18.019256331s Jun 19 13:28:05.998: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 20.019969357s Jun 19 13:28:08.002: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 22.024092751s Jun 19 13:28:09.997: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 24.018482745s Jun 19 13:28:12.000: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 26.021410323s Jun 19 13:28:13.996: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 28.018225945s Jun 19 13:28:15.996: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 30.017908894s Jun 19 13:28:17.998: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 32.019916939s Jun 19 13:28:19.996: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 34.017782047s Jun 19 13:28:21.996: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 36.018026751s Jun 19 13:28:23.998: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 38.019697638s Jun 19 13:28:25.999: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 40.020493457s Jun 19 13:28:27.998: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 42.019798042s Jun 19 13:28:29.996: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 44.017620043s Jun 19 13:28:32.005: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=true. Elapsed: 46.026689972s Jun 19 13:28:34.010: INFO: Pod "client-a-bjkxp": Phase="Running", Reason="", readiness=false. Elapsed: 48.031892155s Jun 19 13:28:35.999: INFO: Pod "client-a-bjkxp": Phase="Failed", Reason="", readiness=false. Elapsed: 50.020560108s STEP: Cleaning up the pod client-a-bjkxp 06/19/23 13:28:35.999 STEP: Creating client-a which should now be able to contact the server. 06/19/23 13:28:36.035 STEP: Creating client pod client-a that should successfully connect to svc-server. 
06/19/23 13:28:36.035 W0619 13:28:36.062212 2229 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 13:28:36.062: INFO: Waiting for client-a-hpzgz to complete. Jun 19 13:28:36.062: INFO: Waiting up to 3m0s for pod "client-a-hpzgz" in namespace "e2e-network-policy-4773" to be "completed" Jun 19 13:28:36.073: INFO: Pod "client-a-hpzgz": Phase="Pending", Reason="", readiness=false. Elapsed: 11.076909ms Jun 19 13:28:38.081: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 2.018765011s Jun 19 13:28:40.082: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 4.020529017s Jun 19 13:28:42.078: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 6.016669449s Jun 19 13:28:44.080: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 8.018470649s Jun 19 13:28:46.090: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 10.02769661s Jun 19 13:28:48.080: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 12.018591953s Jun 19 13:28:50.085: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 14.022769932s Jun 19 13:28:52.080: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 16.01806629s Jun 19 13:28:54.079: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 18.017505609s Jun 19 13:28:56.081: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 20.018684219s Jun 19 13:28:58.090: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 22.027775806s Jun 19 13:29:00.082: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 24.020323049s Jun 19 13:29:02.081: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 26.019682074s Jun 19 13:29:04.085: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 28.023335382s Jun 19 13:29:06.079: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 30.017402594s Jun 19 13:29:08.080: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 32.01845005s Jun 19 13:29:10.081: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 34.018848517s Jun 19 13:29:12.094: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 36.031834372s Jun 19 13:29:14.086: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 38.024125424s Jun 19 13:29:16.081: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 40.018719539s Jun 19 13:29:18.080: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 42.018102497s Jun 19 13:29:20.080: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. Elapsed: 44.017909834s Jun 19 13:29:22.080: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=true. 
Elapsed: 46.017810966s Jun 19 13:29:24.079: INFO: Pod "client-a-hpzgz": Phase="Running", Reason="", readiness=false. Elapsed: 48.016836372s Jun 19 13:29:26.080: INFO: Pod "client-a-hpzgz": Phase="Failed", Reason="", readiness=false. Elapsed: 50.018491301s Jun 19 13:29:26.080: INFO: Pod "client-a-hpzgz" satisfied condition "completed" Jun 19 13:29:26.080: INFO: Waiting for client-a-hpzgz to complete. Jun 19 13:29:26.080: INFO: Waiting up to 5m0s for pod "client-a-hpzgz" in namespace "e2e-network-policy-4773" to be "Succeeded or Failed" Jun 19 13:29:26.087: INFO: Pod "client-a-hpzgz": Phase="Failed", Reason="", readiness=false. Elapsed: 6.807803ms Jun 19 13:29:26.098: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-4773 describe po client-a-hpzgz' Jun 19 13:29:26.252: INFO: stderr: "" Jun 19 13:29:26.252: INFO: stdout: "Name: client-a-hpzgz\nNamespace: e2e-network-policy-4773\nPriority: 0\nService Account: default\nNode: worker02/192.168.200.32\nStart Time: Mon, 19 Jun 2023 13:28:36 +0000\nLabels: pod-name=client-a\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::466\",\n \"10.128.8.154\"\n ],\n \"mac\": \"52:5c:e0:ab:fb:70\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::466\",\n \"10.128.8.154\"\n ],\n \"mac\": \"52:5c:e0:ab:fb:70\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Failed\nIP: 10.128.8.154\nIPs:\n IP: 10.128.8.154\n IP: fd00::466\nContainers:\n client:\n Container ID: cri-o://70bd01a66e1e8b02a0d759732bbff3ea1d48a7dfd2f105c411ded1a136bc3faa\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: \n Host Port: \n Command:\n /bin/sh\n Args:\n -c\n for i in $(seq 1 5); do /agnhost connect 172.30.212.56:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1\n State: Terminated\n Reason: Error\n Exit Code: 1\n Started: Mon, 19 Jun 2023 13:28:37 +0000\n Finished: Mon, 19 Jun 2023 13:29:22 +0000\n Ready: False\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7xqr7 (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n kube-api-access-7xqr7:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 50s default-scheduler Successfully assigned e2e-network-policy-4773/client-a-hpzgz to worker02 by cp01\n Normal AddedInterface 50s multus Add eth0 [fd00::466/128 10.128.8.154/32] from cilium\n Normal Pulled 50s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 49s kubelet Created container client\n Normal 
Started 49s kubelet Started container client\n" Jun 19 13:29:26.252: INFO: Output of kubectl describe client-a-hpzgz: Name: client-a-hpzgz Namespace: e2e-network-policy-4773 Priority: 0 Service Account: default Node: worker02/192.168.200.32 Start Time: Mon, 19 Jun 2023 13:28:36 +0000 Labels: pod-name=client-a Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::466", "10.128.8.154" ], "mac": "52:5c:e0:ab:fb:70", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::466", "10.128.8.154" ], "mac": "52:5c:e0:ab:fb:70", "default": true, "dns": {} }] Status: Failed IP: 10.128.8.154 IPs: IP: 10.128.8.154 IP: fd00::466 Containers: client: Container ID: cri-o://70bd01a66e1e8b02a0d759732bbff3ea1d48a7dfd2f105c411ded1a136bc3faa Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: Host Port: Command: /bin/sh Args: -c for i in $(seq 1 5); do /agnhost connect 172.30.212.56:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1 State: Terminated Reason: Error Exit Code: 1 Started: Mon, 19 Jun 2023 13:28:37 +0000 Finished: Mon, 19 Jun 2023 13:29:22 +0000 Ready: False Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7xqr7 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-7xqr7: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 50s default-scheduler Successfully assigned e2e-network-policy-4773/client-a-hpzgz to worker02 by cp01 Normal AddedInterface 50s multus Add eth0 [fd00::466/128 10.128.8.154/32] from cilium Normal Pulled 50s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 49s kubelet Created container client Normal Started 49s kubelet Started container client Jun 19 13:29:26.252: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-4773 logs client-a-hpzgz --tail=100' Jun 19 13:29:26.405: INFO: stderr: "" Jun 19 13:29:26.405: INFO: stdout: "TIMEOUT\nTIMEOUT\nTIMEOUT\nTIMEOUT\nTIMEOUT\n" Jun 19 13:29:26.405: INFO: Last 100 log lines of client-a-hpzgz: TIMEOUT TIMEOUT TIMEOUT TIMEOUT TIMEOUT Jun 19 13:29:26.405: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-4773 describe po server-ds8ng' Jun 19 13:29:26.542: INFO: stderr: "" Jun 19 13:29:26.543: INFO: stdout: "Name: server-ds8ng\nNamespace: e2e-network-policy-4773\nPriority: 0\nService Account: default\nNode: worker03/192.168.200.33\nStart Time: Mon, 19 Jun 2023 13:27:35 +0000\nLabels: pod-name=server\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n 
\"interface\": \"eth0\",\n \"ips\": [\n \"fd00::50b\",\n \"10.128.11.25\"\n ],\n \"mac\": \"9a:e5:21:c7:0a:8b\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::50b\",\n \"10.128.11.25\"\n ],\n \"mac\": \"9a:e5:21:c7:0a:8b\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Running\nIP: 10.128.11.25\nIPs:\n IP: 10.128.11.25\n IP: fd00::50b\nContainers:\n server-container-80:\n Container ID: cri-o://4ee6e7aa849a63fdb550b117fb9dcc5a21096bc855787cde69dacfb85738bbd4\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 80/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Mon, 19 Jun 2023 13:27:36 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_80: foo\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lvnwn (ro)\n server-container-81:\n Container ID: cri-o://01e987dfb4b640e334bace6522775180506b85bec9e46c264b874d042ce2ccfa\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 81/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Mon, 19 Jun 2023 13:27:37 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_81: foo\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lvnwn (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-lvnwn:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 110s default-scheduler Successfully assigned e2e-network-policy-4773/server-ds8ng to worker03 by cp01\n Normal AddedInterface 110s multus Add eth0 [fd00::50b/128 10.128.11.25/32] from cilium\n Normal Pulled 110s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 110s kubelet Created container server-container-80\n Normal Started 110s kubelet Started container server-container-80\n Normal Pulled 110s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 110s kubelet Created container server-container-81\n Normal Started 109s kubelet Started container server-container-81\n" Jun 19 13:29:26.543: INFO: Output of kubectl describe server-ds8ng: Name: server-ds8ng Namespace: 
e2e-network-policy-4773 Priority: 0 Service Account: default Node: worker03/192.168.200.33 Start Time: Mon, 19 Jun 2023 13:27:35 +0000 Labels: pod-name=server Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::50b", "10.128.11.25" ], "mac": "9a:e5:21:c7:0a:8b", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::50b", "10.128.11.25" ], "mac": "9a:e5:21:c7:0a:8b", "default": true, "dns": {} }] Status: Running IP: 10.128.11.25 IPs: IP: 10.128.11.25 IP: fd00::50b Containers: server-container-80: Container ID: cri-o://4ee6e7aa849a63fdb550b117fb9dcc5a21096bc855787cde69dacfb85738bbd4 Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 80/TCP Host Port: 0/TCP Args: porter State: Running Started: Mon, 19 Jun 2023 13:27:36 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: SERVE_PORT_80: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lvnwn (ro) server-container-81: Container ID: cri-o://01e987dfb4b640e334bace6522775180506b85bec9e46c264b874d042ce2ccfa Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 81/TCP Host Port: 0/TCP Args: porter State: Running Started: Mon, 19 Jun 2023 13:27:37 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: SERVE_PORT_81: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lvnwn (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-lvnwn: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 110s default-scheduler Successfully assigned e2e-network-policy-4773/server-ds8ng to worker03 by cp01 Normal AddedInterface 110s multus Add eth0 [fd00::50b/128 10.128.11.25/32] from cilium Normal Pulled 110s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 110s kubelet Created container server-container-80 Normal Started 110s kubelet Started container server-container-80 Normal Pulled 110s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 110s kubelet Created container server-container-81 Normal Started 109s kubelet Started container server-container-81 Jun 19 13:29:26.543: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 
--kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-4773 logs server-ds8ng --tail=100' Jun 19 13:29:26.685: INFO: stderr: "Defaulted container \"server-container-80\" out of: server-container-80, server-container-81\n" Jun 19 13:29:26.685: INFO: stdout: "" Jun 19 13:29:26.685: INFO: Last 100 log lines of server-ds8ng: Jun 19 13:29:26.706: FAIL: Pod client-a-hpzgz should be able to connect to service svc-server, but was not able to connect. Pod logs: TIMEOUT TIMEOUT TIMEOUT TIMEOUT TIMEOUT Current NetworkPolicies: [{{ } {allow-client-a-via-cidr-egress-rule e2e-network-policy-4773 edae6eb1-49fa-434b-9869-a61ac9212127 74796 1 2023-06-19 13:28:36 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-19 13:28:36 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.11.25/32,Except:[],}}]}] [Egress]} {[]}} {{ } {deny-client-a-via-except-cidr-egress-rule e2e-network-policy-4773 c8367b28-6b32-4c69-a872-873eb5f318d8 72426 1 2023-06-19 13:27:45 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-19 13:27:45 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.11.0/24,Except:[10.128.11.25/32],}}]}] [Egress]} {[]}}] Pods: [Pod: client-a-hpzgz, Status: &PodStatus{Phase:Failed,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:28:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:22 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:22 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:28:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.32,PodIP:10.128.8.154,StartTime:2023-06-19 13:28:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-06-19 13:28:37 +0000 UTC,FinishedAt:2023-06-19 13:29:22 +0000 UTC,ContainerID:cri-o://70bd01a66e1e8b02a0d759732bbff3ea1d48a7dfd2f105c411ded1a136bc3faa,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://70bd01a66e1e8b02a0d759732bbff3ea1d48a7dfd2f105c411ded1a136bc3faa,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.8.154,},PodIP{IP:fd00::466,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: server-ds8ng, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:27:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 
13:27:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:27:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:27:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.11.25,StartTime:2023-06-19 13:27:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 13:27:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://4ee6e7aa849a63fdb550b117fb9dcc5a21096bc855787cde69dacfb85738bbd4,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 13:27:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://01e987dfb4b640e334bace6522775180506b85bec9e46c264b874d042ce2ccfa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.11.25,},PodIP{IP:fd00::50b,},},EphemeralContainerStatuses:[]ContainerStatus{},} ] Full Stack Trace k8s.io/kubernetes/test/e2e/network/netpol.checkConnectivity(0xc001dd5e00, 0xc00186c420, 0xc0065e6d80, 0xc006045900) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941 +0x355 k8s.io/kubernetes/test/e2e/network/netpol.testCanConnectProtocol(0xc001dd5e00, 0xc00186c420, {0x8a33123, 0x8}, 0xc006045900, 0xc0021539e0?, {0x8a24aec, 0x3}) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1913 +0x1be k8s.io/kubernetes/test/e2e/network/netpol.testCanConnect(...) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1897 k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.29.2() k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1569 +0x47 github.com/onsi/ginkgo/v2.By({0x8c200aa, 0x41}, {0xc00649fe50, 0x1, 0x0?}) github.com/onsi/ginkgo/v2@v2.4.0/core_dsl.go:535 +0x525 k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.29() k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1568 +0xb5b github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0xc00624bce0, 0x0}) github.com/onsi/ginkgo/v2@v2.4.0/internal/node.go:449 +0x1b github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:757 +0x98 created by github.com/onsi/ginkgo/v2/internal.(*Suite).runNode github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:745 +0xe3d STEP: Cleaning up the pod client-a-hpzgz 06/19/23 13:29:26.706 STEP: Cleaning up the policy. 06/19/23 13:29:26.734 [AfterEach] NetworkPolicy between server and client k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:96 STEP: Cleaning up the server. 
06/19/23 13:29:26.75 STEP: Cleaning up the server's service. 06/19/23 13:29:26.767 [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] dump namespaces | framework.go:196 STEP: dump namespace information after failure 06/19/23 13:29:26.845 STEP: Collecting events from namespace "e2e-network-policy-4773". 06/19/23 13:29:26.845 STEP: Found 30 events. 06/19/23 13:29:26.852 Jun 19 13:29:26.852: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-bjkxp: { } Scheduled: Successfully assigned e2e-network-policy-4773/client-a-bjkxp to worker01 by cp01 Jun 19 13:29:26.852: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-hpzgz: { } Scheduled: Successfully assigned e2e-network-policy-4773/client-a-hpzgz to worker02 by cp01 Jun 19 13:29:26.852: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-4g9vr: { } Scheduled: Successfully assigned e2e-network-policy-4773/client-can-connect-80-4g9vr to worker03 by cp01 Jun 19 13:29:26.852: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-dh2n9: { } Scheduled: Successfully assigned e2e-network-policy-4773/client-can-connect-81-dh2n9 to worker01 by cp01 Jun 19 13:29:26.852: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-ds8ng: { } Scheduled: Successfully assigned e2e-network-policy-4773/server-ds8ng to worker03 by cp01 Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:36 +0000 UTC - event for server-ds8ng: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:36 +0000 UTC - event for server-ds8ng: {kubelet worker03} Created: Created container server-container-80 Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:36 +0000 UTC - event for server-ds8ng: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:36 +0000 UTC - event for server-ds8ng: {multus } AddedInterface: Add eth0 [fd00::50b/128 10.128.11.25/32] from cilium Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:36 +0000 UTC - event for server-ds8ng: {kubelet worker03} Started: Started container server-container-80 Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:36 +0000 UTC - event for server-ds8ng: {kubelet worker03} Created: Created container server-container-81 Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:37 +0000 UTC - event for server-ds8ng: {kubelet worker03} Started: Started container server-container-81 Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:38 +0000 UTC - event for client-can-connect-80-4g9vr: {multus } AddedInterface: Add eth0 [fd00::52b/128 10.128.11.115/32] from cilium Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:38 +0000 UTC - event for client-can-connect-80-4g9vr: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:38 +0000 UTC - event for client-can-connect-80-4g9vr: {kubelet worker03} Created: Created container client Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:39 +0000 UTC - event for client-can-connect-80-4g9vr: {kubelet worker03} Started: Started 
container client Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:42 +0000 UTC - event for client-can-connect-81-dh2n9: {multus } AddedInterface: Add eth0 [fd00::38d/128 10.128.6.125/32] from cilium Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:42 +0000 UTC - event for client-can-connect-81-dh2n9: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:43 +0000 UTC - event for client-can-connect-81-dh2n9: {kubelet worker01} Created: Created container client Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:43 +0000 UTC - event for client-can-connect-81-dh2n9: {kubelet worker01} Started: Started container client Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:46 +0000 UTC - event for client-a-bjkxp: {multus } AddedInterface: Add eth0 [fd00::3b5/128 10.128.7.137/32] from cilium Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:46 +0000 UTC - event for client-a-bjkxp: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:47 +0000 UTC - event for client-a-bjkxp: {kubelet worker01} Created: Created container client Jun 19 13:29:26.852: INFO: At 2023-06-19 13:27:47 +0000 UTC - event for client-a-bjkxp: {kubelet worker01} Started: Started container client Jun 19 13:29:26.852: INFO: At 2023-06-19 13:28:36 +0000 UTC - event for client-a-hpzgz: {multus } AddedInterface: Add eth0 [fd00::466/128 10.128.8.154/32] from cilium Jun 19 13:29:26.852: INFO: At 2023-06-19 13:28:36 +0000 UTC - event for client-a-hpzgz: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 13:29:26.852: INFO: At 2023-06-19 13:28:37 +0000 UTC - event for client-a-hpzgz: {kubelet worker02} Created: Created container client Jun 19 13:29:26.852: INFO: At 2023-06-19 13:28:37 +0000 UTC - event for client-a-hpzgz: {kubelet worker02} Started: Started container client Jun 19 13:29:26.852: INFO: At 2023-06-19 13:29:26 +0000 UTC - event for server-ds8ng: {kubelet worker03} Killing: Stopping container server-container-80 Jun 19 13:29:26.852: INFO: At 2023-06-19 13:29:26 +0000 UTC - event for server-ds8ng: {kubelet worker03} Killing: Stopping container server-container-81 Jun 19 13:29:26.857: INFO: POD NODE PHASE GRACE CONDITIONS Jun 19 13:29:26.857: INFO: server-ds8ng worker03 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 13:27:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 13:27:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 13:27:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 13:27:35 +0000 UTC }] Jun 19 13:29:26.857: INFO: Jun 19 13:29:26.865: INFO: skipping dumping cluster info - cluster too large [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] tear down framework | framework.go:193 STEP: Destroying namespace "e2e-network-policy-4773" for this suite. 06/19/23 13:29:26.865 fail [k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941]: Jun 19 13:29:26.706: Pod client-a-hpzgz should be able to connect to service svc-server, but was not able to connect. 
Pod logs: TIMEOUT TIMEOUT TIMEOUT TIMEOUT TIMEOUT Current NetworkPolicies: [{{ } {allow-client-a-via-cidr-egress-rule e2e-network-policy-4773 edae6eb1-49fa-434b-9869-a61ac9212127 74796 1 2023-06-19 13:28:36 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-19 13:28:36 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.11.25/32,Except:[],}}]}] [Egress]} {[]}} {{ } {deny-client-a-via-except-cidr-egress-rule e2e-network-policy-4773 c8367b28-6b32-4c69-a872-873eb5f318d8 72426 1 2023-06-19 13:27:45 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-19 13:27:45 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.11.0/24,Except:[10.128.11.25/32],}}]}] [Egress]} {[]}}] Pods: [Pod: client-a-hpzgz, Status: &PodStatus{Phase:Failed,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:28:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:22 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:22 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:28:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.32,PodIP:10.128.8.154,StartTime:2023-06-19 13:28:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-06-19 13:28:37 +0000 UTC,FinishedAt:2023-06-19 13:29:22 +0000 UTC,ContainerID:cri-o://70bd01a66e1e8b02a0d759732bbff3ea1d48a7dfd2f105c411ded1a136bc3faa,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://70bd01a66e1e8b02a0d759732bbff3ea1d48a7dfd2f105c411ded1a136bc3faa,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.8.154,},PodIP{IP:fd00::466,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: server-ds8ng, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:27:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:27:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:27:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:27:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.11.25,StartTime:2023-06-19 13:27:35 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 13:27:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://4ee6e7aa849a63fdb550b117fb9dcc5a21096bc855787cde69dacfb85738bbd4,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 13:27:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://01e987dfb4b640e334bace6522775180506b85bec9e46c264b874d042ce2ccfa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.11.25,},PodIP{IP:fd00::50b,},},EphemeralContainerStatuses:[]ContainerStatus{},} ] Ginkgo exit error 1: exit with code 1 failed: (1m53s) 2023-06-19T13:29:26 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 1/63/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (18.7s) 2023-06-19T13:29:28 "[sig-network] DNS should provide DNS for services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/64/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2m14s) 2023-06-19T13:29:29 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" started: 1/65/67 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (16.6s) 2023-06-19T13:29:31 "[sig-network] DNS should provide DNS for ExternalName services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" started: 1/66/67 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (3.5s) 
2023-06-19T13:29:34 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (28.9s) 2023-06-19T13:29:58 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]" passed: (1m10s) 2023-06-19T13:30:06 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2m4s) 2023-06-19T13:30:11 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1m30s) 2023-06-19T13:30:15 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (1m10s) 2023-06-19T13:30:36 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (2m4s) 2023-06-19T13:31:21 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" Jun 19 13:29:29.048: INFO: Enabling in-tree volume drivers [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/framework.go:1496 [BeforeEach] TOP-LEVEL github.com/openshift/origin/test/extended/util/test.go:58 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] set up framework | framework.go:178 STEP: Creating a kubernetes client 06/19/23 13:29:29.871 STEP: Building a namespace api object, basename network-policy 06/19/23 13:29:29.873 Jun 19 13:29:29.919: INFO: About to run a Kube e2e test, ensuring namespace is privileged STEP: Waiting for a default service account to be provisioned in namespace 06/19/23 13:29:30.121 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 06/19/23 13:29:30.126 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:31 [BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:72 [BeforeEach] NetworkPolicy between server and client 
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:78 STEP: Creating a simple server that serves on port 80 and 81. 06/19/23 13:29:30.13 STEP: Creating a server pod server in namespace e2e-network-policy-754 06/19/23 13:29:30.131 W0619 13:29:30.148870 4117 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "server-container-80", "server-container-81" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "server-container-80", "server-container-81" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "server-container-80", "server-container-81" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "server-container-80", "server-container-81" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 13:29:30.149: INFO: Created pod server-pllpz STEP: Creating a service svc-server for pod server in namespace e2e-network-policy-754 06/19/23 13:29:30.149 Jun 19 13:29:30.183: INFO: Created service svc-server STEP: Waiting for pod ready 06/19/23 13:29:30.183 Jun 19 13:29:30.183: INFO: Waiting up to 5m0s for pod "server-pllpz" in namespace "e2e-network-policy-754" to be "running and ready" Jun 19 13:29:30.208: INFO: Pod "server-pllpz": Phase="Pending", Reason="", readiness=false. Elapsed: 25.023131ms Jun 19 13:29:30.208: INFO: The phase of Pod server-pllpz is Pending, waiting for it to be Running (with Ready = true) Jun 19 13:29:32.219: INFO: Pod "server-pllpz": Phase="Running", Reason="", readiness=true. Elapsed: 2.035724874s Jun 19 13:29:32.219: INFO: The phase of Pod server-pllpz is Running (Ready = true) Jun 19 13:29:32.219: INFO: Pod "server-pllpz" satisfied condition "running and ready" STEP: Testing pods can connect to both ports when no policy is present. 06/19/23 13:29:32.219 STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. 06/19/23 13:29:32.219 W0619 13:29:32.233349 4117 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 13:29:32.233: INFO: Waiting for client-can-connect-80-vsqpt to complete. Jun 19 13:29:32.233: INFO: Waiting up to 3m0s for pod "client-can-connect-80-vsqpt" in namespace "e2e-network-policy-754" to be "completed" Jun 19 13:29:32.246: INFO: Pod "client-can-connect-80-vsqpt": Phase="Pending", Reason="", readiness=false. Elapsed: 13.046725ms Jun 19 13:29:34.253: INFO: Pod "client-can-connect-80-vsqpt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020393518s Jun 19 13:29:36.254: INFO: Pod "client-can-connect-80-vsqpt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021030168s Jun 19 13:29:38.255: INFO: Pod "client-can-connect-80-vsqpt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02155983s Jun 19 13:29:38.255: INFO: Pod "client-can-connect-80-vsqpt" satisfied condition "completed" Jun 19 13:29:38.255: INFO: Waiting for client-can-connect-80-vsqpt to complete. 
Jun 19 13:29:38.255: INFO: Waiting up to 5m0s for pod "client-can-connect-80-vsqpt" in namespace "e2e-network-policy-754" to be "Succeeded or Failed" Jun 19 13:29:38.259: INFO: Pod "client-can-connect-80-vsqpt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.522347ms STEP: Saw pod success 06/19/23 13:29:38.259 Jun 19 13:29:38.259: INFO: Pod "client-can-connect-80-vsqpt" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-can-connect-80-vsqpt 06/19/23 13:29:38.259 STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. 06/19/23 13:29:38.303 W0619 13:29:38.321224 4117 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 13:29:38.321: INFO: Waiting for client-can-connect-81-2n9qd to complete. Jun 19 13:29:38.321: INFO: Waiting up to 3m0s for pod "client-can-connect-81-2n9qd" in namespace "e2e-network-policy-754" to be "completed" Jun 19 13:29:38.327: INFO: Pod "client-can-connect-81-2n9qd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.821169ms Jun 19 13:29:40.337: INFO: Pod "client-can-connect-81-2n9qd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016441279s Jun 19 13:29:42.335: INFO: Pod "client-can-connect-81-2n9qd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013642877s Jun 19 13:29:42.335: INFO: Pod "client-can-connect-81-2n9qd" satisfied condition "completed" Jun 19 13:29:42.335: INFO: Waiting for client-can-connect-81-2n9qd to complete. Jun 19 13:29:42.335: INFO: Waiting up to 5m0s for pod "client-can-connect-81-2n9qd" in namespace "e2e-network-policy-754" to be "Succeeded or Failed" Jun 19 13:29:42.343: INFO: Pod "client-can-connect-81-2n9qd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.815247ms STEP: Saw pod success 06/19/23 13:29:42.343 Jun 19 13:29:42.344: INFO: Pod "client-can-connect-81-2n9qd" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-can-connect-81-2n9qd 06/19/23 13:29:42.344 [It] should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s] k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1343 STEP: Creating a server pod pod-b in namespace e2e-network-policy-754 06/19/23 13:29:42.406 W0619 13:29:42.426685 4117 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "pod-b-container-80" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pod-b-container-80" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pod-b-container-80" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pod-b-container-80" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 13:29:42.426: INFO: Created pod pod-b-ctd6s STEP: Creating a service svc-pod-b for pod pod-b in namespace e2e-network-policy-754 06/19/23 13:29:42.426 Jun 19 13:29:42.474: INFO: Created service svc-pod-b STEP: Waiting for pod-b to be ready 06/19/23 13:29:42.474 Jun 19 13:29:42.474: INFO: Waiting up to 5m0s for pod "pod-b-ctd6s" in namespace "e2e-network-policy-754" to be "running and ready" Jun 19 13:29:42.482: INFO: Pod "pod-b-ctd6s": Phase="Pending", Reason="", readiness=false. Elapsed: 7.869966ms Jun 19 13:29:42.482: INFO: The phase of Pod pod-b-ctd6s is Pending, waiting for it to be Running (with Ready = true) Jun 19 13:29:44.489: INFO: Pod "pod-b-ctd6s": Phase="Running", Reason="", readiness=true. Elapsed: 2.01480204s Jun 19 13:29:44.489: INFO: The phase of Pod pod-b-ctd6s is Running (Ready = true) Jun 19 13:29:44.489: INFO: Pod "pod-b-ctd6s" satisfied condition "running and ready" Jun 19 13:29:44.489: INFO: Waiting up to 5m0s for pod "pod-b-ctd6s" in namespace "e2e-network-policy-754" to be "running" Jun 19 13:29:44.493: INFO: Pod "pod-b-ctd6s": Phase="Running", Reason="", readiness=true. Elapsed: 4.107198ms Jun 19 13:29:44.493: INFO: Pod "pod-b-ctd6s" satisfied condition "running" STEP: Creating client-a which should be able to contact the server-b. 06/19/23 13:29:44.493 STEP: Creating client pod client-a that should successfully connect to svc-pod-b. 06/19/23 13:29:44.493 W0619 13:29:44.501937 4117 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 13:29:44.502: INFO: Waiting for client-a-lmxrn to complete. Jun 19 13:29:44.502: INFO: Waiting up to 3m0s for pod "client-a-lmxrn" in namespace "e2e-network-policy-754" to be "completed" Jun 19 13:29:44.509: INFO: Pod "client-a-lmxrn": Phase="Pending", Reason="", readiness=false. Elapsed: 7.65213ms Jun 19 13:29:46.518: INFO: Pod "client-a-lmxrn": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.015984098s Jun 19 13:29:48.515: INFO: Pod "client-a-lmxrn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013789144s Jun 19 13:29:48.515: INFO: Pod "client-a-lmxrn" satisfied condition "completed" Jun 19 13:29:48.515: INFO: Waiting for client-a-lmxrn to complete. Jun 19 13:29:48.515: INFO: Waiting up to 5m0s for pod "client-a-lmxrn" in namespace "e2e-network-policy-754" to be "Succeeded or Failed" Jun 19 13:29:48.521: INFO: Pod "client-a-lmxrn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.323522ms STEP: Saw pod success 06/19/23 13:29:48.521 Jun 19 13:29:48.521: INFO: Pod "client-a-lmxrn" satisfied condition "Succeeded or Failed" STEP: Cleaning up the pod client-a-lmxrn 06/19/23 13:29:48.521 STEP: Creating client-a which should not be able to contact the server-b. 06/19/23 13:29:48.554 STEP: Creating client pod client-a that should not be able to connect to svc-pod-b. 06/19/23 13:29:48.554 W0619 13:29:48.568118 4117 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 13:29:48.568: INFO: Waiting for client-a-grdn2 to complete. Jun 19 13:29:48.568: INFO: Waiting up to 5m0s for pod "client-a-grdn2" in namespace "e2e-network-policy-754" to be "Succeeded or Failed" Jun 19 13:29:48.580: INFO: Pod "client-a-grdn2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.100251ms Jun 19 13:29:50.588: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 2.019953526s Jun 19 13:29:52.620: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 4.052077188s Jun 19 13:29:54.586: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 6.01791638s Jun 19 13:29:56.586: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 8.018550891s Jun 19 13:29:58.595: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 10.027512435s Jun 19 13:30:00.587: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 12.019594522s Jun 19 13:30:02.587: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 14.019488893s Jun 19 13:30:04.586: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 16.018531301s Jun 19 13:30:06.587: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 18.019650606s Jun 19 13:30:08.586: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 20.018677155s Jun 19 13:30:10.587: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 22.019534081s Jun 19 13:30:12.588: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 24.019982379s Jun 19 13:30:14.586: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 26.017770404s Jun 19 13:30:16.586: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 28.018667007s Jun 19 13:30:18.587: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. 
Elapsed: 30.01901023s Jun 19 13:30:20.586: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 32.018666436s Jun 19 13:30:22.587: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 34.019468829s Jun 19 13:30:24.643: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 36.075584412s Jun 19 13:30:26.587: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 38.01967722s Jun 19 13:30:28.588: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 40.020163152s Jun 19 13:30:30.587: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 42.018726629s Jun 19 13:30:32.587: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 44.019700235s Jun 19 13:30:34.585: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=true. Elapsed: 46.017502244s Jun 19 13:30:36.587: INFO: Pod "client-a-grdn2": Phase="Running", Reason="", readiness=false. Elapsed: 48.019489524s Jun 19 13:30:38.587: INFO: Pod "client-a-grdn2": Phase="Failed", Reason="", readiness=false. Elapsed: 50.019634199s STEP: Cleaning up the pod client-a-grdn2 06/19/23 13:30:38.588 STEP: Creating client-a which should be able to contact the server. 06/19/23 13:30:38.608 STEP: Creating client pod client-a that should successfully connect to svc-server. 06/19/23 13:30:38.608 W0619 13:30:38.629790 4117 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Jun 19 13:30:38.629: INFO: Waiting for client-a-zq9s9 to complete. Jun 19 13:30:38.629: INFO: Waiting up to 3m0s for pod "client-a-zq9s9" in namespace "e2e-network-policy-754" to be "completed" Jun 19 13:30:38.634: INFO: Pod "client-a-zq9s9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.850297ms Jun 19 13:30:40.642: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 2.012137348s Jun 19 13:30:42.639: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 4.009472153s Jun 19 13:30:44.641: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 6.011554157s Jun 19 13:30:46.649: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 8.019119329s Jun 19 13:30:48.641: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 10.011189514s Jun 19 13:30:50.641: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 12.011224768s Jun 19 13:30:52.642: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 14.012329543s Jun 19 13:30:54.642: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 16.012424204s Jun 19 13:30:56.641: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 18.011816346s Jun 19 13:30:58.642: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 20.01281852s Jun 19 13:31:00.641: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.011908676s Jun 19 13:31:02.642: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 24.013002037s Jun 19 13:31:04.640: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 26.010565176s Jun 19 13:31:06.641: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 28.011683402s Jun 19 13:31:08.641: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 30.012036842s Jun 19 13:31:10.641: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 32.011708458s Jun 19 13:31:12.640: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 34.010987088s Jun 19 13:31:14.640: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 36.010185256s Jun 19 13:31:16.641: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 38.011895299s Jun 19 13:31:18.645: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 40.01529485s Jun 19 13:31:20.641: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 42.011621323s Jun 19 13:31:22.647: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 44.017392539s Jun 19 13:31:24.641: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=true. Elapsed: 46.011158035s Jun 19 13:31:26.640: INFO: Pod "client-a-zq9s9": Phase="Running", Reason="", readiness=false. Elapsed: 48.010369748s Jun 19 13:31:28.641: INFO: Pod "client-a-zq9s9": Phase="Failed", Reason="", readiness=false. Elapsed: 50.011685577s Jun 19 13:31:28.641: INFO: Pod "client-a-zq9s9" satisfied condition "completed" Jun 19 13:31:28.641: INFO: Waiting for client-a-zq9s9 to complete. Jun 19 13:31:28.641: INFO: Waiting up to 5m0s for pod "client-a-zq9s9" in namespace "e2e-network-policy-754" to be "Succeeded or Failed" Jun 19 13:31:28.647: INFO: Pod "client-a-zq9s9": Phase="Failed", Reason="", readiness=false. 
Elapsed: 6.189179ms Jun 19 13:31:28.652: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-754 describe po client-a-zq9s9' Jun 19 13:31:28.803: INFO: stderr: "" Jun 19 13:31:28.803: INFO: stdout: "Name: client-a-zq9s9\nNamespace: e2e-network-policy-754\nPriority: 0\nService Account: default\nNode: worker03/192.168.200.33\nStart Time: Mon, 19 Jun 2023 13:30:38 +0000\nLabels: pod-name=client-a\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::5fd\",\n \"10.128.10.17\"\n ],\n \"mac\": \"ae:5e:e6:2c:9f:d4\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::5fd\",\n \"10.128.10.17\"\n ],\n \"mac\": \"ae:5e:e6:2c:9f:d4\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Failed\nIP: 10.128.10.17\nIPs:\n IP: 10.128.10.17\n IP: fd00::5fd\nContainers:\n client:\n Container ID: cri-o://fcab7edb76fab13d82840d6a201fd5fa4e9136602ddc7d1c2fb4d205d6999135\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: \n Host Port: \n Command:\n /bin/sh\n Args:\n -c\n for i in $(seq 1 5); do /agnhost connect 172.30.253.116:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1\n State: Terminated\n Reason: Error\n Exit Code: 1\n Started: Mon, 19 Jun 2023 13:30:39 +0000\n Finished: Mon, 19 Jun 2023 13:31:24 +0000\n Ready: False\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-swr8z (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n kube-api-access-swr8z:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 50s default-scheduler Successfully assigned e2e-network-policy-754/client-a-zq9s9 to worker03 by cp01\n Normal AddedInterface 49s multus Add eth0 [fd00::5fd/128 10.128.10.17/32] from cilium\n Normal Pulled 49s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 49s kubelet Created container client\n Normal Started 49s kubelet Started container client\n" Jun 19 13:31:28.803: INFO: Output of kubectl describe client-a-zq9s9: Name: client-a-zq9s9 Namespace: e2e-network-policy-754 Priority: 0 Service Account: default Node: worker03/192.168.200.33 Start Time: Mon, 19 Jun 2023 13:30:38 +0000 Labels: pod-name=client-a Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::5fd", "10.128.10.17" ], "mac": "ae:5e:e6:2c:9f:d4", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::5fd", "10.128.10.17" ], "mac": 
"ae:5e:e6:2c:9f:d4", "default": true, "dns": {} }] Status: Failed IP: 10.128.10.17 IPs: IP: 10.128.10.17 IP: fd00::5fd Containers: client: Container ID: cri-o://fcab7edb76fab13d82840d6a201fd5fa4e9136602ddc7d1c2fb4d205d6999135 Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: Host Port: Command: /bin/sh Args: -c for i in $(seq 1 5); do /agnhost connect 172.30.253.116:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1 State: Terminated Reason: Error Exit Code: 1 Started: Mon, 19 Jun 2023 13:30:39 +0000 Finished: Mon, 19 Jun 2023 13:31:24 +0000 Ready: False Restart Count: 0 Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-swr8z (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-swr8z: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 50s default-scheduler Successfully assigned e2e-network-policy-754/client-a-zq9s9 to worker03 by cp01 Normal AddedInterface 49s multus Add eth0 [fd00::5fd/128 10.128.10.17/32] from cilium Normal Pulled 49s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 49s kubelet Created container client Normal Started 49s kubelet Started container client Jun 19 13:31:28.803: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-754 logs client-a-zq9s9 --tail=100' Jun 19 13:31:28.952: INFO: stderr: "" Jun 19 13:31:28.952: INFO: stdout: "TIMEOUT\nTIMEOUT\nTIMEOUT\nTIMEOUT\nTIMEOUT\n" Jun 19 13:31:28.952: INFO: Last 100 log lines of client-a-zq9s9: TIMEOUT TIMEOUT TIMEOUT TIMEOUT TIMEOUT Jun 19 13:31:28.952: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-754 describe po pod-b-ctd6s' Jun 19 13:31:29.095: INFO: stderr: "" Jun 19 13:31:29.095: INFO: stdout: "Name: pod-b-ctd6s\nNamespace: e2e-network-policy-754\nPriority: 0\nService Account: default\nNode: worker03/192.168.200.33\nStart Time: Mon, 19 Jun 2023 13:29:42 +0000\nLabels: pod-name=pod-b\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::5da\",\n \"10.128.11.21\"\n ],\n \"mac\": \"4e:cb:b6:f8:55:6e\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::5da\",\n \"10.128.11.21\"\n ],\n \"mac\": \"4e:cb:b6:f8:55:6e\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Running\nIP: 10.128.11.21\nIPs:\n IP: 10.128.11.21\n IP: fd00::5da\nContainers:\n pod-b-container-80:\n Container ID: cri-o://e60f789faf04a1e185c963dbf7416bce6808e41e9319b8bb411ce56d427239f9\n Image: 
quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 80/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Mon, 19 Jun 2023 13:29:43 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_80: foo\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zwdmz (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-zwdmz:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 106s default-scheduler Successfully assigned e2e-network-policy-754/pod-b-ctd6s to worker03 by cp01\n Normal AddedInterface 106s multus Add eth0 [fd00::5da/128 10.128.11.21/32] from cilium\n Normal Pulled 106s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 106s kubelet Created container pod-b-container-80\n Normal Started 106s kubelet Started container pod-b-container-80\n" Jun 19 13:31:29.095: INFO: Output of kubectl describe pod-b-ctd6s: Name: pod-b-ctd6s Namespace: e2e-network-policy-754 Priority: 0 Service Account: default Node: worker03/192.168.200.33 Start Time: Mon, 19 Jun 2023 13:29:42 +0000 Labels: pod-name=pod-b Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::5da", "10.128.11.21" ], "mac": "4e:cb:b6:f8:55:6e", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::5da", "10.128.11.21" ], "mac": "4e:cb:b6:f8:55:6e", "default": true, "dns": {} }] Status: Running IP: 10.128.11.21 IPs: IP: 10.128.11.21 IP: fd00::5da Containers: pod-b-container-80: Container ID: cri-o://e60f789faf04a1e185c963dbf7416bce6808e41e9319b8bb411ce56d427239f9 Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 80/TCP Host Port: 0/TCP Args: porter State: Running Started: Mon, 19 Jun 2023 13:29:43 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: SERVE_PORT_80: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zwdmz (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-zwdmz: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt 
ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 106s default-scheduler Successfully assigned e2e-network-policy-754/pod-b-ctd6s to worker03 by cp01 Normal AddedInterface 106s multus Add eth0 [fd00::5da/128 10.128.11.21/32] from cilium Normal Pulled 106s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 106s kubelet Created container pod-b-container-80 Normal Started 106s kubelet Started container pod-b-container-80 Jun 19 13:31:29.095: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-754 logs pod-b-ctd6s --tail=100' Jun 19 13:31:29.244: INFO: stderr: "" Jun 19 13:31:29.244: INFO: stdout: "" Jun 19 13:31:29.244: INFO: Last 100 log lines of pod-b-ctd6s: Jun 19 13:31:29.244: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-754 describe po server-pllpz' Jun 19 13:31:29.409: INFO: stderr: "" Jun 19 13:31:29.409: INFO: stdout: "Name: server-pllpz\nNamespace: e2e-network-policy-754\nPriority: 0\nService Account: default\nNode: worker02/192.168.200.32\nStart Time: Mon, 19 Jun 2023 13:29:30 +0000\nLabels: pod-name=server\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::472\",\n \"10.128.9.23\"\n ],\n \"mac\": \"82:5f:b3:78:f1:ac\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::472\",\n \"10.128.9.23\"\n ],\n \"mac\": \"82:5f:b3:78:f1:ac\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Running\nIP: 10.128.9.23\nIPs:\n IP: 10.128.9.23\n IP: fd00::472\nContainers:\n server-container-80:\n Container ID: cri-o://2bfe08e765520741b356872e6dda160dc2923a838ca27a4b4aa8a549f28e39e8\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 80/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Mon, 19 Jun 2023 13:29:31 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_80: foo\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-26g22 (ro)\n server-container-81:\n Container ID: cri-o://bb855b74255b12ba3134b55e15fa8e613def26f573860941d3e8449e00dc8543\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 81/TCP\n Host Port: 0/TCP\n Args:\n porter\n State: Running\n Started: Mon, 19 Jun 2023 13:29:31 +0000\n Ready: True\n Restart Count: 0\n Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s #success=1 #failure=3\n Environment:\n SERVE_PORT_81: foo\n Mounts:\n 
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-26g22 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-26g22:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: \nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 119s default-scheduler Successfully assigned e2e-network-policy-754/server-pllpz to worker02 by cp01\n Normal AddedInterface 119s multus Add eth0 [fd00::472/128 10.128.9.23/32] from cilium\n Normal Pulled 119s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 118s kubelet Created container server-container-80\n Normal Started 118s kubelet Started container server-container-80\n Normal Pulled 118s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 118s kubelet Created container server-container-81\n Normal Started 118s kubelet Started container server-container-81\n" Jun 19 13:31:29.409: INFO: Output of kubectl describe server-pllpz: Name: server-pllpz Namespace: e2e-network-policy-754 Priority: 0 Service Account: default Node: worker02/192.168.200.32 Start Time: Mon, 19 Jun 2023 13:29:30 +0000 Labels: pod-name=server Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::472", "10.128.9.23" ], "mac": "82:5f:b3:78:f1:ac", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "cilium", "interface": "eth0", "ips": [ "fd00::472", "10.128.9.23" ], "mac": "82:5f:b3:78:f1:ac", "default": true, "dns": {} }] Status: Running IP: 10.128.9.23 IPs: IP: 10.128.9.23 IP: fd00::472 Containers: server-container-80: Container ID: cri-o://2bfe08e765520741b356872e6dda160dc2923a838ca27a4b4aa8a549f28e39e8 Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 80/TCP Host Port: 0/TCP Args: porter State: Running Started: Mon, 19 Jun 2023 13:29:31 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: SERVE_PORT_80: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-26g22 (ro) server-container-81: Container ID: cri-o://bb855b74255b12ba3134b55e15fa8e613def26f573860941d3e8449e00dc8543 Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5- Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e Port: 81/TCP Host Port: 0/TCP Args: porter State: Running Started: Mon, 19 Jun 2023 13:29:31 +0000 Ready: True Restart Count: 0 Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s 
period=10s #success=1 #failure=3 Environment: SERVE_PORT_81: foo Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-26g22 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-26g22: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 119s default-scheduler Successfully assigned e2e-network-policy-754/server-pllpz to worker02 by cp01 Normal AddedInterface 119s multus Add eth0 [fd00::472/128 10.128.9.23/32] from cilium Normal Pulled 119s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 118s kubelet Created container server-container-80 Normal Started 118s kubelet Started container server-container-80 Normal Pulled 118s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Normal Created 118s kubelet Created container server-container-81 Normal Started 118s kubelet Started container server-container-81 Jun 19 13:31:29.409: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-754 logs server-pllpz --tail=100' Jun 19 13:31:29.557: INFO: stderr: "Defaulted container \"server-container-80\" out of: server-container-80, server-container-81\n" Jun 19 13:31:29.557: INFO: stdout: "" Jun 19 13:31:29.557: INFO: Last 100 log lines of server-pllpz: Jun 19 13:31:29.583: FAIL: Pod client-a-zq9s9 should be able to connect to service svc-server, but was not able to connect. 
Pod logs: TIMEOUT TIMEOUT TIMEOUT TIMEOUT TIMEOUT Current NetworkPolicies: [{{ } {allow-client-a-via-cidr-egress-rule e2e-network-policy-754 f4f16437-77b1-4fae-ad0c-03bf43cb20eb 78151 1 2023-06-19 13:29:48 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-19 13:29:48 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.9.23/32,Except:[],}}]}] [Egress]} {[]}}] Pods: [Pod: client-a-zq9s9, Status: &PodStatus{Phase:Failed,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:30:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:31:25 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:31:25 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:30:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.10.17,StartTime:2023-06-19 13:30:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-06-19 13:30:39 +0000 UTC,FinishedAt:2023-06-19 13:31:24 +0000 UTC,ContainerID:cri-o://fcab7edb76fab13d82840d6a201fd5fa4e9136602ddc7d1c2fb4d205d6999135,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://fcab7edb76fab13d82840d6a201fd5fa4e9136602ddc7d1c2fb4d205d6999135,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.10.17,},PodIP{IP:fd00::5fd,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: pod-b-ctd6s, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.11.21,StartTime:2023-06-19 13:29:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:pod-b-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 13:29:43 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://e60f789faf04a1e185c963dbf7416bce6808e41e9319b8bb411ce56d427239f9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.11.21,},PodIP{IP:fd00::5da,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: server-pllpz, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.32,PodIP:10.128.9.23,StartTime:2023-06-19 13:29:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 13:29:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://2bfe08e765520741b356872e6dda160dc2923a838ca27a4b4aa8a549f28e39e8,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 13:29:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://bb855b74255b12ba3134b55e15fa8e613def26f573860941d3e8449e00dc8543,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.9.23,},PodIP{IP:fd00::472,},},EphemeralContainerStatuses:[]ContainerStatus{},} ] Full Stack Trace k8s.io/kubernetes/test/e2e/network/netpol.checkConnectivity(0xc001eb2780, 0xc001ccc580, 0xc006ef7200, 0xc0068a8780) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941 +0x355 k8s.io/kubernetes/test/e2e/network/netpol.testCanConnectProtocol(0xc001eb2780, 0xc001ccc580, {0x8a33123, 0x8}, 0xc0068a8780, 0xc001ed0ba0?, {0x8a24aec, 0x3}) k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1913 +0x1be k8s.io/kubernetes/test/e2e/network/netpol.testCanConnect(...) 
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1897 k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.27.4() k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1410 +0x47 github.com/onsi/ginkgo/v2.By({0x8c00310, 0x3d}, {0xc006627e50, 0x1, 0x0?}) github.com/onsi/ginkgo/v2@v2.4.0/core_dsl.go:535 +0x525 k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.27() k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1409 +0x8fc github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2e8b77e, 0xc000ab4c00}) github.com/onsi/ginkgo/v2@v2.4.0/internal/node.go:449 +0x1b github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2() github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:757 +0x98 created by github.com/onsi/ginkgo/v2/internal.(*Suite).runNode github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:745 +0xe3d STEP: Cleaning up the pod client-a-zq9s9 06/19/23 13:31:29.583 STEP: Cleaning up the policy. 06/19/23 13:31:29.609 STEP: Cleaning up the server. 06/19/23 13:31:29.623 STEP: Cleaning up the server's service. 06/19/23 13:31:29.639 [AfterEach] NetworkPolicy between server and client k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:96 STEP: Cleaning up the server. 06/19/23 13:31:29.695 STEP: Cleaning up the server's service. 06/19/23 13:31:29.713 [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:33 [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] dump namespaces | framework.go:196 STEP: dump namespace information after failure 06/19/23 13:31:29.77 STEP: Collecting events from namespace "e2e-network-policy-754". 06/19/23 13:31:29.77 STEP: Found 41 events. 06/19/23 13:31:29.781 Jun 19 13:31:29.781: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-grdn2: { } Scheduled: Successfully assigned e2e-network-policy-754/client-a-grdn2 to worker01 by cp01 Jun 19 13:31:29.781: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-lmxrn: { } Scheduled: Successfully assigned e2e-network-policy-754/client-a-lmxrn to worker01 by cp01 Jun 19 13:31:29.781: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-zq9s9: { } Scheduled: Successfully assigned e2e-network-policy-754/client-a-zq9s9 to worker03 by cp01 Jun 19 13:31:29.781: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-vsqpt: { } Scheduled: Successfully assigned e2e-network-policy-754/client-can-connect-80-vsqpt to worker03 by cp01 Jun 19 13:31:29.781: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-2n9qd: { } Scheduled: Successfully assigned e2e-network-policy-754/client-can-connect-81-2n9qd to worker03 by cp01 Jun 19 13:31:29.781: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-b-ctd6s: { } Scheduled: Successfully assigned e2e-network-policy-754/pod-b-ctd6s to worker03 by cp01 Jun 19 13:31:29.781: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-pllpz: { } Scheduled: Successfully assigned e2e-network-policy-754/server-pllpz to worker02 by cp01 Jun 19 13:31:29.781: INFO: At 2023-06-19 13:29:30 +0000 UTC - event for server-pllpz: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 13:31:29.781: INFO: At 2023-06-19 13:29:30 +0000 UTC - event for server-pllpz: {multus } AddedInterface: Add eth0 [fd00::472/128 10.128.9.23/32] from cilium Jun 19 
13:31:29.781: INFO: At 2023-06-19 13:29:31 +0000 UTC - event for server-pllpz: {kubelet worker02} Created: Created container server-container-80 Jun 19 13:31:29.781: INFO: At 2023-06-19 13:29:31 +0000 UTC - event for server-pllpz: {kubelet worker02} Started: Started container server-container-80 Jun 19 13:31:29.781: INFO: At 2023-06-19 13:29:31 +0000 UTC - event for server-pllpz: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 13:31:29.781: INFO: At 2023-06-19 13:29:31 +0000 UTC - event for server-pllpz: {kubelet worker02} Started: Started container server-container-81 Jun 19 13:31:29.781: INFO: At 2023-06-19 13:29:31 +0000 UTC - event for server-pllpz: {kubelet worker02} Created: Created container server-container-81 Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:33 +0000 UTC - event for client-can-connect-80-vsqpt: {kubelet worker03} Started: Started container client Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:33 +0000 UTC - event for client-can-connect-80-vsqpt: {multus } AddedInterface: Add eth0 [fd00::513/128 10.128.10.170/32] from cilium Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:33 +0000 UTC - event for client-can-connect-80-vsqpt: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:33 +0000 UTC - event for client-can-connect-80-vsqpt: {kubelet worker03} Created: Created container client Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:39 +0000 UTC - event for client-can-connect-81-2n9qd: {kubelet worker03} Created: Created container client Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:39 +0000 UTC - event for client-can-connect-81-2n9qd: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:39 +0000 UTC - event for client-can-connect-81-2n9qd: {kubelet worker03} Started: Started container client Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:39 +0000 UTC - event for client-can-connect-81-2n9qd: {multus } AddedInterface: Add eth0 [fd00::5c3/128 10.128.10.8/32] from cilium Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:43 +0000 UTC - event for pod-b-ctd6s: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:43 +0000 UTC - event for pod-b-ctd6s: {multus } AddedInterface: Add eth0 [fd00::5da/128 10.128.11.21/32] from cilium Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:43 +0000 UTC - event for pod-b-ctd6s: {kubelet worker03} Created: Created container pod-b-container-80 Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:43 +0000 UTC - event for pod-b-ctd6s: {kubelet worker03} Started: Started container pod-b-container-80 Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:45 +0000 UTC - event for client-a-lmxrn: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:45 +0000 UTC - event for client-a-lmxrn: {kubelet worker01} Started: Started container client Jun 19 
13:31:29.782: INFO: At 2023-06-19 13:29:45 +0000 UTC - event for client-a-lmxrn: {kubelet worker01} Created: Created container client Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:45 +0000 UTC - event for client-a-lmxrn: {multus } AddedInterface: Add eth0 [fd00::368/128 10.128.6.202/32] from cilium Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:49 +0000 UTC - event for client-a-grdn2: {kubelet worker01} Created: Created container client Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:49 +0000 UTC - event for client-a-grdn2: {multus } AddedInterface: Add eth0 [fd00::347/128 10.128.6.245/32] from cilium Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:49 +0000 UTC - event for client-a-grdn2: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 13:31:29.782: INFO: At 2023-06-19 13:29:49 +0000 UTC - event for client-a-grdn2: {kubelet worker01} Started: Started container client Jun 19 13:31:29.782: INFO: At 2023-06-19 13:30:39 +0000 UTC - event for client-a-zq9s9: {kubelet worker03} Started: Started container client Jun 19 13:31:29.782: INFO: At 2023-06-19 13:30:39 +0000 UTC - event for client-a-zq9s9: {kubelet worker03} Created: Created container client Jun 19 13:31:29.782: INFO: At 2023-06-19 13:30:39 +0000 UTC - event for client-a-zq9s9: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine Jun 19 13:31:29.782: INFO: At 2023-06-19 13:30:39 +0000 UTC - event for client-a-zq9s9: {multus } AddedInterface: Add eth0 [fd00::5fd/128 10.128.10.17/32] from cilium Jun 19 13:31:29.782: INFO: At 2023-06-19 13:31:29 +0000 UTC - event for pod-b-ctd6s: {kubelet worker03} Killing: Stopping container pod-b-container-80 Jun 19 13:31:29.782: INFO: At 2023-06-19 13:31:29 +0000 UTC - event for server-pllpz: {kubelet worker02} Killing: Stopping container server-container-80 Jun 19 13:31:29.782: INFO: At 2023-06-19 13:31:29 +0000 UTC - event for server-pllpz: {kubelet worker02} Killing: Stopping container server-container-81 Jun 19 13:31:29.790: INFO: POD NODE PHASE GRACE CONDITIONS Jun 19 13:31:29.791: INFO: pod-b-ctd6s worker03 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 13:29:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 13:29:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 13:29:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 13:29:42 +0000 UTC }] Jun 19 13:31:29.791: INFO: server-pllpz worker02 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 13:29:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 13:29:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 13:29:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-06-19 13:29:30 +0000 UTC }] Jun 19 13:31:29.791: INFO: Jun 19 13:31:29.803: INFO: skipping dumping cluster info - cluster too large [DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly] tear down framework | framework.go:193 STEP: Destroying namespace "e2e-network-policy-754" for this suite. 06/19/23 13:31:29.803 fail [k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941]: Jun 19 13:31:29.583: Pod client-a-zq9s9 should be able to connect to service svc-server, but was not able to connect. 
Pod logs: TIMEOUT TIMEOUT TIMEOUT TIMEOUT TIMEOUT Current NetworkPolicies: [{{ } {allow-client-a-via-cidr-egress-rule e2e-network-policy-754 f4f16437-77b1-4fae-ad0c-03bf43cb20eb 78151 1 2023-06-19 13:29:48 +0000 UTC map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-06-19 13:29:48 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.9.23/32,Except:[],}}]}] [Egress]} {[]}}] Pods: [Pod: client-a-zq9s9, Status: &PodStatus{Phase:Failed,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:30:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:31:25 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:31:25 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:30:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.10.17,StartTime:2023-06-19 13:30:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-06-19 13:30:39 +0000 UTC,FinishedAt:2023-06-19 13:31:24 +0000 UTC,ContainerID:cri-o://fcab7edb76fab13d82840d6a201fd5fa4e9136602ddc7d1c2fb4d205d6999135,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://fcab7edb76fab13d82840d6a201fd5fa4e9136602ddc7d1c2fb4d205d6999135,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.10.17,},PodIP{IP:fd00::5fd,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: pod-b-ctd6s, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.11.21,StartTime:2023-06-19 13:29:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:pod-b-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 13:29:43 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://e60f789faf04a1e185c963dbf7416bce6808e41e9319b8bb411ce56d427239f9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.11.21,},PodIP{IP:fd00::5da,},},EphemeralContainerStatuses:[]ContainerStatus{},} Pod: server-pllpz, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-06-19 13:29:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.32,PodIP:10.128.9.23,StartTime:2023-06-19 13:29:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 13:29:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://2bfe08e765520741b356872e6dda160dc2923a838ca27a4b4aa8a549f28e39e8,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-06-19 13:29:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://bb855b74255b12ba3134b55e15fa8e613def26f573860941d3e8449e00dc8543,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.9.23,},PodIP{IP:fd00::472,},},EphemeralContainerStatuses:[]ContainerStatus{},} ] Ginkgo exit error 1: exit with code 1 failed: (2m1s) 2023-06-19T13:31:29 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]" passed: (3m7s) 2023-06-19T13:31:33 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] 
[Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (3m40s) 2023-06-19T13:32:19 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 2/67/67 "[sig-network] Service endpoints latency should not be very high [Conformance] [Serial] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]"
passed: (12.3s) 2023-06-19T13:32:31 "[sig-network] Service endpoints latency should not be very high [Conformance] [Serial] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]"

Failing tests:

[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]

error: 2 fail, 65 pass, 0 skip (7m2s)
```
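For context on the two expected failures: both exercise IPBlock-based egress rules. The policy applied by the first test appears in the log above as a Go struct dump (`allow-client-a-via-cidr-egress-rule`); rendered as a manifest, it looks roughly like the sketch below, reconstructed from that dump rather than copied from the test source. It selects the `client-a` pod and only allows egress to the server pod's IPv4 address.

```yaml
# Sketch reconstructed from the NetworkPolicy dump in the log above;
# the e2e framework creates the actual object programmatically.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-a-via-cidr-egress-rule
  namespace: e2e-network-policy-754
spec:
  podSelector:
    matchLabels:
      pod-name: client-a        # only the client pod is selected
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.128.9.23/32    # the server pod's IPv4 address
```

The second failing test follows the same pattern, with an overlapping `IPBlock.Except` entry added, as its name indicates.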