thorn3r closed this pull request 1 year ago
@nathanjsweet Sorry, I should've mentioned here that there was another PR to address that: https://github.com/isovalent/olm-for-cilium/pull/2. That's merged now, so I'll rebase and push again.
The Build Images job is failing with:
Run docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a
with:
registry: quay.io
ecr: auto
logout: true
env:
PREFLIGHT_VERSION: 1.2.1
PFLT_DOCKERCONFIG: ~/.docker/config.json
VERSION: v1.11.17
Error: Username and password required
It's currently using:
with:
registry: quay.io
username: ${{ secrets.QUAY_ISOVALENT_DEV_USERNAME }}
password: ${{ secrets.QUAY_ISOVALENT_DEV_PASSWORD }}
I'm unable to view the secrets in this repo (or org) to confirm they're actually set.
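For reference, the complete login step should look roughly like this (a sketch only; the action ref and secret names are taken from the snippets above, and the secrets are assumed to exist at the repo or org level):

```yaml
# Sketch of the quay.io login step for the Build Images workflow.
# The secret names below are assumed to be configured in the repo/org;
# without them, docker/login-action fails with "Username and password required".
- name: Log in to quay.io
  uses: docker/login-action@f4ef78c080cd8ba55a85445d5b36e214a81df20a
  with:
    registry: quay.io
    username: ${{ secrets.QUAY_ISOVALENT_DEV_USERNAME }}
    password: ${{ secrets.QUAY_ISOVALENT_DEV_PASSWORD }}
```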
CI is passing now, going to merge.
Test results (I went gung-ho on the merge earlier):
started: 0/1/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/2/67 "[sig-network] Services should delete a collection of services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/3/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/4/67 "[sig-network] Services should serve multiport endpoints from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/5/67 "[sig-network] Services should complete a service status lifecycle [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/6/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/7/67 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/8/67 "[sig-network] Ingress API should support creating Ingress API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/9/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/10/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (2.3s) 2023-05-22T21:25:13 "[sig-network] Ingress API should support creating Ingress API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/11/67 "[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (2.4s) 2023-05-22T21:25:13 "[sig-network] Services should delete a collection of services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/12/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (2.4s) 2023-05-22T21:25:13 "[sig-network] Services should complete a service status lifecycle [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/13/67 "[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (10.4s) 2023-05-22T21:25:23 "[sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/14/67 "[sig-network] DNS should support configurable pod DNS nameservers [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (20.4s) 2023-05-22T21:25:31 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/15/67 "[sig-network] IngressClass API should support creating IngressClass API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (8.2s) 2023-05-22T21:25:31 "[sig-network] DNS should support configurable pod DNS nameservers [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/16/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1.4s) 2023-05-22T21:25:32 "[sig-network] IngressClass API should support creating IngressClass API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/17/67 "[sig-network] Services should provide secure master service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.3s) 2023-05-22T21:25:33 "[sig-network] Services should provide secure master service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/18/67 "[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (29.5s) 2023-05-22T21:25:40 "[sig-network] Services should serve multiport endpoints from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/19/67 "[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (7.5s) 2023-05-22T21:25:47 "[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/20/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (36.5s) 2023-05-22T21:26:10 "[sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/21/67 "[sig-network] NetworkPolicy API should support creating NetworkPolicy API operations [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (57.5s) 2023-05-22T21:26:10 "[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/22/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1.5s) 2023-05-22T21:26:11 "[sig-network] NetworkPolicy API should support creating NetworkPolicy API operations [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/23/67 "[sig-network] DNS should provide DNS for pods for Subdomain [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (18.4s) 2023-05-22T21:26:30 "[sig-network] DNS should provide DNS for pods for Subdomain [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/24/67 "[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1m22s) 2023-05-22T21:26:33 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/25/67 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (3.2s) 2023-05-22T21:26:33 "[sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/26/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m25s) 2023-05-22T21:26:36 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/27/67 "[sig-network] Services should find a service from listing all namespaces [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.3s) 2023-05-22T21:26:37 "[sig-network] Services should find a service from listing all namespaces [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/28/67 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1m28s) 2023-05-22T21:26:39 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/29/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m31s) 2023-05-22T21:26:41 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/30/67 "[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] [Serial:Self] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1m45s) 2023-05-22T21:26:55 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/31/67 "[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (16.7s) 2023-05-22T21:26:58 "[sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] [Serial:Self] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/32/67 "[sig-network] Services should be able to create a functioning NodePort service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (25.7s) 2023-05-22T21:26:58 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/33/67 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (25.6s) 2023-05-22T21:27:03 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/34/67 "[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (11.7s) 2023-05-22T21:27:07 "[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/35/67 "[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (10.5s) 2023-05-22T21:27:09 "[sig-network] Services should be able to create a functioning NodePort service [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/36/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m22s) 2023-05-22T21:27:10 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/37/67 "[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (8.1s) 2023-05-22T21:27:11 "[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/38/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (9.2s) 2023-05-22T21:27:16 "[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/39/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m10s) 2023-05-22T21:27:20 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/40/67 "[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (11.2s) 2023-05-22T21:27:21 "[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/41/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (25.9s) 2023-05-22T21:27:24 "[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/42/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (21.8s) 2023-05-22T21:27:32 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/43/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (2m20s) 2023-05-22T21:27:33 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 0/44/67 "[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (14.1s) 2023-05-22T21:27:34 "[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 0/45/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
Detailed output from the failing CIDR-egress test:
May 22 21:25:32.156: INFO: Enabling in-tree volume drivers
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/test.go:58
[BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly]
set up framework | framework.go:178
STEP: Creating a kubernetes client 05/22/23 21:25:32.971
STEP: Building a namespace api object, basename network-policy 05/22/23 21:25:32.974
May 22 21:25:33.038: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace 05/22/23 21:25:33.205
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/22/23 21:25:33.209
[BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly]
k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly]
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:72
[BeforeEach] NetworkPolicy between server and client
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:78
STEP: Creating a simple server that serves on port 80 and 81. 05/22/23 21:25:33.214
STEP: Creating a server pod server in namespace e2e-network-policy-8631 05/22/23 21:25:33.214
W0522 21:25:33.239526 720 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "server-container-80", "server-container-81" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "server-container-80", "server-container-81" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "server-container-80", "server-container-81" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "server-container-80", "server-container-81" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
May 22 21:25:33.239: INFO: Created pod server-rndvg
STEP: Creating a service svc-server for pod server in namespace e2e-network-policy-8631 05/22/23 21:25:33.239
May 22 21:25:33.277: INFO: Created service svc-server
STEP: Waiting for pod ready 05/22/23 21:25:33.277
May 22 21:25:33.277: INFO: Waiting up to 5m0s for pod "server-rndvg" in namespace "e2e-network-policy-8631" to be "running and ready"
May 22 21:25:33.293: INFO: Pod "server-rndvg": Phase="Pending", Reason="", readiness=false. Elapsed: 16.017441ms
May 22 21:25:33.293: INFO: The phase of Pod server-rndvg is Pending, waiting for it to be Running (with Ready = true)
May 22 21:25:35.308: INFO: Pod "server-rndvg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030623068s
May 22 21:25:35.308: INFO: The phase of Pod server-rndvg is Pending, waiting for it to be Running (with Ready = true)
May 22 21:25:37.301: INFO: Pod "server-rndvg": Phase="Running", Reason="", readiness=true. Elapsed: 4.023370162s
May 22 21:25:37.301: INFO: The phase of Pod server-rndvg is Running (Ready = true)
May 22 21:25:37.301: INFO: Pod "server-rndvg" satisfied condition "running and ready"
STEP: Testing pods can connect to both ports when no policy is present. 05/22/23 21:25:37.301
STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. 05/22/23 21:25:37.301
W0522 21:25:37.313080 720 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
May 22 21:25:37.313: INFO: Waiting for client-can-connect-80-vszz7 to complete.
May 22 21:25:37.313: INFO: Waiting up to 3m0s for pod "client-can-connect-80-vszz7" in namespace "e2e-network-policy-8631" to be "completed"
May 22 21:25:37.318: INFO: Pod "client-can-connect-80-vszz7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.390054ms
May 22 21:25:39.325: INFO: Pod "client-can-connect-80-vszz7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012731101s
May 22 21:25:41.328: INFO: Pod "client-can-connect-80-vszz7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014971526s
May 22 21:25:43.324: INFO: Pod "client-can-connect-80-vszz7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011291982s
May 22 21:25:43.324: INFO: Pod "client-can-connect-80-vszz7" satisfied condition "completed"
May 22 21:25:43.324: INFO: Waiting for client-can-connect-80-vszz7 to complete.
May 22 21:25:43.324: INFO: Waiting up to 5m0s for pod "client-can-connect-80-vszz7" in namespace "e2e-network-policy-8631" to be "Succeeded or Failed"
May 22 21:25:43.328: INFO: Pod "client-can-connect-80-vszz7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.167916ms
STEP: Saw pod success 05/22/23 21:25:43.328
May 22 21:25:43.328: INFO: Pod "client-can-connect-80-vszz7" satisfied condition "Succeeded or Failed"
STEP: Cleaning up the pod client-can-connect-80-vszz7 05/22/23 21:25:43.328
STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. 05/22/23 21:25:43.35
W0522 21:25:43.361791 720 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
May 22 21:25:43.361: INFO: Waiting for client-can-connect-81-dbg6z to complete.
May 22 21:25:43.361: INFO: Waiting up to 3m0s for pod "client-can-connect-81-dbg6z" in namespace "e2e-network-policy-8631" to be "completed"
May 22 21:25:43.367: INFO: Pod "client-can-connect-81-dbg6z": Phase="Pending", Reason="", readiness=false. Elapsed: 5.638065ms
May 22 21:25:45.371: INFO: Pod "client-can-connect-81-dbg6z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009442518s
May 22 21:25:47.373: INFO: Pod "client-can-connect-81-dbg6z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011413127s
May 22 21:25:47.373: INFO: Pod "client-can-connect-81-dbg6z" satisfied condition "completed"
May 22 21:25:47.373: INFO: Waiting for client-can-connect-81-dbg6z to complete.
May 22 21:25:47.373: INFO: Waiting up to 5m0s for pod "client-can-connect-81-dbg6z" in namespace "e2e-network-policy-8631" to be "Succeeded or Failed"
May 22 21:25:47.380: INFO: Pod "client-can-connect-81-dbg6z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.918312ms
STEP: Saw pod success 05/22/23 21:25:47.38
May 22 21:25:47.380: INFO: Pod "client-can-connect-81-dbg6z" satisfied condition "Succeeded or Failed"
STEP: Cleaning up the pod client-can-connect-81-dbg6z 05/22/23 21:25:47.38
[It] should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1343
STEP: Creating a server pod pod-b in namespace e2e-network-policy-8631 05/22/23 21:25:47.404
W0522 21:25:47.413988 720 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "pod-b-container-80" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "pod-b-container-80" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "pod-b-container-80" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "pod-b-container-80" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
May 22 21:25:47.414: INFO: Created pod pod-b-4dxfr
STEP: Creating a service svc-pod-b for pod pod-b in namespace e2e-network-policy-8631 05/22/23 21:25:47.414
May 22 21:25:47.456: INFO: Created service svc-pod-b
STEP: Waiting for pod-b to be ready 05/22/23 21:25:47.456
May 22 21:25:47.456: INFO: Waiting up to 5m0s for pod "pod-b-4dxfr" in namespace "e2e-network-policy-8631" to be "running and ready"
May 22 21:25:47.466: INFO: Pod "pod-b-4dxfr": Phase="Pending", Reason="", readiness=false. Elapsed: 9.37357ms
May 22 21:25:47.466: INFO: The phase of Pod pod-b-4dxfr is Pending, waiting for it to be Running (with Ready = true)
May 22 21:25:49.473: INFO: Pod "pod-b-4dxfr": Phase="Running", Reason="", readiness=true. Elapsed: 2.016600626s
May 22 21:25:49.473: INFO: The phase of Pod pod-b-4dxfr is Running (Ready = true)
May 22 21:25:49.473: INFO: Pod "pod-b-4dxfr" satisfied condition "running and ready"
May 22 21:25:49.473: INFO: Waiting up to 5m0s for pod "pod-b-4dxfr" in namespace "e2e-network-policy-8631" to be "running"
May 22 21:25:49.480: INFO: Pod "pod-b-4dxfr": Phase="Running", Reason="", readiness=true. Elapsed: 6.67851ms
May 22 21:25:49.480: INFO: Pod "pod-b-4dxfr" satisfied condition "running"
STEP: Creating client-a which should be able to contact the server-b. 05/22/23 21:25:49.48
STEP: Creating client pod client-a that should successfully connect to svc-pod-b. 05/22/23 21:25:49.48
W0522 21:25:49.491232 720 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
May 22 21:25:49.491: INFO: Waiting for client-a-hhtwp to complete.
May 22 21:25:49.491: INFO: Waiting up to 3m0s for pod "client-a-hhtwp" in namespace "e2e-network-policy-8631" to be "completed"
May 22 21:25:49.496: INFO: Pod "client-a-hhtwp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.750439ms
May 22 21:25:51.501: INFO: Pod "client-a-hhtwp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010407089s
May 22 21:25:53.502: INFO: Pod "client-a-hhtwp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011433976s
May 22 21:25:55.503: INFO: Pod "client-a-hhtwp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012075234s
May 22 21:25:55.503: INFO: Pod "client-a-hhtwp" satisfied condition "completed"
May 22 21:25:55.503: INFO: Waiting for client-a-hhtwp to complete.
May 22 21:25:55.503: INFO: Waiting up to 5m0s for pod "client-a-hhtwp" in namespace "e2e-network-policy-8631" to be "Succeeded or Failed"
May 22 21:25:55.506: INFO: Pod "client-a-hhtwp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.345549ms
STEP: Saw pod success 05/22/23 21:25:55.506
May 22 21:25:55.506: INFO: Pod "client-a-hhtwp" satisfied condition "Succeeded or Failed"
STEP: Cleaning up the pod client-a-hhtwp 05/22/23 21:25:55.506
STEP: Creating client-a which should not be able to contact the server-b. 05/22/23 21:25:55.533
STEP: Creating client pod client-a that should not be able to connect to svc-pod-b. 05/22/23 21:25:55.533
W0522 21:25:55.541640 720 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
May 22 21:25:55.541: INFO: Waiting for client-a-qvxvh to complete.
May 22 21:25:55.541: INFO: Waiting up to 5m0s for pod "client-a-qvxvh" in namespace "e2e-network-policy-8631" to be "Succeeded or Failed"
May 22 21:25:55.548: INFO: Pod "client-a-qvxvh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.552757ms
May 22 21:25:57.750: INFO: Pod "client-a-qvxvh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209037209s
May 22 21:25:59.554: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 4.012924272s
May 22 21:26:01.555: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 6.013502691s
May 22 21:26:03.554: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 8.012273897s
May 22 21:26:05.555: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 10.014182738s
May 22 21:26:07.553: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 12.011869567s
May 22 21:26:09.555: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 14.01341663s
May 22 21:26:11.557: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 16.015697497s
May 22 21:26:13.557: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 18.01552096s
May 22 21:26:15.555: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 20.014120244s
May 22 21:26:17.553: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 22.011578228s
May 22 21:26:19.555: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 24.013860435s
May 22 21:26:21.554: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 26.012361909s
May 22 21:26:23.556: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 28.014337445s
May 22 21:26:25.556: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 30.014543474s
May 22 21:26:27.554: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 32.012304227s
May 22 21:26:29.555: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 34.01360378s
May 22 21:26:31.555: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 36.013689182s
May 22 21:26:33.553: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 38.011363901s
May 22 21:26:35.556: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 40.014839263s
May 22 21:26:37.554: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 42.012552606s
May 22 21:26:39.554: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 44.012396782s
May 22 21:26:41.554: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 46.012886622s
May 22 21:26:43.554: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=true. Elapsed: 48.012812703s
May 22 21:26:45.561: INFO: Pod "client-a-qvxvh": Phase="Running", Reason="", readiness=false. Elapsed: 50.020205867s
May 22 21:26:47.553: INFO: Pod "client-a-qvxvh": Phase="Failed", Reason="", readiness=false. Elapsed: 52.011414977s
STEP: Cleaning up the pod client-a-qvxvh 05/22/23 21:26:47.553
STEP: Creating client-a which should be able to contact the server. 05/22/23 21:26:47.572
STEP: Creating client pod client-a that should successfully connect to svc-server. 05/22/23 21:26:47.572
W0522 21:26:47.587732 720 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
May 22 21:26:47.587: INFO: Waiting for client-a-dgvqb to complete.
May 22 21:26:47.587: INFO: Waiting up to 3m0s for pod "client-a-dgvqb" in namespace "e2e-network-policy-8631" to be "completed"
May 22 21:26:47.591: INFO: Pod "client-a-dgvqb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.489712ms
May 22 21:26:49.596: INFO: Pod "client-a-dgvqb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008878635s
May 22 21:26:51.597: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 4.009532213s
May 22 21:26:53.597: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 6.009214737s
May 22 21:26:55.605: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 8.017577542s
May 22 21:26:57.596: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 10.008627225s
May 22 21:26:59.598: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 12.01062849s
May 22 21:27:01.598: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 14.010408545s
May 22 21:27:03.596: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 16.008444657s
May 22 21:27:05.598: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 18.010696222s
May 22 21:27:07.596: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 20.008439181s
May 22 21:27:09.596: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 22.008835726s
May 22 21:27:11.601: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 24.01322804s
May 22 21:27:13.597: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 26.01001849s
May 22 21:27:15.599: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 28.011390231s
May 22 21:27:17.596: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 30.008397543s
May 22 21:27:19.596: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 32.008831027s
May 22 21:27:21.606: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 34.018256309s
May 22 21:27:23.598: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 36.011037758s
May 22 21:27:25.596: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 38.008836293s
May 22 21:27:27.601: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 40.013510561s
May 22 21:27:29.597: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 42.009507256s
May 22 21:27:31.598: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 44.010596862s
May 22 21:27:33.595: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=true. Elapsed: 46.007894928s
May 22 21:27:35.600: INFO: Pod "client-a-dgvqb": Phase="Running", Reason="", readiness=false. Elapsed: 48.012139944s
May 22 21:27:37.596: INFO: Pod "client-a-dgvqb": Phase="Failed", Reason="", readiness=false. Elapsed: 50.00888935s
May 22 21:27:37.596: INFO: Pod "client-a-dgvqb" satisfied condition "completed"
May 22 21:27:37.596: INFO: Waiting for client-a-dgvqb to complete.
May 22 21:27:37.596: INFO: Waiting up to 5m0s for pod "client-a-dgvqb" in namespace "e2e-network-policy-8631" to be "Succeeded or Failed"
May 22 21:27:37.605: INFO: Pod "client-a-dgvqb": Phase="Failed", Reason="", readiness=false. Elapsed: 8.211321ms
May 22 21:27:37.613: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-8631 describe po client-a-dgvqb'
May 22 21:27:37.774: INFO: stderr: ""
May 22 21:27:37.774: INFO:
Output of kubectl describe client-a-dgvqb:
Name: client-a-dgvqb
Namespace: e2e-network-policy-8631
Priority: 0
Service Account: default
Node: worker02/192.168.200.32
Start Time: Mon, 22 May 2023 21:26:47 +0000
Labels: pod-name=client-a
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "cilium",
"interface": "eth0",
"ips": [
"fd00::4c6",
"10.128.9.185"
],
"mac": "76:aa:e8:ab:18:f7",
"default": true,
"dns": {}
}]
k8s.v1.cni.cncf.io/networks-status:
[{
"name": "cilium",
"interface": "eth0",
"ips": [
"fd00::4c6",
"10.128.9.185"
],
"mac": "76:aa:e8:ab:18:f7",
"default": true,
"dns": {}
}]
Status: Failed
IP: 10.128.9.185
IPs:
IP: 10.128.9.185
IP: fd00::4c6
Containers:
client:
Container ID: cri-o://cc8f058d207e6324f863ce81be40641761f842adb7cf1751c140c54c776cad91
Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-
Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e
Port: <none>
Host Port: <none>
Command:
/bin/sh
Args:
-c
for i in $(seq 1 5); do /agnhost connect 172.30.246.34:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1
State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 22 May 2023 21:26:48 +0000
Finished: Mon, 22 May 2023 21:27:33 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f4f7n (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-f4f7n:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
ConfigMapName: openshift-service-ca.crt
ConfigMapOptional: <nil>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 50s default-scheduler Successfully assigned e2e-network-policy-8631/client-a-dgvqb to worker02 by cp01
Normal AddedInterface 49s multus Add eth0 [fd00::4c6/128 10.128.9.185/32] from cilium
Normal Pulled 49s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Normal Created 49s kubelet Created container client
Normal Started 49s kubelet Started container client
May 22 21:27:37.774: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-8631 logs client-a-dgvqb --tail=100'
May 22 21:27:37.951: INFO: stderr: ""
May 22 21:27:37.951: INFO:
Last 100 log lines of client-a-dgvqb:
TIMEOUT
TIMEOUT
TIMEOUT
TIMEOUT
TIMEOUT
May 22 21:27:37.951: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-8631 describe po pod-b-4dxfr'
May 22 21:27:38.097: INFO: stderr: ""
May 22 21:27:38.097: INFO:
Output of kubectl describe pod-b-4dxfr:
Name: pod-b-4dxfr
Namespace: e2e-network-policy-8631
Priority: 0
Service Account: default
Node: worker01/192.168.200.31
Start Time: Mon, 22 May 2023 21:25:47 +0000
Labels: pod-name=pod-b
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "cilium",
"interface": "eth0",
"ips": [
"fd00::5d1",
"10.128.10.65"
],
"mac": "ee:0c:7e:bd:3b:af",
"default": true,
"dns": {}
}]
k8s.v1.cni.cncf.io/networks-status:
[{
"name": "cilium",
"interface": "eth0",
"ips": [
"fd00::5d1",
"10.128.10.65"
],
"mac": "ee:0c:7e:bd:3b:af",
"default": true,
"dns": {}
}]
Status: Running
IP: 10.128.10.65
IPs:
IP: 10.128.10.65
IP: fd00::5d1
Containers:
pod-b-container-80:
Container ID: cri-o://a372c991b80899a4cecc525162e2ce71eb9b3d804e7538ad66d6617a3fcb8d44
Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-
Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e
Port: 80/TCP
Host Port: 0/TCP
Args:
porter
State: Running
Started: Mon, 22 May 2023 21:25:48 +0000
Ready: True
Restart Count: 0
Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SERVE_PORT_80: foo
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dmxx4 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-dmxx4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
ConfigMapName: openshift-service-ca.crt
ConfigMapOptional: <nil>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 110s default-scheduler Successfully assigned e2e-network-policy-8631/pod-b-4dxfr to worker01 by cp01
Normal AddedInterface 110s multus Add eth0 [fd00::5d1/128 10.128.10.65/32] from cilium
Normal Pulled 110s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Normal Created 110s kubelet Created container pod-b-container-80
Normal Started 110s kubelet Started container pod-b-container-80
May 22 21:27:38.097: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-8631 logs pod-b-4dxfr --tail=100'
May 22 21:27:38.250: INFO: stderr: ""
May 22 21:27:38.250: INFO: stdout: ""
May 22 21:27:38.250: INFO:
Last 100 log lines of pod-b-4dxfr:
May 22 21:27:38.250: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-8631 describe po server-rndvg'
May 22 21:27:38.412: INFO: stderr: ""
May 22 21:27:38.412: INFO:
Output of kubectl describe server-rndvg:
Name: server-rndvg
Namespace: e2e-network-policy-8631
Priority: 0
Service Account: default
Node: worker03/192.168.200.33
Start Time: Mon, 22 May 2023 21:25:33 +0000
Labels: pod-name=server
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "cilium",
"interface": "eth0",
"ips": [
"fd00::39f",
"10.128.6.150"
],
"mac": "f6:a3:97:a9:8a:66",
"default": true,
"dns": {}
}]
k8s.v1.cni.cncf.io/networks-status:
[{
"name": "cilium",
"interface": "eth0",
"ips": [
"fd00::39f",
"10.128.6.150"
],
"mac": "f6:a3:97:a9:8a:66",
"default": true,
"dns": {}
}]
Status: Running
IP: 10.128.6.150
IPs:
IP: 10.128.6.150
IP: fd00::39f
Containers:
server-container-80:
Container ID: cri-o://c65040d90898239c18ad39a245282a188478288afc36e5be821207d617b6ad90
Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-
Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e
Port: 80/TCP
Host Port: 0/TCP
Args:
porter
State: Running
Started: Mon, 22 May 2023 21:25:34 +0000
Ready: True
Restart Count: 0
Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SERVE_PORT_80: foo
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5v56f (ro)
server-container-81:
Container ID: cri-o://d28454caaa5231a2a93bdbb662958682280f148bf59635926355eb4e1ca27a3c
Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-
Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e
Port: 81/TCP
Host Port: 0/TCP
Args:
porter
State: Running
Started: Mon, 22 May 2023 21:25:35 +0000
Ready: True
Restart Count: 0
Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SERVE_PORT_81: foo
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5v56f (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-5v56f:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
ConfigMapName: openshift-service-ca.crt
ConfigMapOptional: <nil>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m5s default-scheduler Successfully assigned e2e-network-policy-8631/server-rndvg to worker03 by cp01
Normal AddedInterface 2m4s multus Add eth0 [fd00::39f/128 10.128.6.150/32] from cilium
Normal Pulled 2m4s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Normal Created 2m4s kubelet Created container server-container-80
Normal Started 2m4s kubelet Started container server-container-80
Normal Pulled 2m4s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Normal Created 2m3s kubelet Created container server-container-81
Normal Started 2m3s kubelet Started container server-container-81
May 22 21:27:38.412: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-8631 logs server-rndvg --tail=100'
May 22 21:27:38.584: INFO: stderr: "Defaulted container \"server-container-80\" out of: server-container-80, server-container-81\n"
May 22 21:27:38.584: INFO: stdout: ""
May 22 21:27:38.584: INFO:
Last 100 log lines of server-rndvg:
May 22 21:27:38.605: FAIL: Pod client-a-dgvqb should be able to connect to service svc-server, but was not able to connect.
Pod logs:
TIMEOUT
TIMEOUT
TIMEOUT
TIMEOUT
TIMEOUT
Current NetworkPolicies:
[{{ } {allow-client-a-via-cidr-egress-rule e2e-network-policy-8631 3590de1d-2e1e-49c9-b990-6c58839551c1 66829 1 2023-05-22 21:25:55 +0000 UTC <nil> <nil> map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-05-22 21:25:55 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.6.150/32,Except:[],}}]}] [Egress]} {[]}}]
Pods:
[Pod: client-a-dgvqb, Status: &PodStatus{Phase:Failed,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:26:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:27:34 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:27:34 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:26:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.32,PodIP:10.128.9.185,StartTime:2023-05-22 21:26:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-05-22 21:26:48 +0000 UTC,FinishedAt:2023-05-22 21:27:33 +0000 UTC,ContainerID:cri-o://cc8f058d207e6324f863ce81be40641761f842adb7cf1751c140c54c776cad91,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://cc8f058d207e6324f863ce81be40641761f842adb7cf1751c140c54c776cad91,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.9.185,},PodIP{IP:fd00::4c6,},},EphemeralContainerStatuses:[]ContainerStatus{},}
Pod: pod-b-4dxfr, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:25:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:25:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:25:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:25:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.31,PodIP:10.128.10.65,StartTime:2023-05-22 21:25:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:pod-b-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-05-22 21:25:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://a372c991b80899a4cecc525162e2ce71eb9b3d804e7538ad66d6617a3fcb8d44,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.10.65,},PodIP{IP:fd00::5d1,},},EphemeralContainerStatuses:[]ContainerStatus{},}
Pod: server-rndvg, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:25:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:25:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:25:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:25:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.33,PodIP:10.128.6.150,StartTime:2023-05-22 21:25:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-05-22 21:25:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://c65040d90898239c18ad39a245282a188478288afc36e5be821207d617b6ad90,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-05-22 21:25:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://d28454caaa5231a2a93bdbb662958682280f148bf59635926355eb4e1ca27a3c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.6.150,},PodIP{IP:fd00::39f,},},EphemeralContainerStatuses:[]ContainerStatus{},}
]
Full Stack Trace
k8s.io/kubernetes/test/e2e/network/netpol.checkConnectivity(0xc00125ba40, 0xc0017a6580, 0xc007d3a000, 0xc007b36c80)
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941 +0x355
k8s.io/kubernetes/test/e2e/network/netpol.testCanConnectProtocol(0xc00125ba40, 0xc0017a6580, {0x8a31d3a, 0x8}, 0xc007b36c80, 0xc0024859d0?, {0x8a2370a, 0x3})
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1913 +0x1be
k8s.io/kubernetes/test/e2e/network/netpol.testCanConnect(...)
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1897
k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.27.4()
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1410 +0x47
github.com/onsi/ginkgo/v2.By({0x8bfee33, 0x3d}, {0xc000fd1e50, 0x1, 0x0?})
github.com/onsi/ginkgo/v2@v2.4.0/core_dsl.go:535 +0x525
k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.27()
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1409 +0x8fc
github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2e8acde, 0xc00113f980})
github.com/onsi/ginkgo/v2@v2.4.0/internal/node.go:449 +0x1b
github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:757 +0x98
created by github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:745 +0xe3d
STEP: Cleaning up the pod client-a-dgvqb 05/22/23 21:27:38.605
STEP: Cleaning up the policy. 05/22/23 21:27:38.626
STEP: Cleaning up the server. 05/22/23 21:27:38.638
STEP: Cleaning up the server's service. 05/22/23 21:27:38.651
[AfterEach] NetworkPolicy between server and client
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:96
STEP: Cleaning up the server. 05/22/23 21:27:38.71
STEP: Cleaning up the server's service. 05/22/23 21:27:38.735
[DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly]
k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly]
dump namespaces | framework.go:196
STEP: dump namespace information after failure 05/22/23 21:27:38.796
STEP: Collecting events from namespace "e2e-network-policy-8631". 05/22/23 21:27:38.796
STEP: Found 41 events. 05/22/23 21:27:38.803
May 22 21:27:38.803: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-dgvqb: { } Scheduled: Successfully assigned e2e-network-policy-8631/client-a-dgvqb to worker02 by cp01
May 22 21:27:38.803: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-hhtwp: { } Scheduled: Successfully assigned e2e-network-policy-8631/client-a-hhtwp to worker03 by cp01
May 22 21:27:38.803: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-qvxvh: { } Scheduled: Successfully assigned e2e-network-policy-8631/client-a-qvxvh to worker02 by cp01
May 22 21:27:38.803: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-vszz7: { } Scheduled: Successfully assigned e2e-network-policy-8631/client-can-connect-80-vszz7 to worker02 by cp01
May 22 21:27:38.803: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-dbg6z: { } Scheduled: Successfully assigned e2e-network-policy-8631/client-can-connect-81-dbg6z to worker01 by cp01
May 22 21:27:38.803: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-b-4dxfr: { } Scheduled: Successfully assigned e2e-network-policy-8631/pod-b-4dxfr to worker01 by cp01
May 22 21:27:38.803: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-rndvg: { } Scheduled: Successfully assigned e2e-network-policy-8631/server-rndvg to worker03 by cp01
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:34 +0000 UTC - event for server-rndvg: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:34 +0000 UTC - event for server-rndvg: {multus } AddedInterface: Add eth0 [fd00::39f/128 10.128.6.150/32] from cilium
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:34 +0000 UTC - event for server-rndvg: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:34 +0000 UTC - event for server-rndvg: {kubelet worker03} Created: Created container server-container-80
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:34 +0000 UTC - event for server-rndvg: {kubelet worker03} Started: Started container server-container-80
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:35 +0000 UTC - event for server-rndvg: {kubelet worker03} Started: Started container server-container-81
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:35 +0000 UTC - event for server-rndvg: {kubelet worker03} Created: Created container server-container-81
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:38 +0000 UTC - event for client-can-connect-80-vszz7: {kubelet worker02} Created: Created container client
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:38 +0000 UTC - event for client-can-connect-80-vszz7: {kubelet worker02} Started: Started container client
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:38 +0000 UTC - event for client-can-connect-80-vszz7: {multus } AddedInterface: Add eth0 [fd00::411/128 10.128.9.65/32] from cilium
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:38 +0000 UTC - event for client-can-connect-80-vszz7: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:44 +0000 UTC - event for client-can-connect-81-dbg6z: {multus } AddedInterface: Add eth0 [fd00::5ed/128 10.128.10.79/32] from cilium
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:44 +0000 UTC - event for client-can-connect-81-dbg6z: {kubelet worker01} Created: Created container client
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:44 +0000 UTC - event for client-can-connect-81-dbg6z: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:44 +0000 UTC - event for client-can-connect-81-dbg6z: {kubelet worker01} Started: Started container client
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:48 +0000 UTC - event for pod-b-4dxfr: {multus } AddedInterface: Add eth0 [fd00::5d1/128 10.128.10.65/32] from cilium
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:48 +0000 UTC - event for pod-b-4dxfr: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:48 +0000 UTC - event for pod-b-4dxfr: {kubelet worker01} Created: Created container pod-b-container-80
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:48 +0000 UTC - event for pod-b-4dxfr: {kubelet worker01} Started: Started container pod-b-container-80
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:50 +0000 UTC - event for client-a-hhtwp: {kubelet worker03} Started: Started container client
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:50 +0000 UTC - event for client-a-hhtwp: {kubelet worker03} Created: Created container client
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:50 +0000 UTC - event for client-a-hhtwp: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:50 +0000 UTC - event for client-a-hhtwp: {multus } AddedInterface: Add eth0 [fd00::386/128 10.128.6.222/32] from cilium
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:57 +0000 UTC - event for client-a-qvxvh: {kubelet worker02} Started: Started container client
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:57 +0000 UTC - event for client-a-qvxvh: {kubelet worker02} Created: Created container client
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:57 +0000 UTC - event for client-a-qvxvh: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:27:38.803: INFO: At 2023-05-22 21:25:57 +0000 UTC - event for client-a-qvxvh: {multus } AddedInterface: Add eth0 [fd00::4f8/128 10.128.8.90/32] from cilium
May 22 21:27:38.803: INFO: At 2023-05-22 21:26:48 +0000 UTC - event for client-a-dgvqb: {multus } AddedInterface: Add eth0 [fd00::4c6/128 10.128.9.185/32] from cilium
May 22 21:27:38.803: INFO: At 2023-05-22 21:26:48 +0000 UTC - event for client-a-dgvqb: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:27:38.803: INFO: At 2023-05-22 21:26:48 +0000 UTC - event for client-a-dgvqb: {kubelet worker02} Started: Started container client
May 22 21:27:38.803: INFO: At 2023-05-22 21:26:48 +0000 UTC - event for client-a-dgvqb: {kubelet worker02} Created: Created container client
May 22 21:27:38.803: INFO: At 2023-05-22 21:27:38 +0000 UTC - event for pod-b-4dxfr: {kubelet worker01} Killing: Stopping container pod-b-container-80
May 22 21:27:38.803: INFO: At 2023-05-22 21:27:38 +0000 UTC - event for server-rndvg: {kubelet worker03} Killing: Stopping container server-container-80
May 22 21:27:38.803: INFO: At 2023-05-22 21:27:38 +0000 UTC - event for server-rndvg: {kubelet worker03} Killing: Stopping container server-container-81
May 22 21:27:38.811: INFO: POD NODE PHASE GRACE CONDITIONS
May 22 21:27:38.811: INFO: pod-b-4dxfr worker01 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-05-22 21:25:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-05-22 21:25:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-05-22 21:25:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-05-22 21:25:47 +0000 UTC }]
May 22 21:27:38.811: INFO: server-rndvg worker03 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-05-22 21:25:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-05-22 21:25:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-05-22 21:25:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-05-22 21:25:33 +0000 UTC }]
May 22 21:27:38.811: INFO:
May 22 21:27:38.824: INFO: skipping dumping cluster info - cluster too large
[DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly]
tear down framework | framework.go:193
STEP: Destroying namespace "e2e-network-policy-8631" for this suite. 05/22/23 21:27:38.824
fail [k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941]: May 22 21:27:38.605: Pod client-a-dgvqb should be able to connect to service svc-server, but was not able to connect.
Ginkgo exit error 1: exit with code 1
failed: (2m7s) 2023-05-22T21:27:38 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
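(Editorial aside: for readability, here is the failing test's NetworkPolicy reconstructed as a YAML manifest from the Go struct dump in the failure output above. This is a best-effort sketch: selector, CIDR, and policyTypes are taken from the dump; everything else is standard boilerplate.)

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-a-via-cidr-egress-rule
  namespace: e2e-network-policy-8631
spec:
  podSelector:
    matchLabels:
      pod-name: client-a      # map[pod-name:client-a] in the dump
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.128.6.150/32  # the server pod's IPv4 address, per the describe output

The five TIMEOUT lines show the client never reached the server despite this egress allow to the server's IPv4 /32. Note the pods are dual-stack (the server also has fd00::39f); the log alone does not show which address family the client used.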
started: 1/46/67 "[sig-network] Services should serve a basic endpoint from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
May 22 21:27:16.750: INFO: Enabling in-tree volume drivers
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/test.go:58
[BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly]
set up framework | framework.go:178
STEP: Creating a kubernetes client 05/22/23 21:27:17.614
STEP: Building a namespace api object, basename network-policy 05/22/23 21:27:17.616
May 22 21:27:17.676: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace 05/22/23 21:27:17.842
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/22/23 21:27:17.848
[BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly]
k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly]
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:72
[BeforeEach] NetworkPolicy between server and client
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:78
STEP: Creating a simple server that serves on port 80 and 81. 05/22/23 21:27:17.853
STEP: Creating a server pod server in namespace e2e-network-policy-7849 05/22/23 21:27:17.853
W0522 21:27:17.870548 2349 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "server-container-80", "server-container-81" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "server-container-80", "server-container-81" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "server-container-80", "server-container-81" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "server-container-80", "server-container-81" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
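(Editorial aside: the PodSecurity warning above enumerates exactly the securityContext fields a pod would need to satisfy the restricted:latest profile. The e2e framework instead relies on the namespace being made privileged, per the "ensuring namespace is privileged" line above, so the warning is informational. As a sketch only, not part of the test's actual pod spec, a conforming spec would include:)

spec:
  securityContext:
    runAsNonRoot: true            # runAsNonRoot != true in the warning
    seccompProfile:
      type: RuntimeDefault        # or Localhost, per the warning
  containers:
  - name: server-container-80     # same fields apply to server-container-81
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]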
May 22 21:27:17.870: INFO: Created pod server-wd8cf
STEP: Creating a service svc-server for pod server in namespace e2e-network-policy-7849 05/22/23 21:27:17.87
May 22 21:27:17.906: INFO: Created service svc-server
STEP: Waiting for pod ready 05/22/23 21:27:17.906
May 22 21:27:17.907: INFO: Waiting up to 5m0s for pod "server-wd8cf" in namespace "e2e-network-policy-7849" to be "running and ready"
May 22 21:27:17.912: INFO: Pod "server-wd8cf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.808679ms
May 22 21:27:17.912: INFO: The phase of Pod server-wd8cf is Pending, waiting for it to be Running (with Ready = true)
May 22 21:27:19.919: INFO: Pod "server-wd8cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012185546s
May 22 21:27:19.919: INFO: The phase of Pod server-wd8cf is Pending, waiting for it to be Running (with Ready = true)
May 22 21:27:21.933: INFO: Pod "server-wd8cf": Phase="Running", Reason="", readiness=true. Elapsed: 4.026058446s
May 22 21:27:21.933: INFO: The phase of Pod server-wd8cf is Running (Ready = true)
May 22 21:27:21.933: INFO: Pod "server-wd8cf" satisfied condition "running and ready"
STEP: Testing pods can connect to both ports when no policy is present. 05/22/23 21:27:21.933
STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. 05/22/23 21:27:21.933
W0522 21:27:21.953130 2349 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
May 22 21:27:21.953: INFO: Waiting for client-can-connect-80-f7cth to complete.
May 22 21:27:21.953: INFO: Waiting up to 3m0s for pod "client-can-connect-80-f7cth" in namespace "e2e-network-policy-7849" to be "completed"
May 22 21:27:21.959: INFO: Pod "client-can-connect-80-f7cth": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122807ms
May 22 21:27:23.966: INFO: Pod "client-can-connect-80-f7cth": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013191685s
May 22 21:27:25.972: INFO: Pod "client-can-connect-80-f7cth": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019401715s
May 22 21:27:27.965: INFO: Pod "client-can-connect-80-f7cth": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012504148s
May 22 21:27:27.965: INFO: Pod "client-can-connect-80-f7cth" satisfied condition "completed"
May 22 21:27:27.965: INFO: Waiting for client-can-connect-80-f7cth to complete.
May 22 21:27:27.965: INFO: Waiting up to 5m0s for pod "client-can-connect-80-f7cth" in namespace "e2e-network-policy-7849" to be "Succeeded or Failed"
May 22 21:27:27.972: INFO: Pod "client-can-connect-80-f7cth": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.407895ms
STEP: Saw pod success 05/22/23 21:27:27.972
May 22 21:27:27.972: INFO: Pod "client-can-connect-80-f7cth" satisfied condition "Succeeded or Failed"
STEP: Cleaning up the pod client-can-connect-80-f7cth 05/22/23 21:27:27.972
STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. 05/22/23 21:27:28.002
W0522 21:27:28.015260 2349 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
May 22 21:27:28.015: INFO: Waiting for client-can-connect-81-dlstt to complete.
May 22 21:27:28.015: INFO: Waiting up to 3m0s for pod "client-can-connect-81-dlstt" in namespace "e2e-network-policy-7849" to be "completed"
May 22 21:27:28.020: INFO: Pod "client-can-connect-81-dlstt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.816576ms
May 22 21:27:30.025: INFO: Pod "client-can-connect-81-dlstt": Phase="Running", Reason="", readiness=true. Elapsed: 2.009839253s
May 22 21:27:32.026: INFO: Pod "client-can-connect-81-dlstt": Phase="Running", Reason="", readiness=false. Elapsed: 4.01119166s
May 22 21:27:34.034: INFO: Pod "client-can-connect-81-dlstt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01886987s
May 22 21:27:34.034: INFO: Pod "client-can-connect-81-dlstt" satisfied condition "completed"
May 22 21:27:34.034: INFO: Waiting for client-can-connect-81-dlstt to complete.
May 22 21:27:34.034: INFO: Waiting up to 5m0s for pod "client-can-connect-81-dlstt" in namespace "e2e-network-policy-7849" to be "Succeeded or Failed"
May 22 21:27:34.047: INFO: Pod "client-can-connect-81-dlstt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.145741ms
STEP: Saw pod success 05/22/23 21:27:34.047
May 22 21:27:34.047: INFO: Pod "client-can-connect-81-dlstt" satisfied condition "Succeeded or Failed"
STEP: Cleaning up the pod client-can-connect-81-dlstt 05/22/23 21:27:34.047
[It] should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1689
STEP: getting the state of the sctp module on nodes 05/22/23 21:27:34.092
May 22 21:27:34.108: INFO: Executing cmd "lsmod | grep sctp" on node worker01
W0522 21:27:34.126598 2349 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), privileged (container "agnhost-container" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "agnhost-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "agnhost-container" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "rootfs" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "agnhost-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "agnhost-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
May 22 21:27:34.126: INFO: Waiting up to 5m0s for pod "hostexec-worker01-zgw5r" in namespace "e2e-network-policy-7849" to be "running"
May 22 21:27:34.137: INFO: Pod "hostexec-worker01-zgw5r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.476611ms
May 22 21:27:36.149: INFO: Pod "hostexec-worker01-zgw5r": Phase="Running", Reason="", readiness=true. Elapsed: 2.022547251s
May 22 21:27:36.149: INFO: Pod "hostexec-worker01-zgw5r" satisfied condition "running"
May 22 21:27:36.149: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c lsmod | grep sctp] Namespace:e2e-network-policy-7849 PodName:hostexec-worker01-zgw5r ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 22 21:27:36.150: INFO: ExecWithOptions: Clientset creation
May 22 21:27:36.150: INFO: ExecWithOptions: execute(POST https://api.ocp1.k8s.work:6443/api/v1/namespaces/e2e-network-policy-7849/pods/hostexec-worker01-zgw5r/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
May 22 21:27:36.315: INFO: exec worker01: command: lsmod | grep sctp
May 22 21:27:36.315: INFO: exec worker01: stdout: ""
May 22 21:27:36.315: INFO: exec worker01: stderr: ""
May 22 21:27:36.315: INFO: exec worker01: exit code: 0
May 22 21:27:36.315: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1
May 22 21:27:36.315: INFO: the sctp module is not loaded on node: worker01
May 22 21:27:36.315: INFO: Executing cmd "lsmod | grep sctp" on node worker02
W0522 21:27:36.327247 2349 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), privileged (container "agnhost-container" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "agnhost-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "agnhost-container" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "rootfs" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "agnhost-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "agnhost-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
May 22 21:27:36.327: INFO: Waiting up to 5m0s for pod "hostexec-worker02-v2w6m" in namespace "e2e-network-policy-7849" to be "running"
May 22 21:27:36.333: INFO: Pod "hostexec-worker02-v2w6m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.446819ms
May 22 21:27:38.339: INFO: Pod "hostexec-worker02-v2w6m": Phase="Running", Reason="", readiness=true. Elapsed: 2.012326956s
May 22 21:27:38.339: INFO: Pod "hostexec-worker02-v2w6m" satisfied condition "running"
May 22 21:27:38.339: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c lsmod | grep sctp] Namespace:e2e-network-policy-7849 PodName:hostexec-worker02-v2w6m ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 22 21:27:38.340: INFO: ExecWithOptions: Clientset creation
May 22 21:27:38.340: INFO: ExecWithOptions: execute(POST https://api.ocp1.k8s.work:6443/api/v1/namespaces/e2e-network-policy-7849/pods/hostexec-worker02-v2w6m/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
May 22 21:27:38.452: INFO: exec worker02: command: lsmod | grep sctp
May 22 21:27:38.452: INFO: exec worker02: stdout: ""
May 22 21:27:38.452: INFO: exec worker02: stderr: ""
May 22 21:27:38.452: INFO: exec worker02: exit code: 0
May 22 21:27:38.452: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1
May 22 21:27:38.452: INFO: the sctp module is not loaded on node: worker02
May 22 21:27:38.452: INFO: Executing cmd "lsmod | grep sctp" on node worker03
W0522 21:27:38.465375 2349 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), privileged (container "agnhost-container" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "agnhost-container" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "agnhost-container" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "rootfs" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "agnhost-container" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "agnhost-container" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
May 22 21:27:38.465: INFO: Waiting up to 5m0s for pod "hostexec-worker03-5ff7h" in namespace "e2e-network-policy-7849" to be "running"
May 22 21:27:38.474: INFO: Pod "hostexec-worker03-5ff7h": Phase="Pending", Reason="", readiness=false. Elapsed: 8.945407ms
May 22 21:27:40.483: INFO: Pod "hostexec-worker03-5ff7h": Phase="Running", Reason="", readiness=true. Elapsed: 2.01841982s
May 22 21:27:40.483: INFO: Pod "hostexec-worker03-5ff7h" satisfied condition "running"
May 22 21:27:40.483: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c lsmod | grep sctp] Namespace:e2e-network-policy-7849 PodName:hostexec-worker03-5ff7h ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
May 22 21:27:40.484: INFO: ExecWithOptions: Clientset creation
May 22 21:27:40.485: INFO: ExecWithOptions: execute(POST https://api.ocp1.k8s.work:6443/api/v1/namespaces/e2e-network-policy-7849/pods/hostexec-worker03-5ff7h/exec?command=nsenter&command=--mount%3D%2Frootfs%2Fproc%2F1%2Fns%2Fmnt&command=--&command=sh&command=-c&command=lsmod+%7C+grep+sctp&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
May 22 21:27:40.593: INFO: exec worker03: command: lsmod | grep sctp
May 22 21:27:40.594: INFO: exec worker03: stdout: ""
May 22 21:27:40.594: INFO: exec worker03: stderr: ""
May 22 21:27:40.594: INFO: exec worker03: exit code: 0
May 22 21:27:40.594: INFO: sctp module is not loaded or error occurred while executing command lsmod | grep sctp on node: command terminated with exit code 1
May 22 21:27:40.594: INFO: the sctp module is not loaded on node: worker03
STEP: Deleting pod hostexec-worker01-zgw5r in namespace e2e-network-policy-7849 05/22/23 21:27:40.594
STEP: Deleting pod hostexec-worker02-v2w6m in namespace e2e-network-policy-7849 05/22/23 21:27:40.623
STEP: Deleting pod hostexec-worker03-5ff7h in namespace e2e-network-policy-7849 05/22/23 21:27:40.643
STEP: Creating a network policy for the server which allows traffic only via SCTP on port 80. 05/22/23 21:27:40.671
STEP: Testing pods cannot connect on port 80 anymore when not using SCTP as protocol. 05/22/23 21:27:40.682
STEP: Creating client pod client-a that should not be able to connect to svc-server. 05/22/23 21:27:40.682
W0522 21:27:40.693775 2349 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
May 22 21:27:40.693: INFO: Waiting for client-a-ctzwc to complete.
May 22 21:27:40.693: INFO: Waiting up to 5m0s for pod "client-a-ctzwc" in namespace "e2e-network-policy-7849" to be "Succeeded or Failed"
May 22 21:27:40.697: INFO: Pod "client-a-ctzwc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.560822ms
May 22 21:27:42.701: INFO: Pod "client-a-ctzwc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007975569s
May 22 21:27:44.705: INFO: Pod "client-a-ctzwc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011120684s
STEP: Saw pod success 05/22/23 21:27:44.705
May 22 21:27:44.705: INFO: Pod "client-a-ctzwc" satisfied condition "Succeeded or Failed"
May 22 21:27:44.712: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-7849 describe po client-a-ctzwc'
May 22 21:27:44.897: INFO: stderr: ""
May 22 21:27:44.898: INFO:
Output of kubectl describe client-a-ctzwc:
Name: client-a-ctzwc
Namespace: e2e-network-policy-7849
Priority: 0
Service Account: default
Node: worker01/192.168.200.31
Start Time: Mon, 22 May 2023 21:27:40 +0000
Labels: pod-name=client-a
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "cilium",
"interface": "eth0",
"ips": [
"fd00::5fb",
"10.128.11.84"
],
"mac": "96:b8:c1:17:3c:d6",
"default": true,
"dns": {}
}]
k8s.v1.cni.cncf.io/networks-status:
[{
"name": "cilium",
"interface": "eth0",
"ips": [
"fd00::5fb",
"10.128.11.84"
],
"mac": "96:b8:c1:17:3c:d6",
"default": true,
"dns": {}
}]
Status: Succeeded
IP: 10.128.11.84
IPs:
IP: 10.128.11.84
IP: fd00::5fb
Containers:
client:
Container ID: cri-o://20b21f9eff051e259a9523e9397beb7ffdee163e858700be568374916ce22020
Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-
Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e
Port: <none>
Host Port: <none>
Command:
/bin/sh
Args:
-c
for i in $(seq 1 5); do /agnhost connect 172.30.165.196:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1
State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 22 May 2023 21:27:41 +0000
Finished: Mon, 22 May 2023 21:27:41 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bfhqm (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-bfhqm:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
ConfigMapName: openshift-service-ca.crt
ConfigMapOptional: <nil>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4s default-scheduler Successfully assigned e2e-network-policy-7849/client-a-ctzwc to worker01 by cp01
Normal AddedInterface 3s multus Add eth0 [fd00::5fb/128 10.128.11.84/32] from cilium
Normal Pulled 3s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Normal Created 3s kubelet Created container client
Normal Started 3s kubelet Started container client
May 22 21:27:44.898: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-7849 logs client-a-ctzwc --tail=100'
May 22 21:27:45.044: INFO: stderr: ""
May 22 21:27:45.044: INFO: stdout: ""
May 22 21:27:45.044: INFO:
Last 100 log lines of client-a-ctzwc:
May 22 21:27:45.044: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-7849 describe po server-wd8cf'
May 22 21:27:45.183: INFO: stderr: ""
May 22 21:27:45.183: INFO:
Output of kubectl describe server-wd8cf:
Name: server-wd8cf
Namespace: e2e-network-policy-7849
Priority: 0
Service Account: default
Node: worker01/192.168.200.31
Start Time: Mon, 22 May 2023 21:27:17 +0000
Labels: pod-name=server
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "cilium",
"interface": "eth0",
"ips": [
"fd00::5ec",
"10.128.11.99"
],
"mac": "f6:d0:a1:a9:79:5b",
"default": true,
"dns": {}
}]
k8s.v1.cni.cncf.io/networks-status:
[{
"name": "cilium",
"interface": "eth0",
"ips": [
"fd00::5ec",
"10.128.11.99"
],
"mac": "f6:d0:a1:a9:79:5b",
"default": true,
"dns": {}
}]
Status: Running
IP: 10.128.11.99
IPs:
IP: 10.128.11.99
IP: fd00::5ec
Containers:
server-container-80:
Container ID: cri-o://f790810fbd57e257ccc887756a259b6478576001e0795ad353eb8e9c63ca6acf
Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-
Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e
Port: 80/TCP
Host Port: 0/TCP
Args:
porter
State: Running
Started: Mon, 22 May 2023 21:27:19 +0000
Ready: True
Restart Count: 0
Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SERVE_PORT_80: foo
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qfdxb (ro)
server-container-81:
Container ID: cri-o://6297087df6a061490efee44e2ac42da9e47bf51b78dcec55afad922a7911d6f9
Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-
Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e
Port: 81/TCP
Host Port: 0/TCP
Args:
porter
State: Running
Started: Mon, 22 May 2023 21:27:19 +0000
Ready: True
Restart Count: 0
Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SERVE_PORT_81: foo
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qfdxb (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-qfdxb:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
ConfigMapName: openshift-service-ca.crt
ConfigMapOptional: <nil>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 27s default-scheduler Successfully assigned e2e-network-policy-7849/server-wd8cf to worker01 by cp01
Normal AddedInterface 27s multus Add eth0 [fd00::5ec/128 10.128.11.99/32] from cilium
Normal Pulled 27s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Normal Created 26s kubelet Created container server-container-80
Normal Started 26s kubelet Started container server-container-80
Normal Pulled 26s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Normal Created 26s kubelet Created container server-container-81
Normal Started 26s kubelet Started container server-container-81
May 22 21:27:45.183: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-7849 logs server-wd8cf --tail=100'
May 22 21:27:45.320: INFO: stderr: "Defaulted container \"server-container-80\" out of: server-container-80, server-container-81\n"
May 22 21:27:45.320: INFO: stdout: ""
May 22 21:27:45.320: INFO:
Last 100 log lines of server-wd8cf:
May 22 21:27:45.346: FAIL: Pod client-a-ctzwc should not be able to connect to service svc-server, but was able to connect.
Pod logs:
Current NetworkPolicies:
[{{ } {allow-only-sctp-ingress-on-port-80 e2e-network-policy-7849 188a72f0-9242-4f57-844a-3acc062f171f 71898 1 2023-05-22 21:27:40 +0000 UTC <nil> <nil> map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-05-22 21:27:40 +0000 UTC FieldsV1 {"f:spec":{"f:ingress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:server] []} [{[{0xc0023f7840 80 <nil>}] []}] [] [Ingress]} {[]}}]
Pods:
[Pod: client-a-ctzwc, Status: &PodStatus{Phase:Succeeded,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:27:40 +0000 UTC,Reason:PodCompleted,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:27:40 +0000 UTC,Reason:PodCompleted,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:27:40 +0000 UTC,Reason:PodCompleted,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:27:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.31,PodIP:10.128.11.84,StartTime:2023-05-22 21:27:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2023-05-22 21:27:41 +0000 UTC,FinishedAt:2023-05-22 21:27:41 +0000 UTC,ContainerID:cri-o://20b21f9eff051e259a9523e9397beb7ffdee163e858700be568374916ce22020,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://20b21f9eff051e259a9523e9397beb7ffdee163e858700be568374916ce22020,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.11.84,},PodIP{IP:fd00::5fb,},},EphemeralContainerStatuses:[]ContainerStatus{},}
Pod: server-wd8cf, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:27:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:27:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:27:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:27:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.31,PodIP:10.128.11.99,StartTime:2023-05-22 21:27:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-05-22 21:27:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://f790810fbd57e257ccc887756a259b6478576001e0795ad353eb8e9c63ca6acf,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-05-22 21:27:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://6297087df6a061490efee44e2ac42da9e47bf51b78dcec55afad922a7911d6f9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.11.99,},PodIP{IP:fd00::5ec,},},EphemeralContainerStatuses:[]ContainerStatus{},}
]
Full Stack Trace
k8s.io/kubernetes/test/e2e/network/netpol.checkNoConnectivity(0xc001ffa3c0, 0xc001e70b00, 0xc007444480, 0xc006e72c80)
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1957 +0x25a
k8s.io/kubernetes/test/e2e/network/netpol.testCannotConnectProtocol(0xc001ffa3c0, 0xc001e70b00, {0x8a31d3a, 0x8}, 0xc006e72c80, 0x0?, {0x8a2370a, 0x3})
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1926 +0x1be
k8s.io/kubernetes/test/e2e/network/netpol.testCannotConnect(...)
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1901
k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.31()
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1721 +0x3d3
github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2e8acde, 0xc00122f980})
github.com/onsi/ginkgo/v2@v2.4.0/internal/node.go:449 +0x1b
github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:757 +0x98
created by github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:745 +0xe3d
STEP: Cleaning up the pod client-a-ctzwc 05/22/23 21:27:45.346
STEP: Cleaning up the policy. 05/22/23 21:27:45.371
[AfterEach] NetworkPolicy between server and client
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:96
STEP: Cleaning up the server. 05/22/23 21:27:45.378
STEP: Cleaning up the server's service. 05/22/23 21:27:45.392
[DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly]
k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly]
dump namespaces | framework.go:196
STEP: dump namespace information after failure 05/22/23 21:27:45.443
STEP: Collecting events from namespace "e2e-network-policy-7849". 05/22/23 21:27:45.443
STEP: Found 40 events. 05/22/23 21:27:45.451
May 22 21:27:45.451: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-ctzwc: { } Scheduled: Successfully assigned e2e-network-policy-7849/client-a-ctzwc to worker01 by cp01
May 22 21:27:45.451: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-f7cth: { } Scheduled: Successfully assigned e2e-network-policy-7849/client-can-connect-80-f7cth to worker02 by cp01
May 22 21:27:45.451: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-dlstt: { } Scheduled: Successfully assigned e2e-network-policy-7849/client-can-connect-81-dlstt to worker03 by cp01
May 22 21:27:45.451: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hostexec-worker01-zgw5r: { } Scheduled: Successfully assigned e2e-network-policy-7849/hostexec-worker01-zgw5r to worker01 by cp01
May 22 21:27:45.451: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hostexec-worker02-v2w6m: { } Scheduled: Successfully assigned e2e-network-policy-7849/hostexec-worker02-v2w6m to worker02 by cp01
May 22 21:27:45.451: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for hostexec-worker03-5ff7h: { } Scheduled: Successfully assigned e2e-network-policy-7849/hostexec-worker03-5ff7h to worker03 by cp01
May 22 21:27:45.451: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-wd8cf: { } Scheduled: Successfully assigned e2e-network-policy-7849/server-wd8cf to worker01 by cp01
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:18 +0000 UTC - event for server-wd8cf: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:18 +0000 UTC - event for server-wd8cf: {multus } AddedInterface: Add eth0 [fd00::5ec/128 10.128.11.99/32] from cilium
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:19 +0000 UTC - event for server-wd8cf: {kubelet worker01} Started: Started container server-container-80
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:19 +0000 UTC - event for server-wd8cf: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:19 +0000 UTC - event for server-wd8cf: {kubelet worker01} Created: Created container server-container-80
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:19 +0000 UTC - event for server-wd8cf: {kubelet worker01} Created: Created container server-container-81
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:19 +0000 UTC - event for server-wd8cf: {kubelet worker01} Started: Started container server-container-81
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:23 +0000 UTC - event for client-can-connect-80-f7cth: {multus } AddedInterface: Add eth0 [fd00::431/128 10.128.8.254/32] from cilium
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:23 +0000 UTC - event for client-can-connect-80-f7cth: {kubelet worker02} Started: Started container client
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:23 +0000 UTC - event for client-can-connect-80-f7cth: {kubelet worker02} Created: Created container client
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:23 +0000 UTC - event for client-can-connect-80-f7cth: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:29 +0000 UTC - event for client-can-connect-81-dlstt: {kubelet worker03} Created: Created container client
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:29 +0000 UTC - event for client-can-connect-81-dlstt: {kubelet worker03} Started: Started container client
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:29 +0000 UTC - event for client-can-connect-81-dlstt: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:29 +0000 UTC - event for client-can-connect-81-dlstt: {multus } AddedInterface: Add eth0 [fd00::39d/128 10.128.6.207/32] from cilium
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:34 +0000 UTC - event for hostexec-worker01-zgw5r: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:35 +0000 UTC - event for hostexec-worker01-zgw5r: {kubelet worker01} Started: Started container agnhost-container
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:35 +0000 UTC - event for hostexec-worker01-zgw5r: {kubelet worker01} Created: Created container agnhost-container
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:36 +0000 UTC - event for hostexec-worker02-v2w6m: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:36 +0000 UTC - event for hostexec-worker02-v2w6m: {kubelet worker02} Started: Started container agnhost-container
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:36 +0000 UTC - event for hostexec-worker02-v2w6m: {kubelet worker02} Created: Created container agnhost-container
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:39 +0000 UTC - event for hostexec-worker03-5ff7h: {kubelet worker03} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:39 +0000 UTC - event for hostexec-worker03-5ff7h: {kubelet worker03} Created: Created container agnhost-container
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:39 +0000 UTC - event for hostexec-worker03-5ff7h: {kubelet worker03} Started: Started container agnhost-container
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:40 +0000 UTC - event for hostexec-worker01-zgw5r: {kubelet worker01} Killing: Stopping container agnhost-container
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:40 +0000 UTC - event for hostexec-worker02-v2w6m: {kubelet worker02} Killing: Stopping container agnhost-container
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:41 +0000 UTC - event for client-a-ctzwc: {kubelet worker01} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:41 +0000 UTC - event for client-a-ctzwc: {kubelet worker01} Created: Created container client
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:41 +0000 UTC - event for client-a-ctzwc: {kubelet worker01} Started: Started container client
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:41 +0000 UTC - event for client-a-ctzwc: {multus } AddedInterface: Add eth0 [fd00::5fb/128 10.128.11.84/32] from cilium
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:41 +0000 UTC - event for hostexec-worker03-5ff7h: {kubelet worker03} Killing: Stopping container agnhost-container
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:45 +0000 UTC - event for server-wd8cf: {kubelet worker01} Killing: Stopping container server-container-80
May 22 21:27:45.451: INFO: At 2023-05-22 21:27:45 +0000 UTC - event for server-wd8cf: {kubelet worker01} Killing: Stopping container server-container-81
May 22 21:27:45.457: INFO: POD NODE PHASE GRACE CONDITIONS
May 22 21:27:45.457: INFO: server-wd8cf worker01 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-05-22 21:27:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-05-22 21:27:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-05-22 21:27:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-05-22 21:27:17 +0000 UTC }]
May 22 21:27:45.457: INFO:
May 22 21:27:45.466: INFO: skipping dumping cluster info - cluster too large
[DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly]
tear down framework | framework.go:193
STEP: Destroying namespace "e2e-network-policy-7849" for this suite. 05/22/23 21:27:45.466
fail [k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1957]: May 22 21:27:45.346: Pod client-a-ctzwc should not be able to connect to service svc-server, but was able to connect.
Ginkgo exit error 1: exit with code 1
failed: (29s) 2023-05-22T21:27:45 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
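For context on that one failure: the test applies an ingress rule to the server pod that only allows SCTP on port 80, then expects a plain TCP connect to svc-server to be rejected. A rough reconstruction of the policy from the NetworkPolicies dump above (a sketch only; the suite generates the manifest itself, so defaults and ordering may differ):

# hypothetical reconstruction; name, namespace, selector, and port taken from the failure dump
kubectl -n e2e-network-policy-7849 apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-only-sctp-ingress-on-port-80
spec:
  podSelector:
    matchLabels:
      pod-name: server
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: SCTP
          port: 80
EOF

With that policy selecting the server, a TCP connect to port 80 should time out; client-a-ctzwc connecting successfully is exactly what trips the FAIL above.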
started: 2/47/67 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (3.4s) 2023-05-22T21:27:48 "[sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/48/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m12s) 2023-05-22T21:27:51 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 2/49/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (20.9s) 2023-05-22T21:27:54 "[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/50/67 "[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (19.4s) 2023-05-22T21:27:58 "[sig-network] Services should serve a basic endpoint from pods [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/51/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (11.5s) 2023-05-22T21:28:05 "[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/52/67 "[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.3s) 2023-05-22T21:28:07 "[sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/53/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m9s) 2023-05-22T21:28:29 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 2/54/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m6s) 2023-05-22T21:28:30 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 2/55/67 "[sig-network] DNS should provide DNS for pods for Hostname [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (5.4s) 2023-05-22T21:28:35 "[sig-network] DNS should provide DNS for pods for Hostname [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/56/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (2m6s) 2023-05-22T21:28:39 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 2/57/67 "[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (10.4s) 2023-05-22T21:28:49 "[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/58/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m16s) 2023-05-22T21:28:50 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 2/59/67 "[sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (5.3s) 2023-05-22T21:28:56 "[sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/60/67 "[sig-network] Services should test the lifecycle of an Endpoint [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (21.8s) 2023-05-22T21:28:57 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 2/61/67 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.5s) 2023-05-22T21:28:57 "[sig-network] Services should test the lifecycle of an Endpoint [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/62/67 "[sig-network] DNS should provide DNS for ExternalName services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (2m5s) 2023-05-22T21:29:13 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 2/63/67 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (18.8s) 2023-05-22T21:29:16 "[sig-network] DNS should provide DNS for ExternalName services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/64/67 "[sig-network] DNS should provide DNS for services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (29s) 2023-05-22T21:29:26 "[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/65/67 "[sig-network] DNS should provide DNS for the cluster [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (13.7s) 2023-05-22T21:29:30 "[sig-network] DNS should provide DNS for services [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
started: 2/66/67 "[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (1.4s) 2023-05-22T21:29:31 "[sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (5.4s) 2023-05-22T21:29:31 "[sig-network] DNS should provide DNS for the cluster [Conformance] [Skipped:Proxy] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"
passed: (2m4s) 2023-05-22T21:29:55 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m10s) 2023-05-22T21:29:59 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (2m12s) 2023-05-22T21:30:01 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
May 22 21:28:07.283: INFO: Enabling in-tree volume drivers
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/framework.go:1496
[BeforeEach] TOP-LEVEL
github.com/openshift/origin/test/extended/util/test.go:58
[BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly]
set up framework | framework.go:178
STEP: Creating a kubernetes client 05/22/23 21:28:08.08
STEP: Building a namespace api object, basename network-policy 05/22/23 21:28:08.082
May 22 21:28:08.137: INFO: About to run a Kube e2e test, ensuring namespace is privileged
STEP: Waiting for a default service account to be provisioned in namespace 05/22/23 21:28:08.304
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 05/22/23 21:28:08.31
[BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly]
k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:31
[BeforeEach] [sig-network] NetworkPolicyLegacy [LinuxOnly]
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:72
[BeforeEach] NetworkPolicy between server and client
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:78
STEP: Creating a simple server that serves on port 80 and 81. 05/22/23 21:28:08.318
STEP: Creating a server pod server in namespace e2e-network-policy-3817 05/22/23 21:28:08.318
W0522 21:28:08.342476 3790 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "server-container-80", "server-container-81" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "server-container-80", "server-container-81" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "server-container-80", "server-container-81" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "server-container-80", "server-container-81" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
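(These PodSecurity "would violate" warnings are non-fatal and expected here, since the framework marks the e2e namespace privileged; the warning itself spells out what a "restricted:latest"-compliant container spec would carry. Roughly, per container, taking the fields straight from the warning text:)

# illustrative fragment only, built from the fields the warning names
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
  runAsNonRoot: true
  seccompProfile:
    type: RuntimeDefault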
May 22 21:28:08.342: INFO: Created pod server-42b5r
STEP: Creating a service svc-server for pod server in namespace e2e-network-policy-3817 05/22/23 21:28:08.342
May 22 21:28:08.374: INFO: Created service svc-server
STEP: Waiting for pod ready 05/22/23 21:28:08.374
May 22 21:28:08.374: INFO: Waiting up to 5m0s for pod "server-42b5r" in namespace "e2e-network-policy-3817" to be "running and ready"
May 22 21:28:08.379: INFO: Pod "server-42b5r": Phase="Pending", Reason="", readiness=false. Elapsed: 5.070982ms
May 22 21:28:08.379: INFO: The phase of Pod server-42b5r is Pending, waiting for it to be Running (with Ready = true)
May 22 21:28:10.385: INFO: Pod "server-42b5r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010812737s
May 22 21:28:10.385: INFO: The phase of Pod server-42b5r is Pending, waiting for it to be Running (with Ready = true)
May 22 21:28:12.385: INFO: Pod "server-42b5r": Phase="Running", Reason="", readiness=true. Elapsed: 4.010276115s
May 22 21:28:12.385: INFO: The phase of Pod server-42b5r is Running (Ready = true)
May 22 21:28:12.385: INFO: Pod "server-42b5r" satisfied condition "running and ready"
STEP: Testing pods can connect to both ports when no policy is present. 05/22/23 21:28:12.385
STEP: Creating client pod client-can-connect-80 that should successfully connect to svc-server. 05/22/23 21:28:12.385
W0522 21:28:12.398983 3790 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
May 22 21:28:12.399: INFO: Waiting for client-can-connect-80-blgwj to complete.
May 22 21:28:12.399: INFO: Waiting up to 3m0s for pod "client-can-connect-80-blgwj" in namespace "e2e-network-policy-3817" to be "completed"
May 22 21:28:12.413: INFO: Pod "client-can-connect-80-blgwj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.690766ms
May 22 21:28:14.538: INFO: Pod "client-can-connect-80-blgwj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139205312s
May 22 21:28:16.420: INFO: Pod "client-can-connect-80-blgwj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020986067s
May 22 21:28:18.419: INFO: Pod "client-can-connect-80-blgwj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02020048s
May 22 21:28:18.419: INFO: Pod "client-can-connect-80-blgwj" satisfied condition "completed"
May 22 21:28:18.419: INFO: Waiting for client-can-connect-80-blgwj to complete.
May 22 21:28:18.419: INFO: Waiting up to 5m0s for pod "client-can-connect-80-blgwj" in namespace "e2e-network-policy-3817" to be "Succeeded or Failed"
May 22 21:28:18.423: INFO: Pod "client-can-connect-80-blgwj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.206406ms
STEP: Saw pod success 05/22/23 21:28:18.423
May 22 21:28:18.423: INFO: Pod "client-can-connect-80-blgwj" satisfied condition "Succeeded or Failed"
STEP: Cleaning up the pod client-can-connect-80-blgwj 05/22/23 21:28:18.423
STEP: Creating client pod client-can-connect-81 that should successfully connect to svc-server. 05/22/23 21:28:18.443
W0522 21:28:18.458119 3790 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
May 22 21:28:18.458: INFO: Waiting for client-can-connect-81-b25hm to complete.
May 22 21:28:18.458: INFO: Waiting up to 3m0s for pod "client-can-connect-81-b25hm" in namespace "e2e-network-policy-3817" to be "completed"
May 22 21:28:18.461: INFO: Pod "client-can-connect-81-b25hm": Phase="Pending", Reason="", readiness=false. Elapsed: 3.421448ms
May 22 21:28:20.468: INFO: Pod "client-can-connect-81-b25hm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010675777s
May 22 21:28:22.467: INFO: Pod "client-can-connect-81-b25hm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009023619s
May 22 21:28:22.467: INFO: Pod "client-can-connect-81-b25hm" satisfied condition "completed"
May 22 21:28:22.467: INFO: Waiting for client-can-connect-81-b25hm to complete.
May 22 21:28:22.467: INFO: Waiting up to 5m0s for pod "client-can-connect-81-b25hm" in namespace "e2e-network-policy-3817" to be "Succeeded or Failed"
May 22 21:28:22.471: INFO: Pod "client-can-connect-81-b25hm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.796831ms
STEP: Saw pod success 05/22/23 21:28:22.471
May 22 21:28:22.471: INFO: Pod "client-can-connect-81-b25hm" satisfied condition "Succeeded or Failed"
STEP: Cleaning up the pod client-can-connect-81-b25hm 05/22/23 21:28:22.471
[It] should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1477
STEP: Creating client-a which should not be able to contact the server. 05/22/23 21:28:22.527
STEP: Creating client pod client-a that should not be able to connect to svc-server. 05/22/23 21:28:22.527
W0522 21:28:22.541913 3790 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
May 22 21:28:22.542: INFO: Waiting for client-a-pf5jg to complete.
May 22 21:28:22.542: INFO: Waiting up to 5m0s for pod "client-a-pf5jg" in namespace "e2e-network-policy-3817" to be "Succeeded or Failed"
May 22 21:28:22.545: INFO: Pod "client-a-pf5jg": Phase="Pending", Reason="", readiness=false. Elapsed: 3.674454ms
May 22 21:28:24.551: INFO: Pod "client-a-pf5jg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009884701s
May 22 21:28:26.552: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 4.010722948s
May 22 21:28:28.551: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 6.009207683s
May 22 21:28:30.552: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 8.010398576s
May 22 21:28:32.553: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 10.011568493s
May 22 21:28:34.553: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 12.011240114s
May 22 21:28:36.552: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 14.01082942s
May 22 21:28:38.551: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 16.009059106s
May 22 21:28:40.553: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 18.011477873s
May 22 21:28:42.550: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 20.00887075s
May 22 21:28:44.552: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 22.010538834s
May 22 21:28:46.552: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 24.01054423s
May 22 21:28:48.551: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 26.009859174s
May 22 21:28:50.557: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 28.015867014s
May 22 21:28:52.552: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 30.010309642s
May 22 21:28:54.553: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 32.011009608s
May 22 21:28:56.551: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 34.008984481s
May 22 21:28:58.555: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 36.013457001s
May 22 21:29:00.552: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 38.010125774s
May 22 21:29:02.551: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 40.009030416s
May 22 21:29:04.551: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 42.009292092s
May 22 21:29:06.551: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 44.009432295s
May 22 21:29:08.551: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=true. Elapsed: 46.009526222s
May 22 21:29:10.554: INFO: Pod "client-a-pf5jg": Phase="Running", Reason="", readiness=false. Elapsed: 48.01209276s
May 22 21:29:12.555: INFO: Pod "client-a-pf5jg": Phase="Failed", Reason="", readiness=false. Elapsed: 50.013669388s
STEP: Cleaning up the pod client-a-pf5jg 05/22/23 21:29:12.555
STEP: Creating client-a which should now be able to contact the server. 05/22/23 21:29:12.583
STEP: Creating client pod client-a that should successfully connect to svc-server. 05/22/23 21:29:12.583
W0522 21:29:12.602753 3790 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "client" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "client" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "client" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "client" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
May 22 21:29:12.602: INFO: Waiting for client-a-xdvq7 to complete.
May 22 21:29:12.602: INFO: Waiting up to 3m0s for pod "client-a-xdvq7" in namespace "e2e-network-policy-3817" to be "completed"
May 22 21:29:12.607: INFO: Pod "client-a-xdvq7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.860909ms
May 22 21:29:14.615: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 2.012762148s
May 22 21:29:16.614: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 4.011313785s
May 22 21:29:18.615: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 6.01212446s
May 22 21:29:20.614: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 8.011558107s
May 22 21:29:22.613: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 10.010826883s
May 22 21:29:24.616: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 12.013504211s
May 22 21:29:26.612: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 14.010086783s
May 22 21:29:28.614: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 16.011850228s
May 22 21:29:30.613: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 18.010337504s
May 22 21:29:32.622: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 20.019846348s
May 22 21:29:34.613: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 22.011043901s
May 22 21:29:36.615: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 24.012725847s
May 22 21:29:38.614: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 26.011655706s
May 22 21:29:40.613: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 28.010620174s
May 22 21:29:42.613: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 30.01070776s
May 22 21:29:44.614: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 32.011782284s
May 22 21:29:46.613: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 34.010592779s
May 22 21:29:48.613: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 36.010349987s
May 22 21:29:50.615: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 38.012185845s
May 22 21:29:52.613: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 40.010496514s
May 22 21:29:54.616: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 42.013655104s
May 22 21:29:56.612: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 44.009900203s
May 22 21:29:58.616: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=true. Elapsed: 46.013233471s
May 22 21:30:00.613: INFO: Pod "client-a-xdvq7": Phase="Running", Reason="", readiness=false. Elapsed: 48.010391073s
May 22 21:30:02.615: INFO: Pod "client-a-xdvq7": Phase="Failed", Reason="", readiness=false. Elapsed: 50.012521885s
May 22 21:30:02.615: INFO: Pod "client-a-xdvq7" satisfied condition "completed"
May 22 21:30:02.615: INFO: Waiting for client-a-xdvq7 to complete.
May 22 21:30:02.615: INFO: Waiting up to 5m0s for pod "client-a-xdvq7" in namespace "e2e-network-policy-3817" to be "Succeeded or Failed"
May 22 21:30:02.620: INFO: Pod "client-a-xdvq7": Phase="Failed", Reason="", readiness=false. Elapsed: 4.600984ms
May 22 21:30:02.624: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-3817 describe po client-a-xdvq7'
May 22 21:30:02.777: INFO: stderr: ""
May 22 21:30:02.777: INFO: stdout: "Name: client-a-xdvq7\nNamespace: e2e-network-policy-3817\nPriority: 0\nService Account: default\nNode: worker02/192.168.200.32\nStart Time: Mon, 22 May 2023 21:29:12 +0000\nLabels: pod-name=client-a\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::47d\",\n \"10.128.8.239\"\n ],\n \"mac\": \"aa:c9:f8:60:02:b3\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"cilium\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"fd00::47d\",\n \"10.128.8.239\"\n ],\n \"mac\": \"aa:c9:f8:60:02:b3\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Failed\nIP: 10.128.8.239\nIPs:\n IP: 10.128.8.239\n IP: fd00::47d\nContainers:\n client:\n Container ID: cri-o://2a82936d4ecec1afca47c3243b410dca7a556d0e70795c1a42187e9d25f47ae0\n Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\n Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: <none>\n Host Port: <none>\n Command:\n /bin/sh\n Args:\n -c\n for i in $(seq 1 5); do /agnhost connect 172.30.249.74:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1\n State: Terminated\n Reason: Error\n Exit Code: 1\n Started: Mon, 22 May 2023 21:29:13 +0000\n Finished: Mon, 22 May 2023 21:29:58 +0000\n Ready: False\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8h2cr (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n kube-api-access-8h2cr:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\n ConfigMapName: openshift-service-ca.crt\n ConfigMapOptional: <nil>\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 50s default-scheduler Successfully assigned e2e-network-policy-3817/client-a-xdvq7 to worker02 by cp01\n Normal AddedInterface 49s multus Add eth0 [fd00::47d/128 10.128.8.239/32] from cilium\n Normal Pulled 49s kubelet Container image \"quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-\" already present on machine\n Normal Created 49s kubelet Created container client\n Normal Started 49s kubelet Started container client\n"
May 22 21:30:02.777: INFO:
Output of kubectl describe client-a-xdvq7:
Name: client-a-xdvq7
Namespace: e2e-network-policy-3817
Priority: 0
Service Account: default
Node: worker02/192.168.200.32
Start Time: Mon, 22 May 2023 21:29:12 +0000
Labels: pod-name=client-a
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "cilium",
"interface": "eth0",
"ips": [
"fd00::47d",
"10.128.8.239"
],
"mac": "aa:c9:f8:60:02:b3",
"default": true,
"dns": {}
}]
k8s.v1.cni.cncf.io/networks-status:
[{
"name": "cilium",
"interface": "eth0",
"ips": [
"fd00::47d",
"10.128.8.239"
],
"mac": "aa:c9:f8:60:02:b3",
"default": true,
"dns": {}
}]
Status: Failed
IP: 10.128.8.239
IPs:
IP: 10.128.8.239
IP: fd00::47d
Containers:
client:
Container ID: cri-o://2a82936d4ecec1afca47c3243b410dca7a556d0e70795c1a42187e9d25f47ae0
Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-
Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e
Port: <none>
Host Port: <none>
Command:
/bin/sh
Args:
-c
for i in $(seq 1 5); do /agnhost connect 172.30.249.74:80 --protocol tcp --timeout 8s && exit 0 || sleep 1; done; exit 1
State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 22 May 2023 21:29:13 +0000
Finished: Mon, 22 May 2023 21:29:58 +0000
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8h2cr (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-8h2cr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
ConfigMapName: openshift-service-ca.crt
ConfigMapOptional: <nil>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 50s default-scheduler Successfully assigned e2e-network-policy-3817/client-a-xdvq7 to worker02 by cp01
Normal AddedInterface 49s multus Add eth0 [fd00::47d/128 10.128.8.239/32] from cilium
Normal Pulled 49s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Normal Created 49s kubelet Created container client
Normal Started 49s kubelet Started container client
May 22 21:30:02.777: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-3817 logs client-a-xdvq7 --tail=100'
May 22 21:30:02.922: INFO: stderr: ""
May 22 21:30:02.922: INFO: stdout: "TIMEOUT\nTIMEOUT\nTIMEOUT\nTIMEOUT\nTIMEOUT\n"
May 22 21:30:02.922: INFO:
Last 100 log lines of client-a-xdvq7:
TIMEOUT
TIMEOUT
TIMEOUT
TIMEOUT
TIMEOUT
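Those five TIMEOUT lines line up one-to-one with the retry loop in the pod's Args shown in the describe above: one connect attempt per iteration, exit 0 on the first success, exit 1 once all five attempts fail. Pulled out of the pod spec for readability (the service IP 172.30.249.74 comes from the same describe output):

# each failed attempt prints TIMEOUT, so five TIMEOUTs means zero successful connects
for i in $(seq 1 5); do
  /agnhost connect 172.30.249.74:80 --protocol tcp --timeout 8s && exit 0 || sleep 1
done
exit 1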
May 22 21:30:02.922: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-3817 describe po server-42b5r'
May 22 21:30:03.065: INFO: stderr: ""
May 22 21:30:03.065: INFO:
Output of kubectl describe server-42b5r:
Name: server-42b5r
Namespace: e2e-network-policy-3817
Priority: 0
Service Account: default
Node: worker02/192.168.200.32
Start Time: Mon, 22 May 2023 21:28:08 +0000
Labels: pod-name=server
Annotations: k8s.v1.cni.cncf.io/network-status:
[{
"name": "cilium",
"interface": "eth0",
"ips": [
"fd00::4b5",
"10.128.8.89"
],
"mac": "12:91:a3:3f:64:f2",
"default": true,
"dns": {}
}]
k8s.v1.cni.cncf.io/networks-status:
[{
"name": "cilium",
"interface": "eth0",
"ips": [
"fd00::4b5",
"10.128.8.89"
],
"mac": "12:91:a3:3f:64:f2",
"default": true,
"dns": {}
}]
Status: Running
IP: 10.128.8.89
IPs:
IP: 10.128.8.89
IP: fd00::4b5
Containers:
server-container-80:
Container ID: cri-o://e7598b27071c4638739924174ffde3a1dfe074d6be088888a72f1e22b241a05f
Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-
Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e
Port: 80/TCP
Host Port: 0/TCP
Args:
porter
State: Running
Started: Mon, 22 May 2023 21:28:09 +0000
Ready: True
Restart Count: 0
Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:80] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SERVE_PORT_80: foo
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t28hb (ro)
server-container-81:
Container ID: cri-o://661ff5e8cf45a6ef472d5ef02d4ba6ea0ab053378af3a6f8d8a93ae07f303e8b
Image: quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-
Image ID: quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e
Port: 81/TCP
Host Port: 0/TCP
Args:
porter
State: Running
Started: Mon, 22 May 2023 21:28:09 +0000
Ready: True
Restart Count: 0
Readiness: exec [/agnhost connect --protocol=tcp --timeout=1s 127.0.0.1:81] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
SERVE_PORT_81: foo
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t28hb (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-t28hb:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
ConfigMapName: openshift-service-ca.crt
ConfigMapOptional: <nil>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 114s default-scheduler Successfully assigned e2e-network-policy-3817/server-42b5r to worker02 by cp01
Normal AddedInterface 114s multus Add eth0 [fd00::4b5/128 10.128.8.89/32] from cilium
Normal Pulled 114s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Normal Created 114s kubelet Created container server-container-80
Normal Started 114s kubelet Started container server-container-80
Normal Pulled 114s kubelet Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
Normal Created 114s kubelet Created container server-container-81
Normal Started 114s kubelet Started container server-container-81
May 22 21:30:03.065: INFO: Running '/usr/bin/kubectl --server=https://api.ocp1.k8s.work:6443 --kubeconfig=/data/kubeconfig.yaml --namespace=e2e-network-policy-3817 logs server-42b5r --tail=100'
May 22 21:30:03.205: INFO: stderr: "Defaulted container \"server-container-80\" out of: server-container-80, server-container-81\n"
May 22 21:30:03.205: INFO: stdout: ""
May 22 21:30:03.205: INFO:
Last 100 log lines of server-42b5r:
May 22 21:30:03.221: FAIL: Pod client-a-xdvq7 should be able to connect to service svc-server, but was not able to connect.
Pod logs:
TIMEOUT
TIMEOUT
TIMEOUT
TIMEOUT
TIMEOUT
Current NetworkPolicies:
[{{ } {allow-client-a-via-cidr-egress-rule e2e-network-policy-3817 9ebbfaf5-b958-4de2-aaeb-debad07e7ea9 75609 1 2023-05-22 21:29:12 +0000 UTC <nil> <nil> map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-05-22 21:29:12 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.8.89/32,Except:[],}}]}] [Egress]} {[]}} {{ } {deny-client-a-via-except-cidr-egress-rule e2e-network-policy-3817 b9a96d70-9dc0-44fb-a0f9-7a9da627ee11 73678 1 2023-05-22 21:28:22 +0000 UTC <nil> <nil> map[] map[] [] [] [{openshift-tests Update networking.k8s.io/v1 2023-05-22 21:28:22 +0000 UTC FieldsV1 {"f:spec":{"f:egress":{},"f:podSelector":{},"f:policyTypes":{}}} }]} {{map[pod-name:client-a] []} [] [{[] [{nil nil &IPBlock{CIDR:10.128.8.0/24,Except:[10.128.8.89/32],}}]}] [Egress]} {[]}}]
Pods:
[Pod: client-a-xdvq7, Status: &PodStatus{Phase:Failed,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:29:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:29:59 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:29:59 +0000 UTC,Reason:PodFailed,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:29:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.32,PodIP:10.128.8.239,StartTime:2023-05-22 21:29:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:client,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-05-22 21:29:13 +0000 UTC,FinishedAt:2023-05-22 21:29:58 +0000 UTC,ContainerID:cri-o://2a82936d4ecec1afca47c3243b410dca7a556d0e70795c1a42187e9d25f47ae0,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://2a82936d4ecec1afca47c3243b410dca7a556d0e70795c1a42187e9d25f47ae0,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.8.239,},PodIP{IP:fd00::47d,},},EphemeralContainerStatuses:[]ContainerStatus{},}
Pod: server-42b5r, Status: &PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:28:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:28:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:28:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-05-22 21:28:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:192.168.200.32,PodIP:10.128.8.89,StartTime:2023-05-22 21:28:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:server-container-80,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-05-22 21:28:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://e7598b27071c4638739924174ffde3a1dfe074d6be088888a72f1e22b241a05f,Started:*true,},ContainerStatus{Name:server-container-81,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-05-22 21:28:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-,ImageID:quay.io/openshift/community-e2e-images@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:cri-o://661ff5e8cf45a6ef472d5ef02d4ba6ea0ab053378af3a6f8d8a93ae07f303e8b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.128.8.89,},PodIP{IP:fd00::4b5,},},EphemeralContainerStatuses:[]ContainerStatus{},}
]
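(Decoded for readability: the Go struct dump above corresponds roughly to the following two manifests. This is a hand reconstruction from the dump, so treat it as a sketch rather than the exact objects the suite created.)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-a-via-cidr-egress-rule
  namespace: e2e-network-policy-3817
spec:
  podSelector:
    matchLabels:
      pod-name: client-a
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.128.8.89/32   # the server pod's IP
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-client-a-via-except-cidr-egress-rule
  namespace: e2e-network-policy-3817
spec:
  podSelector:
    matchLabels:
      pod-name: client-a
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.128.8.0/24
        except:
        - 10.128.8.89/32       # the same /32 the first policy allows
Since NetworkPolicies are additive (a connection is allowed if any policy selecting the pod allows it), the /32 allow in the first policy should win over the except in the second, which is exactly what this test asserts.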
Full Stack Trace
k8s.io/kubernetes/test/e2e/network/netpol.checkConnectivity(0xc001f22b40, 0xc001d706e0, 0xc006db6480, 0xc006c0a780)
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941 +0x355
k8s.io/kubernetes/test/e2e/network/netpol.testCanConnectProtocol(0xc001f22b40, 0xc001d706e0, {0x8a31d3a, 0x8}, 0xc006c0a780, 0xc0021d7a90?, {0x8a2370a, 0x3})
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1913 +0x1be
k8s.io/kubernetes/test/e2e/network/netpol.testCanConnect(...)
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1897
k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.29.2()
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1569 +0x47
github.com/onsi/ginkgo/v2.By({0x8c1eb52, 0x41}, {0xc006e19e50, 0x1, 0x0?})
github.com/onsi/ginkgo/v2@v2.4.0/core_dsl.go:535 +0x525
k8s.io/kubernetes/test/e2e/network/netpol.glob..func1.2.29()
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1568 +0xb5b
github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2e8acde, 0xc000f98180})
github.com/onsi/ginkgo/v2@v2.4.0/internal/node.go:449 +0x1b
github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func2()
github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:757 +0x98
created by github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
github.com/onsi/ginkgo/v2@v2.4.0/internal/suite.go:745 +0xe3d
STEP: Cleaning up the pod client-a-xdvq7 05/22/23 21:30:03.221
STEP: Cleaning up the policy. 05/22/23 21:30:03.239
[AfterEach] NetworkPolicy between server and client
k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:96
STEP: Cleaning up the server. 05/22/23 21:30:03.25
STEP: Cleaning up the server's service. 05/22/23 21:30:03.269
[DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly]
k8s.io/kubernetes@v1.26.1/test/e2e/framework/metrics/init/init.go:33
[DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly]
dump namespaces | framework.go:196
STEP: dump namespace information after failure 05/22/23 21:30:03.352
STEP: Collecting events from namespace "e2e-network-policy-3817". 05/22/23 21:30:03.352
STEP: Found 30 events. 05/22/23 21:30:03.363
May 22 21:30:03.363: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-pf5jg: { } Scheduled: Successfully assigned e2e-network-policy-3817/client-a-pf5jg to worker02 by cp01
May 22 21:30:03.363: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-a-xdvq7: { } Scheduled: Successfully assigned e2e-network-policy-3817/client-a-xdvq7 to worker02 by cp01
May 22 21:30:03.363: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-80-blgwj: { } Scheduled: Successfully assigned e2e-network-policy-3817/client-can-connect-80-blgwj to worker02 by cp01
May 22 21:30:03.363: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for client-can-connect-81-b25hm: { } Scheduled: Successfully assigned e2e-network-policy-3817/client-can-connect-81-b25hm to worker02 by cp01
May 22 21:30:03.363: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for server-42b5r: { } Scheduled: Successfully assigned e2e-network-policy-3817/server-42b5r to worker02 by cp01
May 22 21:30:03.363: INFO: At 2023-05-22 21:28:09 +0000 UTC - event for server-42b5r: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:30:03.363: INFO: At 2023-05-22 21:28:09 +0000 UTC - event for server-42b5r: {kubelet worker02} Created: Created container server-container-80
May 22 21:30:03.363: INFO: At 2023-05-22 21:28:09 +0000 UTC - event for server-42b5r: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:30:03.363: INFO: At 2023-05-22 21:28:09 +0000 UTC - event for server-42b5r: {multus } AddedInterface: Add eth0 [fd00::4b5/128 10.128.8.89/32] from cilium
May 22 21:30:03.363: INFO: At 2023-05-22 21:28:09 +0000 UTC - event for server-42b5r: {kubelet worker02} Started: Started container server-container-80
May 22 21:30:03.363: INFO: At 2023-05-22 21:28:09 +0000 UTC - event for server-42b5r: {kubelet worker02} Created: Created container server-container-81
May 22 21:30:03.363: INFO: At 2023-05-22 21:28:09 +0000 UTC - event for server-42b5r: {kubelet worker02} Started: Started container server-container-81
May 22 21:30:03.364: INFO: At 2023-05-22 21:28:13 +0000 UTC - event for client-can-connect-80-blgwj: {multus } AddedInterface: Add eth0 [fd00::459/128 10.128.8.124/32] from cilium
May 22 21:30:03.364: INFO: At 2023-05-22 21:28:13 +0000 UTC - event for client-can-connect-80-blgwj: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:30:03.364: INFO: At 2023-05-22 21:28:15 +0000 UTC - event for client-can-connect-80-blgwj: {kubelet worker02} Started: Started container client
May 22 21:30:03.364: INFO: At 2023-05-22 21:28:15 +0000 UTC - event for client-can-connect-80-blgwj: {kubelet worker02} Created: Created container client
May 22 21:30:03.364: INFO: At 2023-05-22 21:28:19 +0000 UTC - event for client-can-connect-81-b25hm: {multus } AddedInterface: Add eth0 [fd00::4f8/128 10.128.9.36/32] from cilium
May 22 21:30:03.364: INFO: At 2023-05-22 21:28:19 +0000 UTC - event for client-can-connect-81-b25hm: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:30:03.364: INFO: At 2023-05-22 21:28:19 +0000 UTC - event for client-can-connect-81-b25hm: {kubelet worker02} Created: Created container client
May 22 21:30:03.364: INFO: At 2023-05-22 21:28:19 +0000 UTC - event for client-can-connect-81-b25hm: {kubelet worker02} Started: Started container client
May 22 21:30:03.364: INFO: At 2023-05-22 21:28:23 +0000 UTC - event for client-a-pf5jg: {kubelet worker02} Started: Started container client
May 22 21:30:03.364: INFO: At 2023-05-22 21:28:23 +0000 UTC - event for client-a-pf5jg: {kubelet worker02} Created: Created container client
May 22 21:30:03.364: INFO: At 2023-05-22 21:28:23 +0000 UTC - event for client-a-pf5jg: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:30:03.364: INFO: At 2023-05-22 21:28:23 +0000 UTC - event for client-a-pf5jg: {multus } AddedInterface: Add eth0 [fd00::4cb/128 10.128.8.166/32] from cilium
May 22 21:30:03.364: INFO: At 2023-05-22 21:29:13 +0000 UTC - event for client-a-xdvq7: {kubelet worker02} Created: Created container client
May 22 21:30:03.364: INFO: At 2023-05-22 21:29:13 +0000 UTC - event for client-a-xdvq7: {kubelet worker02} Pulled: Container image "quay.io/openshift/community-e2e-images:e2e-1-registry-k8s-io-e2e-test-images-agnhost-2-43-uvjrRUnF2eM_DB5-" already present on machine
May 22 21:30:03.364: INFO: At 2023-05-22 21:29:13 +0000 UTC - event for client-a-xdvq7: {multus } AddedInterface: Add eth0 [fd00::47d/128 10.128.8.239/32] from cilium
May 22 21:30:03.364: INFO: At 2023-05-22 21:29:13 +0000 UTC - event for client-a-xdvq7: {kubelet worker02} Started: Started container client
May 22 21:30:03.364: INFO: At 2023-05-22 21:30:03 +0000 UTC - event for server-42b5r: {kubelet worker02} Killing: Stopping container server-container-80
May 22 21:30:03.364: INFO: At 2023-05-22 21:30:03 +0000 UTC - event for server-42b5r: {kubelet worker02} Killing: Stopping container server-container-81
May 22 21:30:03.368: INFO: POD NODE PHASE GRACE CONDITIONS
May 22 21:30:03.368: INFO: server-42b5r worker02 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-05-22 21:28:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-05-22 21:28:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-05-22 21:28:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-05-22 21:28:08 +0000 UTC }]
May 22 21:30:03.368: INFO:
May 22 21:30:03.375: INFO: skipping dumping cluster info - cluster too large
[DeferCleanup (Each)] [sig-network] NetworkPolicyLegacy [LinuxOnly]
tear down framework | framework.go:193
STEP: Destroying namespace "e2e-network-policy-3817" for this suite. 05/22/23 21:30:03.375
fail [k8s.io/kubernetes@v1.26.1/test/e2e/network/netpol/network_legacy.go:1941]: May 22 21:30:03.221: Pod client-a-xdvq7 should be able to connect to service svc-server, but was not able to connect.
Ginkgo exit error 1: exit with code 1
failed: (1m56s) 2023-05-22T21:30:03 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (1m8s) 2023-05-22T21:30:21 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (2m8s) 2023-05-22T21:30:37 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (3m11s) 2023-05-22T21:31:09 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
passed: (3m48s) 2023-05-22T21:31:20 "[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]"
started: 3/67/67 "[sig-network] Service endpoints latency should not be very high [Conformance] [Serial] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]"
passed: (12.7s) 2023-05-22T21:31:33 "[sig-network] Service endpoints latency should not be very high [Conformance] [Serial] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]"
Failing tests:
[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Skipped:Network/OpenShiftSDN] [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy] [Skipped:Network/OpenShiftSDN/Multitenant] [Suite:openshift/conformance/parallel] [Suite:k8s]
error: 3 fail, 64 pass, 0 skip (6m23s)
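For context on the third failure: that test applies an ingress rule along these lines and then expects a plain TCP connection to the same port to time out (a minimal sketch; the name is illustrative, the namespace and port are borrowed from this run, and the suite generates its own objects):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-only-sctp-on-port-81   # illustrative name
  namespace: e2e-network-policy-3817
spec:
  podSelector:
    matchLabels:
      pod-name: server
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: SCTP
      port: 81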
Hmmm, CI seems broken. We should check that out.
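One way to poke at the CIDR/except failures by hand: run a client pod that matches the pod-name=client-a selector and uses the same agnhost connect check the suite's readiness probes use. A sketch, using the upstream agnhost image (the suite used a mirrored copy of the same 2.43 tag); the pod name is illustrative, and the server IP 10.128.8.89 is lifted from this run's logs, so it will differ on a fresh run:
apiVersion: v1
kind: Pod
metadata:
  name: client-a-manual            # illustrative name
  namespace: e2e-network-policy-3817
  labels:
    pod-name: client-a             # matched by both egress policies above
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: registry.k8s.io/e2e-test-images/agnhost:2.43
    command:
    - /agnhost
    - connect
    - --protocol=tcp
    - --timeout=5s
    - 10.128.8.89:80               # server pod IP:port from this run
If the pod exits non-zero (the same TIMEOUT the failure output shows) with both policies applied, but succeeds once the deny-client-a-via-except-cidr-egress-rule policy is deleted, that would point at the datapath's IPBlock except handling rather than test flakiness.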