ovn-org / ovn-kubernetes

A robust Kubernetes networking platform
https://ovn-kubernetes.io/
Apache License 2.0

Unit tests hog node ports until all pods are terminated, but namespace is already gone #2728

Closed andreaskaris closed 2 years ago

andreaskaris commented 2 years ago

There is a minor issue in the test "Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local", or perhaps even in the k8s e2e tests.

WaitForNamespacesDeleted already returns when the namespaces are deleted, but at that point the pods are still in the Terminating state and keep hogging their node ports for a good 1 - 2 minutes more:

// WaitForNamespacesDeleted waits for the namespaces to be deleted.
func WaitForNamespacesDeleted(c clientset.Interface, namespaces []string, timeout time.Duration) error {
        ginkgo.By(fmt.Sprintf("Waiting for namespaces %+v to vanish", namespaces))
        nsMap := map[string]bool{}
        for _, ns := range namespaces {
                nsMap[ns] = true
        }
        //Now POLL until all namespaces have been eradicated.
        return wait.Poll(2*time.Second, timeout,
                func() (bool, error) {
                        nsList, err := c.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
                        if err != nil {
                                return false, err
                        }
                        for _, item := range nsList.Items {
                                if _, ok := nsMap[item.Name]; ok {
                                        return false, nil
                                }
                        }
                        return true, nil
                })
}

We can reproduce this by running the following test 2x in a row in a kind environment:

for i in {0..1}; do make -C test control-plane WHAT=".*Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local.*"; done

See the pod events in [0].

The second test run enters a retry loop: it first fails because the ports are still held, and then it repeats the same cycle on every attempt: tear down the namespace -> pods stuck in Terminating -> new pods spawned at the same time -> test cannot run successfully. See [1].

This is not biting us in our tests yet, but it may at some point in the future if we run node port tests back to back.

=====================================================

[0]

[root@ovnkubernetes e2e]# oc get pods -A --watch
NAMESPACE            NAME                                        READY   STATUS    RESTARTS   AGE
kube-system          coredns-74ff55c5b-wlcx8                     1/1     Running   0          172m
kube-system          coredns-74ff55c5b-x4bjc                     1/1     Running   0          172m
kube-system          etcd-ovn-control-plane                      1/1     Running   0          172m
kube-system          kube-apiserver-ovn-control-plane            1/1     Running   0          172m
kube-system          kube-controller-manager-ovn-control-plane   1/1     Running   0          172m
kube-system          kube-scheduler-ovn-control-plane            1/1     Running   0          172m
local-path-storage   local-path-provisioner-78776bfc44-vrf5r     1/1     Running   0          172m
ovn-kubernetes       ovnkube-db-0                                3/3     Running   0          171m
ovn-kubernetes       ovnkube-db-1                                3/3     Running   0          171m
ovn-kubernetes       ovnkube-db-2                                3/3     Running   0          171m
ovn-kubernetes       ovnkube-master-85567b87f7-42djm             3/3     Running   0          171m
ovn-kubernetes       ovnkube-master-85567b87f7-jr4zt             3/3     Running   0          171m
ovn-kubernetes       ovnkube-master-85567b87f7-vf9tr             3/3     Running   0          171m
ovn-kubernetes       ovnkube-node-c5th4                          3/3     Running   0          171m
ovn-kubernetes       ovnkube-node-mzwwm                          3/3     Running   0          171m
ovn-kubernetes       ovnkube-node-pl6lw                          3/3     Running   0          171m
ovn-kubernetes       ovs-node-8lf7l                              1/1     Running   0          171m
ovn-kubernetes       ovs-node-d45fv                              1/1     Running   0          171m
ovn-kubernetes       ovs-node-t7qjb                              1/1     Running   0          171m
host-to-host-test-8081   ovn-control-plane-hostnet-ep                0/1     Pending   0          0s
host-to-host-test-8081   ovn-control-plane-hostnet-ep                0/1     ContainerCreating   0          0s
host-to-host-test-8081   ovn-control-plane-hostnet-ep                1/1     Running             0          3s
host-to-host-test-8081   ovn-worker-hostnet-ep                       0/1     Pending             0          0s
host-to-host-test-8081   ovn-worker-hostnet-ep                       0/1     ContainerCreating   0          1s
host-to-host-test-8081   ovn-worker-hostnet-ep                       1/1     Running             0          2s
host-to-host-test-8081   ovn-worker2-hostnet-ep                      0/1     Pending             0          0s
host-to-host-test-8081   ovn-worker2-hostnet-ep                      0/1     ContainerCreating   0          0s
host-to-host-test-8081   ovn-worker2-hostnet-ep                      1/1     Running             0          2s
nodeport-ingress-test-1318   ovn-control-plane-hostnet-ep                0/1     Pending             0          0s
nodeport-ingress-test-1318   ovn-control-plane-hostnet-ep                0/1     ContainerCreating   0          0s
nodeport-ingress-test-1318   ovn-control-plane-hostnet-ep                0/1     Error               0          1s
nodeport-ingress-test-1318   ovn-worker-hostnet-ep                       0/1     Pending             0          0s
nodeport-ingress-test-1318   ovn-worker-hostnet-ep                       0/1     ContainerCreating   0          0s
nodeport-ingress-test-1318   ovn-worker-hostnet-ep                       0/1     Error               0          1s
nodeport-ingress-test-1318   ovn-worker2-hostnet-ep                      0/1     Pending             0          0s
nodeport-ingress-test-1318   ovn-worker2-hostnet-ep                      0/1     ContainerCreating   0          0s
nodeport-ingress-test-1318   ovn-worker2-hostnet-ep                      0/1     Error               0          1s
host-to-host-test-8081       ovn-control-plane-hostnet-ep                1/1     Terminating         0          47s
host-to-host-test-8081       ovn-worker-hostnet-ep                       1/1     Terminating         0          43s
host-to-host-test-8081       ovn-worker2-hostnet-ep                      1/1     Terminating         0          40s
host-to-host-test-8081       ovn-worker2-hostnet-ep                      0/1     Terminating         0          41s
host-to-host-test-8081       ovn-control-plane-hostnet-ep                0/1     Terminating         0          48s
host-to-host-test-8081       ovn-worker-hostnet-ep                       0/1     Terminating         0          44s
host-to-host-test-8081       ovn-worker-hostnet-ep                       0/1     Terminating         0          45s
host-to-host-test-8081       ovn-worker-hostnet-ep                       0/1     Terminating         0          45s
host-to-host-test-8081       ovn-control-plane-hostnet-ep                0/1     Terminating         0          51s
host-to-host-test-8081       ovn-control-plane-hostnet-ep                0/1     Terminating         0          51s
host-to-host-test-8081       ovn-worker2-hostnet-ep                      0/1     Terminating         0          44s
host-to-host-test-8081       ovn-worker2-hostnet-ep                      0/1     Terminating         0          44s
^C[root@ovnkubernetes e2e]# oc get^C
[root@ovnkubernetes e2e]# oc get pods -A 
NAMESPACE                    NAME                                        READY   STATUS    RESTARTS   AGE
kube-system                  coredns-74ff55c5b-wlcx8                     1/1     Running   0          174m
kube-system                  coredns-74ff55c5b-x4bjc                     1/1     Running   0          174m
kube-system                  etcd-ovn-control-plane                      1/1     Running   0          174m
kube-system                  kube-apiserver-ovn-control-plane            1/1     Running   0          174m
kube-system                  kube-controller-manager-ovn-control-plane   1/1     Running   0          174m
kube-system                  kube-scheduler-ovn-control-plane            1/1     Running   0          174m
local-path-storage           local-path-provisioner-78776bfc44-vrf5r     1/1     Running   0          174m
nodeport-ingress-test-1318   ovn-control-plane-hostnet-ep                0/1     Error     0          58s
nodeport-ingress-test-1318   ovn-worker-hostnet-ep                       0/1     Error     0          56s
nodeport-ingress-test-1318   ovn-worker2-hostnet-ep                      0/1     Error     0          54s
ovn-kubernetes               ovnkube-db-0                                3/3     Running   0          172m
ovn-kubernetes               ovnkube-db-1                                3/3     Running   0          172m
ovn-kubernetes               ovnkube-db-2                                3/3     Running   0          172m
ovn-kubernetes               ovnkube-master-85567b87f7-42djm             3/3     Running   0          172m
ovn-kubernetes               ovnkube-master-85567b87f7-jr4zt             3/3     Running   0          172m
ovn-kubernetes               ovnkube-master-85567b87f7-vf9tr             3/3     Running   0          172m
ovn-kubernetes               ovnkube-node-c5th4                          3/3     Running   0          172m
ovn-kubernetes               ovnkube-node-mzwwm                          3/3     Running   0          172m
ovn-kubernetes               ovnkube-node-pl6lw                          3/3     Running   0          172m
ovn-kubernetes               ovs-node-8lf7l                              1/1     Running   0          172m
ovn-kubernetes               ovs-node-d45fv                              1/1     Running   0          172m
ovn-kubernetes               ovs-node-t7qjb                              1/1     Running   0          172m
[root@ovnkubernetes e2e]# oc logs -n nodeport-ingress-test-1318   ovn-control-plane-hostnet-ep
2021/12/20 18:50:52 Started HTTP server on port 8085
2021/12/20 18:50:52 listen tcp :8085: bind: address already in use
[root@ovnkubernetes e2e]# 

[1]

[root@ovnkubernetes ovn-kubernetes]# for i in {0..1}; do make -C test control-plane WHAT=".*Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local.*"; done
make: Entering directory '/root/development/ovn-kubernetes/test'
E2E_REPORT_DIR=/root/development/ovn-kubernetes/test/_artifacts \
E2E_REPORT_PREFIX="control-plane"_ \
KIND_IPV4_SUPPORT=false \
KIND_IPV6_SUPPORT=false \
OVN_HA= \
./scripts/e2e-cp.sh .*Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local.*
+ export KUBERNETES_CONFORMANCE_TEST=y
+ KUBERNETES_CONFORMANCE_TEST=y
+ export KUBECONFIG=/root/admin.conf
+ KUBECONFIG=/root/admin.conf
+ IPV6_SKIPPED_TESTS='Should be allowed by externalip services|should provide connection to external host by DNS name from a pod|Should validate flow data of br-int is sent to an external gateway with netflow v5|test tainting a node according to its defaults interface MTU size'
+ SKIPPED_TESTS=
+ '[' false == true ']'
+ '[' '' == false ']'
+ '[' '' '!=' '' ']'
+ SKIPPED_TESTS+='Should validate connectivity before and after deleting all the db-pods at once in Non-HA mode|  e2e br-int NetFlow export validation'
+ '[' false == true ']'
+ export KUBE_CONTAINER_RUNTIME=remote
+ KUBE_CONTAINER_RUNTIME=remote
+ export KUBE_CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
+ KUBE_CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
+ export KUBE_CONTAINER_RUNTIME_NAME=containerd
+ KUBE_CONTAINER_RUNTIME_NAME=containerd
+ export NUM_NODES=2
+ NUM_NODES=2
++ sed 's/ /\\s/g'
++ echo '.*Should' be allowed to node local host-networked endpoints by nodeport services with 'externalTrafficPolicy=local.*'
+ FOCUS='.*Should\sbe\sallowed\sto\snode\slocal\shost-networked\sendpoints\sby\snodeport\sservices\swith\sexternalTrafficPolicy=local.*'
+ pushd e2e
~/development/ovn-kubernetes/test/e2e ~/development/ovn-kubernetes/test
+ go mod download
+ go test -timeout=0 -v . -ginkgo.v -ginkgo.focus '.*Should\sbe\sallowed\sto\snode\slocal\shost-networked\sendpoints\sby\snodeport\sservices\swith\sexternalTrafficPolicy=local.*' -ginkgo.flakeAttempts 2 '-ginkgo.skip=Should validate connectivity before and after deleting all the db-pods at once in Non-HA mode|  e2e br-int NetFlow export validation' -provider skeleton -kubeconfig /root/admin.conf --num-nodes=2 --report-dir=/root/development/ovn-kubernetes/test/_artifacts --report-prefix=control-plane_
=== RUN   TestE2e
I1220 18:50:09.788894 1230430 e2e_suite_test.go:61] Saving reports to /root/development/ovn-kubernetes/test/_artifacts
Running Suite: E2e Suite
========================
Random Seed: 1640026209 - Will randomize all specs
Will run 2 of 64 specs

SSSSSSSSSSS
------------------------------
host to host-networked pods traffic validation Validating Host to Host Netwoked pods traffic 
  Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2359
[BeforeEach] host to host-networked pods traffic validation
  /root/go/pkg/mod/k8s.io/kubernetes@v1.22.2/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 20 18:50:09.798: INFO: >>> kubeConfig: /root/admin.conf
STEP: Building a namespace api object, basename host-to-host-test
Dec 20 18:50:09.923: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Validating Host to Host Netwoked pods traffic
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2316
STEP: Creating the endpoints pod, one for each worker
[It] Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2359
STEP: Creating the nodeport service with externalTrafficPolicy=local
STEP: Waiting for the endpoints to pop up
Dec 20 18:50:19.138: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
STEP: Hitting the nodeport on ovn-control-plane and trying to reach only the local endpoint with protocol http
Dec 20 18:50:19.815: INFO: Validated local endpoint on node ovn-control-plane with address 172.18.0.3, and packet src IP 172.18.0.3:57808 
STEP: Hitting the nodeport on ovn-control-plane and trying to reach only the local endpoint with protocol http
Dec 20 18:50:20.166: INFO: Validated local endpoint on node ovn-control-plane with address fc00:f853:ccd:e793::3, and packet src IP [fc00:f853:ccd:e793::3]:53840 
STEP: Hitting the nodeport on ovn-worker and trying to reach only the local endpoint with protocol http
Dec 20 18:50:20.395: INFO: Validated local endpoint on node ovn-worker with address 172.18.0.2, and packet src IP 172.18.0.2:45154 
STEP: Hitting the nodeport on ovn-worker and trying to reach only the local endpoint with protocol http
Dec 20 18:50:20.680: INFO: Validated local endpoint on node ovn-worker with address fc00:f853:ccd:e793::2, and packet src IP [fc00:f853:ccd:e793::2]:39458 
STEP: Hitting the nodeport on ovn-worker2 and trying to reach only the local endpoint with protocol http
Dec 20 18:50:20.863: INFO: Validated local endpoint on node ovn-worker2 with address 172.18.0.4, and packet src IP 172.18.0.4:57682 
STEP: Hitting the nodeport on ovn-worker2 and trying to reach only the local endpoint with protocol http
Dec 20 18:50:21.058: INFO: Validated local endpoint on node ovn-worker2 with address fc00:f853:ccd:e793::4, and packet src IP [fc00:f853:ccd:e793::4]:40640 
[JustAfterEach] Validating Host to Host Netwoked pods traffic
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2353
STEP: Waiting for namespaces [host-to-host-test-8081] to vanish
[AfterEach] host to host-networked pods traffic validation
  /root/go/pkg/mod/k8s.io/kubernetes@v1.22.2/test/e2e/framework/framework.go:186
Dec 20 18:50:51.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-to-host-test-8081" for this suite.

• [SLOW TEST:41.279 seconds]
host to host-networked pods traffic validation
/root/development/ovn-kubernetes/test/e2e/e2e.go:2293
  Validating Host to Host Netwoked pods traffic
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2315
    Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
    /root/development/ovn-kubernetes/test/e2e/e2e.go:2359
------------------------------
SS
------------------------------
e2e ingress to host-networked pods traffic validation Validating ingress traffic to Host Networked pods 
  Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2235
[BeforeEach] e2e ingress to host-networked pods traffic validation
  /root/go/pkg/mod/k8s.io/kubernetes@v1.22.2/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 20 18:50:51.078: INFO: >>> kubeConfig: /root/admin.conf
STEP: Building a namespace api object, basename nodeport-ingress-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Validating ingress traffic to Host Networked pods
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2182
STEP: Creating the endpoints pod, one for each worker
Dec 20 18:50:53.260: INFO: pod nodeport-ingress-test-1318/ovn-control-plane-hostnet-ep logs:
2021/12/20 18:50:52 Started HTTP server on port 8085
2021/12/20 18:50:52 listen tcp :8085: bind: address already in use

Dec 20 18:50:55.288: INFO: pod nodeport-ingress-test-1318/ovn-worker-hostnet-ep logs:
2021/12/20 18:50:54 Started HTTP server on port 8085
2021/12/20 18:50:54 listen tcp :8085: bind: address already in use

Dec 20 18:50:57.312: INFO: pod nodeport-ingress-test-1318/ovn-worker2-hostnet-ep logs:
2021/12/20 18:50:56 Started HTTP server on port 8085
2021/12/20 18:50:56 Started UDP server on port 9095
2021/12/20 18:50:56 listen tcp :8085: bind: address already in use

STEP: Creating an external container to send the traffic from
[It] Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2235
STEP: Creating the nodeport service with externalTrafficPolicy=local
STEP: Waiting for the endpoints to pop up
Dec 20 18:50:59.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:00.278: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:01.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:02.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:03.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:04.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:05.278: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:06.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:07.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:08.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:09.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:10.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:11.278: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:12.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:13.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:14.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:15.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:16.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:17.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:18.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:19.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:20.279: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:21.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:22.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:23.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:24.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:25.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:26.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:27.278: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:28.277: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:28.285: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:51:28.293: FAIL: failed to validate endpoints for service nodeportsvclocalhostnet in namespace: nodeport-ingress-test-1318
Unexpected error:
    <*errors.errorString | 0xc000369130>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
github.com/ovn-org/ovn-kubernetes/test/e2e.glob..func9.1.4()
    /root/development/ovn-kubernetes/test/e2e/e2e.go:2245 +0x2c5
github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00022b860)
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/internal/leafnodes/runner.go:113 +0xba
github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000257638)
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/internal/leafnodes/runner.go:64 +0x125
github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc00022b860)
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/internal/leafnodes/it_node.go:26 +0x7b
github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0004373b0, 0xc000257a00, {0x1cddea0, 0xc0000a8840})
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/internal/spec/spec.go:215 +0x2a9
github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0004373b0, {0x1cddea0, 0xc0000a8840})
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/internal/spec/spec.go:138 +0xe7
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc00010d900, 0xc0004373b0)
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/internal/specrunner/spec_runner.go:200 +0xe5
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc00010d900)
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/internal/specrunner/spec_runner.go:170 +0x1a5
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc00010d900)
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/internal/specrunner/spec_runner.go:66 +0xc5
github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00012a070, {0x7fabac97e668, 0xc00022b6c0}, {0x1ab4098, 0x20}, {0xc000054ec0, 0x2, 0x2}, {0x1d2c038, 0xc0000a8840}, ...)
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/internal/suite/suite.go:79 +0x4d2
github.com/onsi/ginkgo.RunSpecsWithCustomReporters({0x1ce0480, 0xc00022b6c0}, {0x1ab4098, 0x9}, {0xc000054ea0, 0x2, 0x40f087})
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/ginkgo_dsl.go:219 +0x185
github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters({0x1ce0480, 0xc00022b6c0}, {0x1ab4098, 0x9}, {0xc00025ce00, 0x1, 0x1})
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/ginkgo_dsl.go:207 +0xf9
github.com/ovn-org/ovn-kubernetes/test/e2e.TestE2e(0x0)
    /root/development/ovn-kubernetes/test/e2e/e2e_suite_test.go:71 +0x2ff
testing.tRunner(0xc00022b6c0, 0x1b839a0)
    /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1306 +0x35a
[JustAfterEach] Validating ingress traffic to Host Networked pods
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2229
STEP: Waiting for namespaces [nodeport-ingress-test-1318] to vanish
[AfterEach] Validating ingress traffic to Host Networked pods
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2226
[AfterEach] e2e ingress to host-networked pods traffic validation
  /root/go/pkg/mod/k8s.io/kubernetes@v1.22.2/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "nodeport-ingress-test-1318".
STEP: Found 9 events.
Dec 20 18:51:58.670: INFO: At 2021-12-20 18:50:51 +0000 UTC - event for ovn-control-plane-hostnet-ep: {kubelet ovn-control-plane} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.26" already present on machine
Dec 20 18:51:58.670: INFO: At 2021-12-20 18:50:51 +0000 UTC - event for ovn-control-plane-hostnet-ep: {kubelet ovn-control-plane} Created: Created container ovn-control-plane-hostnet-ep-container
Dec 20 18:51:58.670: INFO: At 2021-12-20 18:50:52 +0000 UTC - event for ovn-control-plane-hostnet-ep: {kubelet ovn-control-plane} Started: Started container ovn-control-plane-hostnet-ep-container
Dec 20 18:51:58.670: INFO: At 2021-12-20 18:50:53 +0000 UTC - event for ovn-worker-hostnet-ep: {kubelet ovn-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.26" already present on machine
Dec 20 18:51:58.670: INFO: At 2021-12-20 18:50:53 +0000 UTC - event for ovn-worker-hostnet-ep: {kubelet ovn-worker} Created: Created container ovn-worker-hostnet-ep-container
Dec 20 18:51:58.670: INFO: At 2021-12-20 18:50:54 +0000 UTC - event for ovn-worker-hostnet-ep: {kubelet ovn-worker} Started: Started container ovn-worker-hostnet-ep-container
Dec 20 18:51:58.670: INFO: At 2021-12-20 18:50:55 +0000 UTC - event for ovn-worker2-hostnet-ep: {kubelet ovn-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.26" already present on machine
Dec 20 18:51:58.670: INFO: At 2021-12-20 18:50:55 +0000 UTC - event for ovn-worker2-hostnet-ep: {kubelet ovn-worker2} Created: Created container ovn-worker2-hostnet-ep-container
Dec 20 18:51:58.670: INFO: At 2021-12-20 18:50:55 +0000 UTC - event for ovn-worker2-hostnet-ep: {kubelet ovn-worker2} Started: Started container ovn-worker2-hostnet-ep-container
Dec 20 18:51:58.673: INFO: POD                           NODE               PHASE   GRACE  CONDITIONS
Dec 20 18:51:58.673: INFO: ovn-control-plane-hostnet-ep  ovn-control-plane  Failed         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-12-20 18:50:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-12-20 18:50:51 +0000 UTC ContainersNotReady containers with unready status: [ovn-control-plane-hostnet-ep-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-12-20 18:50:51 +0000 UTC ContainersNotReady containers with unready status: [ovn-control-plane-hostnet-ep-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-12-20 18:50:51 +0000 UTC  }]
Dec 20 18:51:58.674: INFO: ovn-worker-hostnet-ep         ovn-worker         Failed         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-12-20 18:50:53 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-12-20 18:50:53 +0000 UTC ContainersNotReady containers with unready status: [ovn-worker-hostnet-ep-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-12-20 18:50:53 +0000 UTC ContainersNotReady containers with unready status: [ovn-worker-hostnet-ep-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-12-20 18:50:53 +0000 UTC  }]
Dec 20 18:51:58.674: INFO: ovn-worker2-hostnet-ep        ovn-worker2        Failed         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-12-20 18:50:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-12-20 18:50:55 +0000 UTC ContainersNotReady containers with unready status: [ovn-worker2-hostnet-ep-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-12-20 18:50:55 +0000 UTC ContainersNotReady containers with unready status: [ovn-worker2-hostnet-ep-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-12-20 18:50:55 +0000 UTC  }]
Dec 20 18:51:58.674: INFO: 
Dec 20 18:51:58.676: INFO: 
Logging node info for node ovn-control-plane
Dec 20 18:51:58.679: INFO: Node Info: &Node{ObjectMeta:{ovn-control-plane    206f3c67-9354-4050-bdb4-6d76b1b88e27 26127 0 2021-12-20 15:57:15 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux ingress-ready:true k8s.ovn.org/ovnkube-db:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ovn-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[k8s.ovn.org/host-addresses:["172.18.0.3","fc00:f853:ccd:e793::3"] k8s.ovn.org/l3-gateway-config:{"default":{"mode":"shared","interface-id":"breth0_ovn-control-plane","mac-address":"02:42:ac:12:00:03","ip-addresses":["172.18.0.3/16","fc00:f853:ccd:e793::3/64"],"next-hops":["172.18.0.1","fc00:f853:ccd:e793::1"],"node-port-enable":"true","vlan-id":"0"}} k8s.ovn.org/node-chassis-id:355b2879-f941-4629-a659-40cbcb257127 k8s.ovn.org/node-mgmt-port-mac-address:9e:6a:85:c9:3b:90 k8s.ovn.org/node-primary-ifaddr:{"ipv4":"172.18.0.3/16","ipv6":"fc00:f853:ccd:e793::3/64"} k8s.ovn.org/node-subnets:{"default":["10.244.0.0/24","fd00:10:244:3::/64"]} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/ovn/ovn-control-plane,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.0.0/24 fd00:10:244::/64],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{42936958976 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8144969728 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{42936958976 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8144969728 0} {<nil>}  
BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-12-20 18:47:44 +0000 UTC,LastTransitionTime:2021-12-20 15:57:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-12-20 18:47:44 +0000 UTC,LastTransitionTime:2021-12-20 15:57:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-12-20 18:47:44 +0000 UTC,LastTransitionTime:2021-12-20 15:57:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-12-20 18:47:44 +0000 UTC,LastTransitionTime:2021-12-20 15:59:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:InternalIP,Address:fc00:f853:ccd:e793::3,},NodeAddress{Type:Hostname,Address:ovn-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:34cfc97e7e224d0785c64d9325a236f8,SystemUUID:3c186bbe-f7de-4df1-b597-21eae1f6799b,BootID:2c5ee056-b30f-4645-931e-95694b38bc42,KernelVersion:4.18.0-240.1.1.el8_3.x86_64,OSImage:Ubuntu Groovy Gorilla (development 
branch),ContainerRuntimeVersion:containerd://1.4.0,KubeletVersion:v1.20.0,KubeProxyVersion:v1.20.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost/ovn-daemonset-f:dev],SizeBytes:557840534,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.20.0],SizeBytes:136866161,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200725-4d6bea59],SizeBytes:118720874,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.20.0],SizeBytes:95511851,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.20.0],SizeBytes:88147263,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.20.0],SizeBytes:66088749,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:41982521,},ContainerImage{Names:[k8s.gcr.io/pause:3.3],SizeBytes:685708,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Dec 20 18:51:58.679: INFO: 
Logging kubelet events for node ovn-control-plane
Dec 20 18:51:58.682: INFO: 
Logging pods the kubelet thinks is on node ovn-control-plane
Dec 20 18:51:58.704: INFO: ovs-node-d45fv started at 2021-12-20 15:58:59 +0000 UTC (0+1 container statuses recorded)
Dec 20 18:51:58.704: INFO:  Container ovs-daemons ready: true, restart count 0
Dec 20 18:51:58.704: INFO: ovnkube-master-85567b87f7-vf9tr started at 2021-12-20 15:58:59 +0000 UTC (0+3 container statuses recorded)
Dec 20 18:51:58.704: INFO:  Container nbctl-daemon ready: true, restart count 0
Dec 20 18:51:58.704: INFO:  Container ovn-northd ready: true, restart count 0
Dec 20 18:51:58.704: INFO:  Container ovnkube-master ready: true, restart count 0
Dec 20 18:51:58.704: INFO: etcd-ovn-control-plane started at 2021-12-20 15:57:30 +0000 UTC (0+1 container statuses recorded)
Dec 20 18:51:58.704: INFO:  Container etcd ready: true, restart count 0
Dec 20 18:51:58.704: INFO: kube-apiserver-ovn-control-plane started at 2021-12-20 15:57:30 +0000 UTC (0+1 container statuses recorded)
Dec 20 18:51:58.704: INFO:  Container kube-apiserver ready: true, restart count 0
Dec 20 18:51:58.704: INFO: ovnkube-db-2 started at 2021-12-20 15:58:59 +0000 UTC (0+3 container statuses recorded)
Dec 20 18:51:58.704: INFO:  Container nb-ovsdb ready: true, restart count 0
Dec 20 18:51:58.704: INFO:  Container ovn-dbchecker ready: true, restart count 0
Dec 20 18:51:58.704: INFO:  Container sb-ovsdb ready: true, restart count 0
Dec 20 18:51:58.704: INFO: ovn-control-plane-hostnet-ep started at 2021-12-20 18:50:51 +0000 UTC (0+1 container statuses recorded)
Dec 20 18:51:58.704: INFO:  Container ovn-control-plane-hostnet-ep-container ready: false, restart count 0
Dec 20 18:51:58.704: INFO: kube-controller-manager-ovn-control-plane started at 2021-12-20 15:57:30 +0000 UTC (0+1 container statuses recorded)
Dec 20 18:51:58.704: INFO:  Container kube-controller-manager ready: true, restart count 0
Dec 20 18:51:58.704: INFO: kube-scheduler-ovn-control-plane started at 2021-12-20 15:57:30 +0000 UTC (0+1 container statuses recorded)
Dec 20 18:51:58.704: INFO:  Container kube-scheduler ready: true, restart count 0
Dec 20 18:51:58.704: INFO: ovnkube-node-mzwwm started at 2021-12-20 15:59:03 +0000 UTC (0+3 container statuses recorded)
Dec 20 18:51:58.704: INFO:  Container ovn-controller ready: true, restart count 0
Dec 20 18:51:58.704: INFO:  Container ovnkube-node ready: true, restart count 0
Dec 20 18:51:58.704: INFO:  Container ovs-metrics-exporter ready: true, restart count 0
Dec 20 18:51:58.917: INFO: 
Latency metrics for node ovn-control-plane
Dec 20 18:51:58.917: INFO: 
Logging node info for node ovn-worker
Dec 20 18:51:58.920: INFO: Node Info: &Node{ObjectMeta:{ovn-worker    4d715f0c-0a64-48c1-94a3-ce250a0dde5d 26207 0 2021-12-20 15:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux k8s.ovn.org/ovnkube-db:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ovn-worker kubernetes.io/os:linux node-role.kubernetes.io/master:] map[k8s.ovn.org/host-addresses:["172.18.0.2","fc00:f853:ccd:e793::2"] k8s.ovn.org/l3-gateway-config:{"default":{"mode":"shared","interface-id":"breth0_ovn-worker","mac-address":"02:42:ac:12:00:02","ip-addresses":["172.18.0.2/16","fc00:f853:ccd:e793::2/64"],"next-hops":["172.18.0.1","fc00:f853:ccd:e793::1"],"node-port-enable":"true","vlan-id":"0"}} k8s.ovn.org/node-chassis-id:01fc043c-2571-4def-bb6e-252ada9a4d03 k8s.ovn.org/node-mgmt-port-mac-address:aa:83:43:98:8e:01 k8s.ovn.org/node-primary-ifaddr:{"ipv4":"172.18.0.2/16","ipv6":"fc00:f853:ccd:e793::2/64"} k8s.ovn.org/node-subnets:{"default":["10.244.2.0/24","fd00:10:244:2::/64"]} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/ovn/ovn-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24 fd00:10:244:1::/64],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{42936958976 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8144969728 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{42936958976 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8144969728 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-12-20 18:47:54 +0000 UTC,LastTransitionTime:2021-12-20 15:57:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-12-20 18:47:54 +0000 UTC,LastTransitionTime:2021-12-20 15:57:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-12-20 18:47:54 +0000 UTC,LastTransitionTime:2021-12-20 15:57:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-12-20 18:47:54 +0000 UTC,LastTransitionTime:2021-12-20 15:59:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:InternalIP,Address:fc00:f853:ccd:e793::2,},NodeAddress{Type:Hostname,Address:ovn-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e42c35b6124b401da80299f99ce722c8,SystemUUID:82e66c63-cc7d-43cc-ac8f-1f19170c19a5,BootID:2c5ee056-b30f-4645-931e-95694b38bc42,KernelVersion:4.18.0-240.1.1.el8_3.x86_64,OSImage:Ubuntu Groovy Gorilla (development 
branch),ContainerRuntimeVersion:containerd://1.4.0,KubeletVersion:v1.20.0,KubeProxyVersion:v1.20.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost/ovn-daemonset-f:dev],SizeBytes:557840534,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.20.0],SizeBytes:136866161,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200725-4d6bea59],SizeBytes:118720874,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.20.0],SizeBytes:95511851,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.20.0],SizeBytes:88147263,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.20.0],SizeBytes:66088749,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:41982521,},ContainerImage{Names:[k8s.gcr.io/pause:3.3],SizeBytes:685708,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Dec 20 18:51:58.921: INFO: 
Logging kubelet events for node ovn-worker
Dec 20 18:51:58.922: INFO: 
Logging pods the kubelet thinks is on node ovn-worker
Dec 20 18:51:58.928: INFO: ovnkube-db-0 started at 2021-12-20 15:58:59 +0000 UTC (0+3 container statuses recorded)
Dec 20 18:51:58.928: INFO:  Container nb-ovsdb ready: true, restart count 0
Dec 20 18:51:58.928: INFO:  Container ovn-dbchecker ready: true, restart count 0
Dec 20 18:51:58.928: INFO:  Container sb-ovsdb ready: true, restart count 0
Dec 20 18:51:58.928: INFO: ovs-node-8lf7l started at 2021-12-20 15:58:59 +0000 UTC (0+1 container statuses recorded)
Dec 20 18:51:58.928: INFO:  Container ovs-daemons ready: true, restart count 0
Dec 20 18:51:58.928: INFO: ovnkube-master-85567b87f7-42djm started at 2021-12-20 15:58:59 +0000 UTC (0+3 container statuses recorded)
Dec 20 18:51:58.928: INFO:  Container nbctl-daemon ready: true, restart count 0
Dec 20 18:51:58.928: INFO:  Container ovn-northd ready: true, restart count 0
Dec 20 18:51:58.928: INFO:  Container ovnkube-master ready: true, restart count 0
Dec 20 18:51:58.928: INFO: ovnkube-node-c5th4 started at 2021-12-20 15:59:03 +0000 UTC (0+3 container statuses recorded)
Dec 20 18:51:58.928: INFO:  Container ovn-controller ready: true, restart count 0
Dec 20 18:51:58.928: INFO:  Container ovnkube-node ready: true, restart count 0
Dec 20 18:51:58.928: INFO:  Container ovs-metrics-exporter ready: true, restart count 0
Dec 20 18:51:58.928: INFO: local-path-provisioner-78776bfc44-vrf5r started at 2021-12-20 15:59:40 +0000 UTC (0+1 container statuses recorded)
Dec 20 18:51:58.928: INFO:  Container local-path-provisioner ready: true, restart count 0
Dec 20 18:51:58.928: INFO: ovn-worker-hostnet-ep started at 2021-12-20 18:50:53 +0000 UTC (0+1 container statuses recorded)
Dec 20 18:51:58.928: INFO:  Container ovn-worker-hostnet-ep-container ready: false, restart count 0
Dec 20 18:51:59.021: INFO: 
Latency metrics for node ovn-worker
Dec 20 18:51:59.021: INFO: 
Logging node info for node ovn-worker2
Dec 20 18:51:59.023: INFO: Node Info: &Node{ObjectMeta:{ovn-worker2    46c8a5a9-3adc-4ada-8c6d-d2182d05fd80 26208 0 2021-12-20 15:57:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux k8s.ovn.org/ovnkube-db:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ovn-worker2 kubernetes.io/os:linux node-role.kubernetes.io/master:] map[k8s.ovn.org/host-addresses:["172.18.0.4","fc00:f853:ccd:e793::4"] k8s.ovn.org/l3-gateway-config:{"default":{"mode":"shared","interface-id":"breth0_ovn-worker2","mac-address":"02:42:ac:12:00:04","ip-addresses":["172.18.0.4/16","fc00:f853:ccd:e793::4/64"],"next-hops":["172.18.0.1","fc00:f853:ccd:e793::1"],"node-port-enable":"true","vlan-id":"0"}} k8s.ovn.org/node-chassis-id:2fcfc4e9-3861-4631-aa91-7cbf960bfc07 k8s.ovn.org/node-mgmt-port-mac-address:92:90:2f:b7:97:47 k8s.ovn.org/node-primary-ifaddr:{"ipv4":"172.18.0.4/16","ipv6":"fc00:f853:ccd:e793::4/64"} k8s.ovn.org/node-subnets:{"default":["10.244.1.0/24","fd00:10:244:1::/64"]} k8s.ovn.org/topology-version:5 kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/ovn/ovn-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24 fd00:10:244:2::/64],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{42936958976 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8144969728 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{42936958976 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8144969728 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-12-20 18:47:54 +0000 UTC,LastTransitionTime:2021-12-20 15:57:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-12-20 18:47:54 +0000 UTC,LastTransitionTime:2021-12-20 15:57:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-12-20 18:47:54 +0000 UTC,LastTransitionTime:2021-12-20 15:57:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-12-20 18:47:54 +0000 UTC,LastTransitionTime:2021-12-20 15:59:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:InternalIP,Address:fc00:f853:ccd:e793::4,},NodeAddress{Type:Hostname,Address:ovn-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c057d1a78470427fb892d6481a0cecc7,SystemUUID:360f3d5a-f154-4bf5-848f-93b4eab587f3,BootID:2c5ee056-b30f-4645-931e-95694b38bc42,KernelVersion:4.18.0-240.1.1.el8_3.x86_64,OSImage:Ubuntu Groovy Gorilla (development 
branch),ContainerRuntimeVersion:containerd://1.4.0,KubeletVersion:v1.20.0,KubeProxyVersion:v1.20.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[localhost/ovn-daemonset-f:dev],SizeBytes:557840534,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.20.0],SizeBytes:136866161,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20200725-4d6bea59],SizeBytes:118720874,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.20.0],SizeBytes:95511851,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.20.0],SizeBytes:88147263,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.20.0],SizeBytes:66088749,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-base:v2.1.0],SizeBytes:53876619,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.14],SizeBytes:41982521,},ContainerImage{Names:[k8s.gcr.io/pause:3.3],SizeBytes:685708,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Dec 20 18:51:59.024: INFO: 
Logging kubelet events for node ovn-worker2
Dec 20 18:51:59.025: INFO: 
Logging pods the kubelet thinks is on node ovn-worker2
Dec 20 18:51:59.034: INFO: ovnkube-db-1 started at 2021-12-20 15:58:59 +0000 UTC (0+3 container statuses recorded)
Dec 20 18:51:59.034: INFO:  Container nb-ovsdb ready: true, restart count 0
Dec 20 18:51:59.034: INFO:  Container ovn-dbchecker ready: true, restart count 0
Dec 20 18:51:59.034: INFO:  Container sb-ovsdb ready: true, restart count 0
Dec 20 18:51:59.034: INFO: ovs-node-t7qjb started at 2021-12-20 15:58:59 +0000 UTC (0+1 container statuses recorded)
Dec 20 18:51:59.034: INFO:  Container ovs-daemons ready: true, restart count 0
Dec 20 18:51:59.034: INFO: ovnkube-master-85567b87f7-jr4zt started at 2021-12-20 15:58:59 +0000 UTC (0+3 container statuses recorded)
Dec 20 18:51:59.034: INFO:  Container nbctl-daemon ready: true, restart count 0
Dec 20 18:51:59.034: INFO:  Container ovn-northd ready: true, restart count 0
Dec 20 18:51:59.034: INFO:  Container ovnkube-master ready: true, restart count 0
Dec 20 18:51:59.034: INFO: ovnkube-node-pl6lw started at 2021-12-20 15:59:03 +0000 UTC (0+3 container statuses recorded)
Dec 20 18:51:59.034: INFO:  Container ovn-controller ready: true, restart count 0
Dec 20 18:51:59.034: INFO:  Container ovnkube-node ready: true, restart count 0
Dec 20 18:51:59.034: INFO:  Container ovs-metrics-exporter ready: true, restart count 0
Dec 20 18:51:59.034: INFO: ovn-worker2-hostnet-ep started at 2021-12-20 18:50:55 +0000 UTC (0+1 container statuses recorded)
Dec 20 18:51:59.034: INFO:  Container ovn-worker2-hostnet-ep-container ready: false, restart count 0
Dec 20 18:51:59.034: INFO: coredns-74ff55c5b-x4bjc started at 2021-12-20 15:59:40 +0000 UTC (0+1 container statuses recorded)
Dec 20 18:51:59.034: INFO:  Container coredns ready: true, restart count 0
Dec 20 18:51:59.034: INFO: coredns-74ff55c5b-wlcx8 started at 2021-12-20 15:59:40 +0000 UTC (0+1 container statuses recorded)
Dec 20 18:51:59.034: INFO:  Container coredns ready: true, restart count 0
Dec 20 18:51:59.136: INFO: 
Latency metrics for node ovn-worker2
Dec 20 18:51:59.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nodeport-ingress-test-1318" for this suite.

• Failure [68.063 seconds]
e2e ingress to host-networked pods traffic validation
/root/development/ovn-kubernetes/test/e2e/e2e.go:2154
  Validating ingress traffic to Host Networked pods
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2181
    Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local [It]
    /root/development/ovn-kubernetes/test/e2e/e2e.go:2235

    Dec 20 18:51:28.293: failed to validate endpoints for service nodeportsvclocalhostnet in namespace: nodeport-ingress-test-1318
    Unexpected error:
        <*errors.errorString | 0xc000369130>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

    /root/development/ovn-kubernetes/test/e2e/e2e.go:2245
------------------------------
e2e ingress to host-networked pods traffic validation Validating ingress traffic to Host Networked pods 
  Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2235
[BeforeEach] e2e ingress to host-networked pods traffic validation
  /root/go/pkg/mod/k8s.io/kubernetes@v1.22.2/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 20 18:51:59.141: INFO: >>> kubeConfig: /root/admin.conf
STEP: Building a namespace api object, basename nodeport-ingress-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Validating ingress traffic to Host Networked pods
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2182
STEP: Creating the endpoints pod, one for each worker
STEP: Creating an external container to send the traffic from
[It] Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2235
STEP: Creating the nodeport service with externalTrafficPolicy=local
STEP: Waiting for the endpoints to pop up
Dec 20 18:52:07.431: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
STEP: Hitting the nodeport on ovn-control-plane and trying to reach only the local endpoint with protocol http
Dec 20 18:52:08.258: INFO: Validated local endpoint on node ovn-control-plane with address 172.18.0.3, and packet src IP 172.18.0.5 
STEP: Hitting the nodeport on ovn-control-plane and trying to reach only the local endpoint with protocol http
Dec 20 18:52:08.829: INFO: Validated local endpoint on node ovn-control-plane with address fc00:f853:ccd:e793::3, and packet src IP fc00:f853:ccd:e793::5 
STEP: Hitting the nodeport on ovn-worker and trying to reach only the local endpoint with protocol http
Dec 20 18:52:09.476: INFO: Validated local endpoint on node ovn-worker with address 172.18.0.2, and packet src IP 172.18.0.5 
STEP: Hitting the nodeport on ovn-worker and trying to reach only the local endpoint with protocol http
Dec 20 18:52:10.173: INFO: Validated local endpoint on node ovn-worker with address fc00:f853:ccd:e793::2, and packet src IP fc00:f853:ccd:e793::5 
STEP: Hitting the nodeport on ovn-worker2 and trying to reach only the local endpoint with protocol http
Dec 20 18:52:10.859: INFO: Validated local endpoint on node ovn-worker2 with address 172.18.0.4, and packet src IP 172.18.0.5 
STEP: Hitting the nodeport on ovn-worker2 and trying to reach only the local endpoint with protocol http
Dec 20 18:52:11.475: INFO: Validated local endpoint on node ovn-worker2 with address fc00:f853:ccd:e793::4, and packet src IP fc00:f853:ccd:e793::5 
STEP: Hitting the nodeport on ovn-control-plane and trying to reach only the local endpoint with protocol udp
Dec 20 18:52:12.186: INFO: Validated local endpoint on node ovn-control-plane with address 172.18.0.3, and packet src IP 172.18.0.5 
STEP: Hitting the nodeport on ovn-control-plane and trying to reach only the local endpoint with protocol udp
Dec 20 18:52:12.865: INFO: Validated local endpoint on node ovn-control-plane with address fc00:f853:ccd:e793::3, and packet src IP fc00:f853:ccd:e793::5 
STEP: Hitting the nodeport on ovn-worker and trying to reach only the local endpoint with protocol udp
Dec 20 18:52:13.502: INFO: Validated local endpoint on node ovn-worker with address 172.18.0.2, and packet src IP 172.18.0.5 
STEP: Hitting the nodeport on ovn-worker and trying to reach only the local endpoint with protocol udp
Dec 20 18:52:14.181: INFO: Validated local endpoint on node ovn-worker with address fc00:f853:ccd:e793::2, and packet src IP fc00:f853:ccd:e793::5 
STEP: Hitting the nodeport on ovn-worker2 and trying to reach only the local endpoint with protocol udp
Dec 20 18:52:14.863: INFO: Validated local endpoint on node ovn-worker2 with address 172.18.0.4, and packet src IP 172.18.0.5 
STEP: Hitting the nodeport on ovn-worker2 and trying to reach only the local endpoint with protocol udp
Dec 20 18:52:15.517: INFO: Validated local endpoint on node ovn-worker2 with address fc00:f853:ccd:e793::4, and packet src IP fc00:f853:ccd:e793::5 
[JustAfterEach] Validating ingress traffic to Host Networked pods
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2229
STEP: Waiting for namespaces [nodeport-ingress-test-694] to vanish
[AfterEach] Validating ingress traffic to Host Networked pods
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2226
[AfterEach] e2e ingress to host-networked pods traffic validation
  /root/go/pkg/mod/k8s.io/kubernetes@v1.22.2/test/e2e/framework/framework.go:186
Dec 20 18:52:45.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nodeport-ingress-test-694" for this suite.

• [SLOW TEST:46.867 seconds]
e2e ingress to host-networked pods traffic validation
/root/development/ovn-kubernetes/test/e2e/e2e.go:2154
  Validating ingress traffic to Host Networked pods
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2181
    Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
    /root/development/ovn-kubernetes/test/e2e/e2e.go:2235
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
JUnit report was created: /root/development/ovn-kubernetes/test/_artifacts/junit_control-plane_01.xml

Summarizing 1 Failure:

[Fail] e2e ingress to host-networked pods traffic validation Validating ingress traffic to Host Networked pods [It] Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local 
/root/development/ovn-kubernetes/test/e2e/e2e.go:2245

Ran 2 of 64 Specs in 156.211 seconds
SUCCESS! -- 2 Passed | 0 Failed | 1 Flaked | 0 Pending | 62 Skipped
--- PASS: TestE2e (156.23s)
PASS
ok      github.com/ovn-org/ovn-kubernetes/test/e2e  156.253s
+ popd
~/development/ovn-kubernetes/test
make: Leaving directory '/root/development/ovn-kubernetes/test'
make: Entering directory '/root/development/ovn-kubernetes/test'
E2E_REPORT_DIR=/root/development/ovn-kubernetes/test/_artifacts \
E2E_REPORT_PREFIX="control-plane"_ \
KIND_IPV4_SUPPORT=false \
KIND_IPV6_SUPPORT=false \
OVN_HA= \
./scripts/e2e-cp.sh .*Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local.*
+ export KUBERNETES_CONFORMANCE_TEST=y
+ KUBERNETES_CONFORMANCE_TEST=y
+ export KUBECONFIG=/root/admin.conf
+ KUBECONFIG=/root/admin.conf
+ IPV6_SKIPPED_TESTS='Should be allowed by externalip services|should provide connection to external host by DNS name from a pod|Should validate flow data of br-int is sent to an external gateway with netflow v5|test tainting a node according to its defaults interface MTU size'
+ SKIPPED_TESTS=
+ '[' false == true ']'
+ '[' '' == false ']'
+ '[' '' '!=' '' ']'
+ SKIPPED_TESTS+='Should validate connectivity before and after deleting all the db-pods at once in Non-HA mode|  e2e br-int NetFlow export validation'
+ '[' false == true ']'
+ export KUBE_CONTAINER_RUNTIME=remote
+ KUBE_CONTAINER_RUNTIME=remote
+ export KUBE_CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
+ KUBE_CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
+ export KUBE_CONTAINER_RUNTIME_NAME=containerd
+ KUBE_CONTAINER_RUNTIME_NAME=containerd
+ export NUM_NODES=2
+ NUM_NODES=2
++ sed 's/ /\\s/g'
++ echo '.*Should' be allowed to node local host-networked endpoints by nodeport services with 'externalTrafficPolicy=local.*'
+ FOCUS='.*Should\sbe\sallowed\sto\snode\slocal\shost-networked\sendpoints\sby\snodeport\sservices\swith\sexternalTrafficPolicy=local.*'
+ pushd e2e
~/development/ovn-kubernetes/test/e2e ~/development/ovn-kubernetes/test
+ go mod download
+ go test -timeout=0 -v . -ginkgo.v -ginkgo.focus '.*Should\sbe\sallowed\sto\snode\slocal\shost-networked\sendpoints\sby\snodeport\sservices\swith\sexternalTrafficPolicy=local.*' -ginkgo.flakeAttempts 2 '-ginkgo.skip=Should validate connectivity before and after deleting all the db-pods at once in Non-HA mode|  e2e br-int NetFlow export validation' -provider skeleton -kubeconfig /root/admin.conf --num-nodes=2 --report-dir=/root/development/ovn-kubernetes/test/_artifacts --report-prefix=control-plane_
=== RUN   TestE2e
I1220 18:52:48.568215 1238182 e2e_suite_test.go:61] Saving reports to /root/development/ovn-kubernetes/test/_artifacts
Running Suite: E2e Suite
========================
Random Seed: 1640026368 - Will randomize all specs
Will run 2 of 64 specs

SSSSSSSSSSSSSSSSSSSSSS
------------------------------
host to host-networked pods traffic validation Validating Host to Host Networked pods traffic 
  Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2359
[BeforeEach] host to host-networked pods traffic validation
  /root/go/pkg/mod/k8s.io/kubernetes@v1.22.2/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 20 18:52:48.578: INFO: >>> kubeConfig: /root/admin.conf
STEP: Building a namespace api object, basename host-to-host-test
Dec 20 18:52:48.615: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Validating Host to Host Networked pods traffic
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2316
STEP: Creating the endpoints pod, one for each worker
Dec 20 18:52:52.666: INFO: pod host-to-host-test-8081/ovn-control-plane-hostnet-ep logs:
2021/12/20 18:52:49 Started HTTP server on port 8085
2021/12/20 18:52:49 Started UDP server on port 8081
2021/12/20 18:52:49 listen tcp :8085: bind: address already in use

[It] Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2359
STEP: Creating the nodeport service with externalTrafficPolicy=local
STEP: Waiting for the endpoints to pop up
Dec 20 18:52:57.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:52:58.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:52:59.761: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:00.761: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:01.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:02.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:03.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:04.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:05.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:06.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:07.761: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:08.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:09.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:10.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:11.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:12.761: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:13.761: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:14.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:15.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:16.761: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:17.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:18.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:19.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:20.761: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:21.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:22.761: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:23.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:24.761: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:25.761: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:26.760: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:26.769: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
Dec 20 18:53:26.774: FAIL: failed to validate endpoints for service nodeportsvclocalhostnet in namespace: host-to-host-test-8081
Unexpected error:
    <*errors.errorString | 0xc000341140>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
github.com/ovn-org/ovn-kubernetes/test/e2e.glob..func10.1.3()
    /root/development/ovn-kubernetes/test/e2e/e2e.go:2369 +0x2bd
github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000500820)
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/internal/leafnodes/runner.go:113 +0xba
github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000599638)
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/internal/leafnodes/runner.go:64 +0x125
github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc000500820)
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/internal/leafnodes/it_node.go:26 +0x7b
github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0000200f0, 0xc000599a00, {0x1cddea0, 0xc00007e840})
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/internal/spec/spec.go:215 +0x2a9
github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0000200f0, {0x1cddea0, 0xc00007e840})
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/internal/spec/spec.go:138 +0xe7
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0003f4dc0, 0xc0000200f0)
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/internal/specrunner/spec_runner.go:200 +0xe5
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0003f4dc0)
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/internal/specrunner/spec_runner.go:170 +0x1a5
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0003f4dc0)
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/internal/specrunner/spec_runner.go:66 +0xc5
github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000102070, {0x7fe9c80e68d0, 0xc000425860}, {0x1ab4098, 0x20}, {0xc000398040, 0x2, 0x2}, {0x1d2c038, 0xc00007e840}, ...)
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/internal/suite/suite.go:79 +0x4d2
github.com/onsi/ginkgo.RunSpecsWithCustomReporters({0x1ce0480, 0xc000425860}, {0x1ab4098, 0x9}, {0xc000398020, 0x2, 0x40f087})
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/ginkgo_dsl.go:219 +0x185
github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters({0x1ce0480, 0xc000425860}, {0x1ab4098, 0x9}, {0xc00024e020, 0x1, 0x1})
    /root/go/pkg/mod/github.com/onsi/ginkgo@v1.14.0/ginkgo_dsl.go:207 +0xf9
github.com/ovn-org/ovn-kubernetes/test/e2e.TestE2e(0x0)
    /root/development/ovn-kubernetes/test/e2e/e2e_suite_test.go:71 +0x2ff
testing.tRunner(0xc000425860, 0x1b839a0)
    /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1306 +0x35a
[JustAfterEach] Validating Host to Host Netwoked pods traffic
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2353
(...)
andreaskaris commented 2 years ago

After my change, the test passes on both back-to-back runs:

[root@ovnkubernetes ovn-kubernetes]# for i in {0..1}; do make -C test control-plane WHAT=".*Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local.*"; done
make: Entering directory '/root/development/ovn-kubernetes/test'
E2E_REPORT_DIR=/root/development/ovn-kubernetes/test/_artifacts \
E2E_REPORT_PREFIX="control-plane"_ \
KIND_IPV4_SUPPORT=false \
KIND_IPV6_SUPPORT=false \
OVN_HA= \
./scripts/e2e-cp.sh .*Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local.*
+ export KUBERNETES_CONFORMANCE_TEST=y
+ KUBERNETES_CONFORMANCE_TEST=y
+ export KUBECONFIG=/root/admin.conf
+ KUBECONFIG=/root/admin.conf
+ IPV6_SKIPPED_TESTS='Should be allowed by externalip services|should provide connection to external host by DNS name from a pod|Should validate flow data of br-int is sent to an external gateway with netflow v5|test tainting a node according to its defaults interface MTU size'
+ SKIPPED_TESTS=
+ '[' false == true ']'
+ '[' '' == false ']'
+ '[' '' '!=' '' ']'
+ SKIPPED_TESTS+='Should validate connectivity before and after deleting all the db-pods at once in Non-HA mode|  e2e br-int NetFlow export validation'
+ '[' false == true ']'
+ export KUBE_CONTAINER_RUNTIME=remote
+ KUBE_CONTAINER_RUNTIME=remote
+ export KUBE_CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
+ KUBE_CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
+ export KUBE_CONTAINER_RUNTIME_NAME=containerd
+ KUBE_CONTAINER_RUNTIME_NAME=containerd
+ export NUM_NODES=2
+ NUM_NODES=2
++ sed 's/ /\\s/g'
++ echo '.*Should' be allowed to node local host-networked endpoints by nodeport services with 'externalTrafficPolicy=local.*'
+ FOCUS='.*Should\sbe\sallowed\sto\snode\slocal\shost-networked\sendpoints\sby\snodeport\sservices\swith\sexternalTrafficPolicy=local.*'
+ pushd e2e
~/development/ovn-kubernetes/test/e2e ~/development/ovn-kubernetes/test
+ go mod download
+ go test -timeout=0 -v . -ginkgo.v -ginkgo.focus '.*Should\sbe\sallowed\sto\snode\slocal\shost-networked\sendpoints\sby\snodeport\sservices\swith\sexternalTrafficPolicy=local.*' -ginkgo.flakeAttempts 2 '-ginkgo.skip=Should validate connectivity before and after deleting all the db-pods at once in Non-HA mode|  e2e br-int NetFlow export validation' -provider skeleton -kubeconfig /root/admin.conf --num-nodes=2 --report-dir=/root/development/ovn-kubernetes/test/_artifacts --report-prefix=control-plane_
=== RUN   TestE2e
I1221 12:24:09.238351  412115 e2e_suite_test.go:61] Saving reports to /root/development/ovn-kubernetes/test/_artifacts
Running Suite: E2e Suite
========================
Random Seed: 1640089449 - Will randomize all specs
Will run 2 of 64 specs

SSSSS
------------------------------
e2e ingress to host-networked pods traffic validation Validating ingress traffic to Host Networked pods 
  Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2237
[BeforeEach] e2e ingress to host-networked pods traffic validation
  /root/go/pkg/mod/k8s.io/kubernetes@v1.22.2/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 21 12:24:09.251: INFO: >>> kubeConfig: /root/admin.conf
STEP: Building a namespace api object, basename nodeport-ingress-test
Dec 21 12:24:09.353: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Validating ingress traffic to Host Networked pods
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2182
STEP: Making sure that all invalid namespaces with pattern '^host-to-host-test.*|^nodeport-ingress-test.*' are deleted
STEP: Creating the endpoints pod, one for each worker
STEP: Creating an external container to send the traffic from
[It] Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2237
STEP: Creating the nodeport service with externalTrafficPolicy=local
STEP: Waiting for the endpoints to pop up
Dec 21 12:24:17.768: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
STEP: Hitting the nodeport on ovn-control-plane and trying to reach only the local endpoint with protocol http
Dec 21 12:24:18.410: INFO: Validated local endpoint on node ovn-control-plane with address 172.18.0.3, and packet src IP 172.18.0.5 
STEP: Hitting the nodeport on ovn-control-plane and trying to reach only the local endpoint with protocol http
Dec 21 12:24:19.015: INFO: Validated local endpoint on node ovn-control-plane with address fc00:f853:ccd:e793::3, and packet src IP fc00:f853:ccd:e793::5 
STEP: Hitting the nodeport on ovn-worker and trying to reach only the local endpoint with protocol http
Dec 21 12:24:19.570: INFO: Validated local endpoint on node ovn-worker with address 172.18.0.4, and packet src IP 172.18.0.5 
STEP: Hitting the nodeport on ovn-worker and trying to reach only the local endpoint with protocol http
Dec 21 12:24:20.151: INFO: Validated local endpoint on node ovn-worker with address fc00:f853:ccd:e793::4, and packet src IP fc00:f853:ccd:e793::5 
STEP: Hitting the nodeport on ovn-worker2 and trying to reach only the local endpoint with protocol http
Dec 21 12:24:20.678: INFO: Validated local endpoint on node ovn-worker2 with address 172.18.0.2, and packet src IP 172.18.0.5 
STEP: Hitting the nodeport on ovn-worker2 and trying to reach only the local endpoint with protocol http
Dec 21 12:24:21.188: INFO: Validated local endpoint on node ovn-worker2 with address fc00:f853:ccd:e793::2, and packet src IP fc00:f853:ccd:e793::5 
STEP: Hitting the nodeport on ovn-control-plane and trying to reach only the local endpoint with protocol udp
Dec 21 12:24:21.746: INFO: Validated local endpoint on node ovn-control-plane with address 172.18.0.3, and packet src IP 172.18.0.5 
STEP: Hitting the nodeport on ovn-control-plane and trying to reach only the local endpoint with protocol udp
Dec 21 12:24:22.263: INFO: Validated local endpoint on node ovn-control-plane with address fc00:f853:ccd:e793::3, and packet src IP fc00:f853:ccd:e793::5 
STEP: Hitting the nodeport on ovn-worker and trying to reach only the local endpoint with protocol udp
Dec 21 12:24:22.780: INFO: Validated local endpoint on node ovn-worker with address 172.18.0.4, and packet src IP 172.18.0.5 
STEP: Hitting the nodeport on ovn-worker and trying to reach only the local endpoint with protocol udp
Dec 21 12:24:23.302: INFO: Validated local endpoint on node ovn-worker with address fc00:f853:ccd:e793::4, and packet src IP fc00:f853:ccd:e793::5 
STEP: Hitting the nodeport on ovn-worker2 and trying to reach only the local endpoint with protocol udp
Dec 21 12:24:23.828: INFO: Validated local endpoint on node ovn-worker2 with address 172.18.0.2, and packet src IP 172.18.0.5 
STEP: Hitting the nodeport on ovn-worker2 and trying to reach only the local endpoint with protocol udp
Dec 21 12:24:24.370: INFO: Validated local endpoint on node ovn-worker2 with address fc00:f853:ccd:e793::2, and packet src IP fc00:f853:ccd:e793::5 
[AfterEach] Validating ingress traffic to Host Networked pods
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2232
[AfterEach] e2e ingress to host-networked pods traffic validation
  /root/go/pkg/mod/k8s.io/kubernetes@v1.22.2/test/e2e/framework/framework.go:186
Dec 21 12:24:24.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nodeport-ingress-test-8081" for this suite.

• [SLOW TEST:15.571 seconds]
e2e ingress to host-networked pods traffic validation
/root/development/ovn-kubernetes/test/e2e/e2e.go:2154
  Validating ingress traffic to Host Networked pods
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2181
    Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
    /root/development/ovn-kubernetes/test/e2e/e2e.go:2237
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
host to host-networked pods traffic validation Validating Host to Host Netwoked pods traffic 
  Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2363
[BeforeEach] host to host-networked pods traffic validation
  /root/go/pkg/mod/k8s.io/kubernetes@v1.22.2/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 21 12:24:24.822: INFO: >>> kubeConfig: /root/admin.conf
STEP: Building a namespace api object, basename host-to-host-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Validating Host to Host Netwoked pods traffic
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2318
STEP: Making sure that all invalid namespaces with pattern '^host-to-host-test.*|^nodeport-ingress-test.*' are deleted
Dec 21 12:24:25.062: INFO: Assuring that namespace nodeport-ingress-test-8081 is deleted.
STEP: Waiting for namespaces [nodeport-ingress-test-8081] to vanish
STEP: Creating the endpoints pod, one for each worker
[It] Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2363
STEP: Creating the nodeport service with externalTrafficPolicy=local
STEP: Waiting for the endpoints to pop up
Dec 21 12:24:50.392: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
STEP: Hitting the nodeport on ovn-control-plane and trying to reach only the local endpoint with protocol http
Dec 21 12:24:50.657: INFO: Validated local endpoint on node ovn-control-plane with address 172.18.0.3, and packet src IP 172.18.0.3:50946 
STEP: Hitting the nodeport on ovn-control-plane and trying to reach only the local endpoint with protocol http
Dec 21 12:24:50.861: INFO: Validated local endpoint on node ovn-control-plane with address fc00:f853:ccd:e793::3, and packet src IP [fc00:f853:ccd:e793::3]:60456 
STEP: Hitting the nodeport on ovn-worker and trying to reach only the local endpoint with protocol http
Dec 21 12:24:51.053: INFO: Validated local endpoint on node ovn-worker with address 172.18.0.4, and packet src IP 172.18.0.4:37408 
STEP: Hitting the nodeport on ovn-worker and trying to reach only the local endpoint with protocol http
Dec 21 12:24:51.238: INFO: Validated local endpoint on node ovn-worker with address fc00:f853:ccd:e793::4, and packet src IP [fc00:f853:ccd:e793::4]:43706 
STEP: Hitting the nodeport on ovn-worker2 and trying to reach only the local endpoint with protocol http
Dec 21 12:24:51.418: INFO: Validated local endpoint on node ovn-worker2 with address 172.18.0.2, and packet src IP 172.18.0.2:59696 
STEP: Hitting the nodeport on ovn-worker2 and trying to reach only the local endpoint with protocol http
Dec 21 12:24:51.623: INFO: Validated local endpoint on node ovn-worker2 with address fc00:f853:ccd:e793::2, and packet src IP [fc00:f853:ccd:e793::2]:56276 
[AfterEach] host to host-networked pods traffic validation
  /root/go/pkg/mod/k8s.io/kubernetes@v1.22.2/test/e2e/framework/framework.go:186
Dec 21 12:24:51.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-to-host-test-1318" for this suite.

• [SLOW TEST:26.806 seconds]
host to host-networked pods traffic validation
/root/development/ovn-kubernetes/test/e2e/e2e.go:2295
  Validating Host to Host Netwoked pods traffic
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2317
    Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
    /root/development/ovn-kubernetes/test/e2e/e2e.go:2363
------------------------------

JUnit report was created: /root/development/ovn-kubernetes/test/_artifacts/junit_control-plane_01.xml

Ran 2 of 64 Specs in 42.378 seconds
SUCCESS! -- 2 Passed | 0 Failed | 0 Flaked | 0 Pending | 62 Skipped
--- PASS: TestE2e (42.40s)
PASS
ok      github.com/ovn-org/ovn-kubernetes/test/e2e  42.420s
+ popd
~/development/ovn-kubernetes/test
make: Leaving directory '/root/development/ovn-kubernetes/test'
make: Entering directory '/root/development/ovn-kubernetes/test'
E2E_REPORT_DIR=/root/development/ovn-kubernetes/test/_artifacts \
E2E_REPORT_PREFIX="control-plane"_ \
KIND_IPV4_SUPPORT=false \
KIND_IPV6_SUPPORT=false \
OVN_HA= \
./scripts/e2e-cp.sh .*Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local.*
+ export KUBERNETES_CONFORMANCE_TEST=y
+ KUBERNETES_CONFORMANCE_TEST=y
+ export KUBECONFIG=/root/admin.conf
+ KUBECONFIG=/root/admin.conf
+ IPV6_SKIPPED_TESTS='Should be allowed by externalip services|should provide connection to external host by DNS name from a pod|Should validate flow data of br-int is sent to an external gateway with netflow v5|test tainting a node according to its defaults interface MTU size'
+ SKIPPED_TESTS=
+ '[' false == true ']'
+ '[' '' == false ']'
+ '[' '' '!=' '' ']'
+ SKIPPED_TESTS+='Should validate connectivity before and after deleting all the db-pods at once in Non-HA mode|  e2e br-int NetFlow export validation'
+ '[' false == true ']'
+ export KUBE_CONTAINER_RUNTIME=remote
+ KUBE_CONTAINER_RUNTIME=remote
+ export KUBE_CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
+ KUBE_CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
+ export KUBE_CONTAINER_RUNTIME_NAME=containerd
+ KUBE_CONTAINER_RUNTIME_NAME=containerd
+ export NUM_NODES=2
+ NUM_NODES=2
++ sed 's/ /\\s/g'
++ echo '.*Should' be allowed to node local host-networked endpoints by nodeport services with 'externalTrafficPolicy=local.*'
+ FOCUS='.*Should\sbe\sallowed\sto\snode\slocal\shost-networked\sendpoints\sby\snodeport\sservices\swith\sexternalTrafficPolicy=local.*'
+ pushd e2e
~/development/ovn-kubernetes/test/e2e ~/development/ovn-kubernetes/test
+ go mod download
+ go test -timeout=0 -v . -ginkgo.v -ginkgo.focus '.*Should\sbe\sallowed\sto\snode\slocal\shost-networked\sendpoints\sby\snodeport\sservices\swith\sexternalTrafficPolicy=local.*' -ginkgo.flakeAttempts 2 '-ginkgo.skip=Should validate connectivity before and after deleting all the db-pods at once in Non-HA mode|  e2e br-int NetFlow export validation' -provider skeleton -kubeconfig /root/admin.conf --num-nodes=2 --report-dir=/root/development/ovn-kubernetes/test/_artifacts --report-prefix=control-plane_
=== RUN   TestE2e
I1221 12:24:54.056912  416042 e2e_suite_test.go:61] Saving reports to /root/development/ovn-kubernetes/test/_artifacts
Running Suite: E2e Suite
========================
Random Seed: 1640089494 - Will randomize all specs
Will run 2 of 64 specs

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
e2e ingress to host-networked pods traffic validation Validating ingress traffic to Host Networked pods 
  Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2237
[BeforeEach] e2e ingress to host-networked pods traffic validation
  /root/go/pkg/mod/k8s.io/kubernetes@v1.22.2/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 21 12:24:54.069: INFO: >>> kubeConfig: /root/admin.conf
STEP: Building a namespace api object, basename nodeport-ingress-test
Dec 21 12:24:54.182: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Validating ingress traffic to Host Networked pods
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2182
STEP: Making sure that all invalid namespaces with pattern '^host-to-host-test.*|^nodeport-ingress-test.*' are deleted
Dec 21 12:24:54.189: INFO: Assuring that namespace host-to-host-test-1318 is deleted.
STEP: Waiting for namespaces [host-to-host-test-1318] to vanish
STEP: Creating the endpoints pod, one for each worker
STEP: Creating an external container to send the traffic from
[It] Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2237
STEP: Creating the nodeport service with externalTrafficPolicy=local
STEP: Waiting for the endpoints to pop up
Dec 21 12:25:18.426: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
STEP: Hitting the nodeport on ovn-control-plane and trying to reach only the local endpoint with protocol http
Dec 21 12:25:18.980: INFO: Validated local endpoint on node ovn-control-plane with address 172.18.0.3, and packet src IP 172.18.0.5 
STEP: Hitting the nodeport on ovn-control-plane and trying to reach only the local endpoint with protocol http
Dec 21 12:25:19.535: INFO: Validated local endpoint on node ovn-control-plane with address fc00:f853:ccd:e793::3, and packet src IP fc00:f853:ccd:e793::5 
STEP: Hitting the nodeport on ovn-worker and trying to reach only the local endpoint with protocol http
Dec 21 12:25:20.185: INFO: Validated local endpoint on node ovn-worker with address 172.18.0.4, and packet src IP 172.18.0.5 
STEP: Hitting the nodeport on ovn-worker and trying to reach only the local endpoint with protocol http
Dec 21 12:25:20.749: INFO: Validated local endpoint on node ovn-worker with address fc00:f853:ccd:e793::4, and packet src IP fc00:f853:ccd:e793::5 
STEP: Hitting the nodeport on ovn-worker2 and trying to reach only the local endpoint with protocol http
Dec 21 12:25:21.291: INFO: Validated local endpoint on node ovn-worker2 with address 172.18.0.2, and packet src IP 172.18.0.5 
STEP: Hitting the nodeport on ovn-worker2 and trying to reach only the local endpoint with protocol http
Dec 21 12:25:21.822: INFO: Validated local endpoint on node ovn-worker2 with address fc00:f853:ccd:e793::2, and packet src IP fc00:f853:ccd:e793::5 
STEP: Hitting the nodeport on ovn-control-plane and trying to reach only the local endpoint with protocol udp
Dec 21 12:25:22.403: INFO: Validated local endpoint on node ovn-control-plane with address 172.18.0.3, and packet src IP 172.18.0.5 
STEP: Hitting the nodeport on ovn-control-plane and trying to reach only the local endpoint with protocol udp
Dec 21 12:25:22.970: INFO: Validated local endpoint on node ovn-control-plane with address fc00:f853:ccd:e793::3, and packet src IP fc00:f853:ccd:e793::5 
STEP: Hitting the nodeport on ovn-worker and trying to reach only the local endpoint with protocol udp
Dec 21 12:25:23.486: INFO: Validated local endpoint on node ovn-worker with address 172.18.0.4, and packet src IP 172.18.0.5 
STEP: Hitting the nodeport on ovn-worker and trying to reach only the local endpoint with protocol udp
Dec 21 12:25:24.008: INFO: Validated local endpoint on node ovn-worker with address fc00:f853:ccd:e793::4, and packet src IP fc00:f853:ccd:e793::5 
STEP: Hitting the nodeport on ovn-worker2 and trying to reach only the local endpoint with protocol udp
Dec 21 12:25:24.560: INFO: Validated local endpoint on node ovn-worker2 with address 172.18.0.2, and packet src IP 172.18.0.5 
STEP: Hitting the nodeport on ovn-worker2 and trying to reach only the local endpoint with protocol udp
Dec 21 12:25:25.222: INFO: Validated local endpoint on node ovn-worker2 with address fc00:f853:ccd:e793::2, and packet src IP fc00:f853:ccd:e793::5 
[AfterEach] Validating ingress traffic to Host Networked pods
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2232
[AfterEach] e2e ingress to host-networked pods traffic validation
  /root/go/pkg/mod/k8s.io/kubernetes@v1.22.2/test/e2e/framework/framework.go:186
Dec 21 12:25:25.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nodeport-ingress-test-8081" for this suite.

• [SLOW TEST:31.507 seconds]
e2e ingress to host-networked pods traffic validation
/root/development/ovn-kubernetes/test/e2e/e2e.go:2154
  Validating ingress traffic to Host Networked pods
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2181
    Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
    /root/development/ovn-kubernetes/test/e2e/e2e.go:2237
------------------------------
SSSSSSSSSS
------------------------------
host to host-networked pods traffic validation Validating Host to Host Netwoked pods traffic 
  Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2363
[BeforeEach] host to host-networked pods traffic validation
  /root/go/pkg/mod/k8s.io/kubernetes@v1.22.2/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Dec 21 12:25:25.576: INFO: >>> kubeConfig: /root/admin.conf
STEP: Building a namespace api object, basename host-to-host-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Validating Host to Host Netwoked pods traffic
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2318
STEP: Making sure that all invalid namespaces with pattern '^host-to-host-test.*|^nodeport-ingress-test.*' are deleted
Dec 21 12:25:25.605: INFO: Assuring that namespace nodeport-ingress-test-8081 is deleted.
STEP: Waiting for namespaces [nodeport-ingress-test-8081] to vanish
STEP: Creating the endpoints pod, one for each worker
[It] Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2363
STEP: Creating the nodeport service with externalTrafficPolicy=local
STEP: Waiting for the endpoints to pop up
Dec 21 12:27:08.968: INFO: Waiting for amount of service:nodeportsvclocalhostnet endpoints to be 3
STEP: Hitting the nodeport on ovn-control-plane and trying to reach only the local endpoint with protocol http
Dec 21 12:27:09.183: INFO: Validated local endpoint on node ovn-control-plane with address 172.18.0.3, and packet src IP 172.18.0.3:60736 
STEP: Hitting the nodeport on ovn-control-plane and trying to reach only the local endpoint with protocol http
Dec 21 12:27:09.399: INFO: Validated local endpoint on node ovn-control-plane with address fc00:f853:ccd:e793::3, and packet src IP [fc00:f853:ccd:e793::3]:39738 
STEP: Hitting the nodeport on ovn-worker and trying to reach only the local endpoint with protocol http
Dec 21 12:27:09.594: INFO: Validated local endpoint on node ovn-worker with address 172.18.0.4, and packet src IP 172.18.0.4:58690 
STEP: Hitting the nodeport on ovn-worker and trying to reach only the local endpoint with protocol http
Dec 21 12:27:09.776: INFO: Validated local endpoint on node ovn-worker with address fc00:f853:ccd:e793::4, and packet src IP [fc00:f853:ccd:e793::4]:52270 
STEP: Hitting the nodeport on ovn-worker2 and trying to reach only the local endpoint with protocol http
Dec 21 12:27:09.969: INFO: Validated local endpoint on node ovn-worker2 with address 172.18.0.2, and packet src IP 172.18.0.2:54274 
STEP: Hitting the nodeport on ovn-worker2 and trying to reach only the local endpoint with protocol http
Dec 21 12:27:10.155: INFO: Validated local endpoint on node ovn-worker2 with address fc00:f853:ccd:e793::2, and packet src IP [fc00:f853:ccd:e793::2]:41458 
[AfterEach] host to host-networked pods traffic validation
  /root/go/pkg/mod/k8s.io/kubernetes@v1.22.2/test/e2e/framework/framework.go:186
Dec 21 12:27:10.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "host-to-host-test-1318" for this suite.

• [SLOW TEST:104.586 seconds]
host to host-networked pods traffic validation
/root/development/ovn-kubernetes/test/e2e/e2e.go:2295
  Validating Host to Host Netwoked pods traffic
  /root/development/ovn-kubernetes/test/e2e/e2e.go:2317
    Should be allowed to node local host-networked endpoints by nodeport services with externalTrafficPolicy=local
    /root/development/ovn-kubernetes/test/e2e/e2e.go:2363
------------------------------
SSSSS
JUnit report was created: /root/development/ovn-kubernetes/test/_artifacts/junit_control-plane_01.xml

Ran 2 of 64 Specs in 136.094 seconds
SUCCESS! -- 2 Passed | 0 Failed | 0 Flaked | 0 Pending | 62 Skipped
--- PASS: TestE2e (136.11s)
PASS
ok      github.com/ovn-org/ovn-kubernetes/test/e2e  136.133s
+ popd
~/development/ovn-kubernetes/test
make: Leaving directory '/root/development/ovn-kubernetes/test'