cilium / cilium

eBPF-based Networking, Security, and Observability
https://cilium.io
Apache License 2.0

CI: K8sDatapathConfig Host firewall With VXLAN #22578

Closed · maintainer-s-little-helper[bot] closed this issue 1 year ago

maintainer-s-little-helper[bot] commented 1 year ago

Test Name

K8sDatapathConfig Host firewall With VXLAN

Failure Output

FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:

Stacktrace

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
level=error
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```

Standard Output

Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "level=error" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 2 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Interrupt received Number of "context deadline exceeded" in logs: 4 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 5 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Unable to restore endpoint, ignoring Cilium pods: [cilium-f9xfj cilium-vt8gd] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress coredns-8cfc78c54-8l672 false false testclient-75wgd false false testclient-jb9w8 false false testserver-4km8w false false testserver-c7lwn false false grafana-7fd557d749-pgrhf false false prometheus-d87f8f984-k9hz4 false false Cilium agent 'cilium-f9xfj': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0 Cilium agent 'cilium-vt8gd': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 43 Failed 0 ```

Standard Error

Click to show. ```stack-error 12:07:26 STEP: Installing Cilium 12:07:28 STEP: Waiting for Cilium to become ready 12:08:32 STEP: Validating if Kubernetes DNS is deployed 12:08:32 STEP: Checking if deployment is ready 12:08:32 STEP: Checking if kube-dns service is plumbed correctly 12:08:32 STEP: Checking if pods have identity 12:08:32 STEP: Checking if DNS can resolve 12:08:32 STEP: Kubernetes DNS is up and operational 12:08:32 STEP: Validating Cilium Installation 12:08:32 STEP: Performing Cilium controllers preflight check 12:08:32 STEP: Performing Cilium health check 12:08:32 STEP: Performing Cilium status preflight check 12:08:32 STEP: Checking whether host EP regenerated 12:08:33 STEP: Performing Cilium service preflight check 12:08:33 STEP: Performing K8s service preflight check 12:08:34 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-vt8gd': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 12:08:34 STEP: Performing Cilium controllers preflight check 12:08:34 STEP: Performing Cilium status preflight check 12:08:34 STEP: Checking whether host EP regenerated 12:08:34 STEP: Performing Cilium health check 12:08:34 STEP: Performing Cilium service preflight check 12:08:34 STEP: Performing K8s service preflight check 12:08:35 STEP: Performing Cilium status preflight check 12:08:35 STEP: Performing Cilium health check 12:08:35 STEP: Performing Cilium controllers preflight check 12:08:35 STEP: Checking whether host EP regenerated 12:08:36 STEP: Performing Cilium service preflight check 12:08:36 STEP: Performing K8s service preflight check 12:08:36 STEP: Performing Cilium status preflight check 12:08:36 STEP: Performing Cilium controllers preflight check 12:08:36 STEP: Checking whether host EP regenerated 12:08:36 STEP: Performing Cilium health check 12:08:37 STEP: Performing Cilium service preflight check 12:08:37 STEP: Performing K8s service preflight check 12:08:38 STEP: Performing Cilium status preflight check 12:08:38 STEP: Performing Cilium controllers preflight check 12:08:38 STEP: Performing Cilium health check 12:08:38 STEP: Checking whether host EP regenerated 12:08:39 STEP: Performing Cilium service preflight check 12:08:39 STEP: Performing K8s service preflight check 12:08:39 STEP: Performing Cilium controllers preflight check 12:08:39 STEP: Performing Cilium status preflight check 12:08:39 STEP: Performing Cilium health check 12:08:39 STEP: Checking whether host EP regenerated 12:08:40 STEP: Performing Cilium service preflight check 12:08:40 STEP: Performing K8s service preflight check 12:08:40 STEP: Performing Cilium controllers preflight check 12:08:40 STEP: Performing Cilium status preflight check 12:08:40 STEP: Performing Cilium health check 12:08:40 STEP: Checking whether host EP regenerated 12:08:41 STEP: Performing Cilium service preflight check 12:08:41 STEP: Performing K8s service preflight check 12:08:42 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-vt8gd': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 12:08:42 STEP: Performing Cilium 
health check 12:08:42 STEP: Performing Cilium controllers preflight check 12:08:42 STEP: Checking whether host EP regenerated 12:08:42 STEP: Performing Cilium status preflight check 12:08:43 STEP: Performing Cilium service preflight check 12:08:43 STEP: Performing K8s service preflight check 12:08:43 STEP: Performing Cilium controllers preflight check 12:08:43 STEP: Performing Cilium status preflight check 12:08:43 STEP: Performing Cilium health check 12:08:43 STEP: Checking whether host EP regenerated 12:08:44 STEP: Performing Cilium service preflight check 12:08:44 STEP: Performing K8s service preflight check 12:08:45 STEP: Performing Cilium status preflight check 12:08:45 STEP: Checking whether host EP regenerated 12:08:45 STEP: Performing Cilium health check 12:08:45 STEP: Performing Cilium controllers preflight check 12:08:45 STEP: Performing Cilium service preflight check 12:08:45 STEP: Performing K8s service preflight check 12:08:47 STEP: Waiting for cilium-operator to be ready 12:08:47 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 12:08:47 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 12:08:47 STEP: Making sure all endpoints are in ready state 12:08:48 STEP: Creating namespace 202212061208k8sdatapathconfighostfirewallwithvxlan 12:08:48 STEP: Deploying demo_hostfw.yaml in namespace 202212061208k8sdatapathconfighostfirewallwithvxlan 12:08:49 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 12:08:49 STEP: WaitforNPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="") 12:09:00 STEP: WaitforNPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="") => 12:09:00 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 12:09:08 STEP: Checking host policies on egress to remote node 12:09:08 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:09:08 STEP: Checking host policies on ingress from remote pod 12:09:08 STEP: Checking host policies on egress to remote pod 12:09:08 STEP: Checking host policies on ingress from local pod 12:09:08 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 12:09:08 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:09:08 STEP: Checking host policies on egress to local pod 12:09:08 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:09:08 STEP: Checking host policies on ingress from remote node 12:09:08 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:09:08 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 12:09:08 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:09:08 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:09:08 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 12:09:08 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:09:08 STEP: 
WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 12:09:08 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:09:08 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 12:09:08 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 12:09:08 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:09:08 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:09:08 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:09:08 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:09:09 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 12:09:09 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:09:09 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 12:09:09 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:09:09 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:09:09 STEP: WaitforPods(namespace="202212061208k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 2022-12-06T12:09:26Z==== 12:09:26 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: level=error ===================== TEST FAILED ===================== 12:09:26 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202212061208k8sdatapathconfighostfirewallwithvxlan testclient-75wgd 1/1 Running 0 39s 10.0.1.175 k8s1 202212061208k8sdatapathconfighostfirewallwithvxlan testclient-host-8zwln 1/1 Running 0 39s 192.168.56.12 k8s2 202212061208k8sdatapathconfighostfirewallwithvxlan testclient-host-cpp29 1/1 Running 0 39s 192.168.56.11 k8s1 202212061208k8sdatapathconfighostfirewallwithvxlan testclient-jb9w8 1/1 Running 0 39s 10.0.0.143 k8s2 202212061208k8sdatapathconfighostfirewallwithvxlan testserver-4km8w 2/2 Running 0 39s 10.0.1.134 k8s1 202212061208k8sdatapathconfighostfirewallwithvxlan testserver-c7lwn 2/2 Running 0 39s 10.0.0.102 k8s2 202212061208k8sdatapathconfighostfirewallwithvxlan testserver-host-gw5jb 2/2 Running 0 39s 192.168.56.12 k8s2 202212061208k8sdatapathconfighostfirewallwithvxlan testserver-host-tlgdj 2/2 Running 0 39s 192.168.56.11 k8s1 cilium-monitoring grafana-7fd557d749-pgrhf 1/1 Running 0 33m 10.0.0.120 k8s2 cilium-monitoring prometheus-d87f8f984-k9hz4 1/1 Running 0 33m 10.0.0.15 k8s2 kube-system cilium-f9xfj 1/1 Running 0 2m 192.168.56.11 k8s1 kube-system cilium-operator-69777fd889-j8x29 1/1 Running 0 2m 192.168.56.11 k8s1 kube-system cilium-operator-69777fd889-n4cwp 1/1 Running 0 2m 192.168.56.12 k8s2 kube-system cilium-vt8gd 1/1 Running 0 2m 192.168.56.12 k8s2 
kube-system coredns-8cfc78c54-8l672 1/1 Running 0 7m9s 10.0.0.26 k8s2 kube-system etcd-k8s1 1/1 Running 0 34m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 35m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 3 36m 192.168.56.11 k8s1 kube-system kube-proxy-t4srg 1/1 Running 0 37m 192.168.56.11 k8s1 kube-system kube-proxy-vlgkk 1/1 Running 0 34m 192.168.56.12 k8s2 kube-system kube-scheduler-k8s1 1/1 Running 3 36m 192.168.56.11 k8s1 kube-system log-gatherer-bdctn 1/1 Running 0 33m 192.168.56.11 k8s1 kube-system log-gatherer-jv5w6 1/1 Running 0 33m 192.168.56.12 k8s2 kube-system registry-adder-hrh86 1/1 Running 0 34m 192.168.56.11 k8s1 kube-system registry-adder-vckdq 1/1 Running 0 34m 192.168.56.12 k8s2 Stderr: Fetching command output from pods [cilium-f9xfj cilium-vt8gd] cmd: kubectl exec -n kube-system cilium-f9xfj -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.12.90 (v1.12.90-a28e928f) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.1.219, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 3724/65535 (5.68%), Flows/s: 49.13 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2022-12-06T12:08:46Z) Stderr: cmd: kubectl exec -n kube-system cilium-f9xfj -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 747 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 768 Disabled Disabled 65061 k8s:io.cilium.k8s.policy.cluster=default fd02::18d 10.0.1.134 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202212061208k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1176 Disabled Disabled 28543 k8s:io.cilium.k8s.policy.cluster=default fd02::102 10.0.1.175 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202212061208k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 2737 Disabled Disabled 4 reserved:health fd02::166 10.0.1.100 ready Stderr: cmd: kubectl exec -n kube-system cilium-vt8gd -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: 
Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.12.90 (v1.12.90-a28e928f) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 7/254 allocated from 10.0.0.0/24, IPv6: 7/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 43/43 healthy Proxy Status: OK, ip 10.0.0.182, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2042/65535 (3.12%), Flows/s: 19.73 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2022-12-06T12:08:47Z) Stderr: cmd: kubectl exec -n kube-system cilium-vt8gd -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 401 Disabled Disabled 4276 k8s:app=prometheus fd02::90 10.0.0.15 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring 759 Disabled Disabled 37614 k8s:io.cilium.k8s.policy.cluster=default fd02::4a 10.0.0.26 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 1433 Disabled Disabled 4 reserved:health fd02::14 10.0.0.246 ready 1656 Disabled Disabled 58204 k8s:app=grafana fd02::52 10.0.0.120 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring 1921 Disabled Disabled 28543 k8s:io.cilium.k8s.policy.cluster=default fd02::d5 10.0.0.143 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202212061208k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 2099 Disabled Disabled 65061 k8s:io.cilium.k8s.policy.cluster=default fd02::cc 10.0.0.102 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202212061208k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3342 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host Stderr: ===================== Exiting AfterFailed ===================== 12:09:37 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 12:09:37 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 12:09:37 STEP: Deleting deployment demo_hostfw.yaml 12:09:37 STEP: Deleting namespace 202212061208k8sdatapathconfighostfirewallwithvxlan 12:09:52 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|be2761d8_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```

ZIP Links:

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//3343/artifact/be2761d8_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//3343/artifact/test_results_Cilium-PR-K8s-1.16-kernel-4.9_3343_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9/3343/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
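For anyone triaging this flake: the failure comes from the generic operator-log check, which fails the run when any line of the `io.cilium/app=operator` logs matches one of the watched error patterns; the single match here is the leader-election message `error retrieving resource lock kube-system/cilium-operator-resource-lock` shown in the Standard Output above. The snippet below is only a minimal sketch of that kind of substring scan, written for illustration; it is not the actual helper in `test/ginkgo-ext/scopes.go` or `test/helpers`, and the pattern list is simply copied from the counts reported in this issue.

```go
// Hypothetical sketch (not the cilium CI helper): count occurrences of the
// watched error patterns in operator logs read from stdin, and fail if any
// "level=error" line is present, which is all it takes to fail this test.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// badLogPatterns mirrors the pattern counts reported in the Standard Output above.
var badLogPatterns = []string{
	"context deadline exceeded",
	"level=error",
	"level=warning",
	"Cilium API handler panicked",
	"Goroutine took lock for more than",
}

func main() {
	counts := make(map[string]int)
	scanner := bufio.NewScanner(os.Stdin) // e.g. piped operator logs
	for scanner.Scan() {
		line := scanner.Text()
		for _, p := range badLogPatterns {
			if strings.Contains(line, p) {
				counts[p]++
			}
		}
	}
	for _, p := range badLogPatterns {
		fmt.Printf("Number of %q in logs: %d\n", p, counts[p])
	}
	if counts["level=error"] > 0 {
		fmt.Println("FAIL: found level=error lines that must be investigated")
		os.Exit(1)
	}
}
```

As a rough local check, piping something like `kubectl -n kube-system logs -l io.cilium/app=operator` into such a scanner should reproduce the per-pattern counts shown in the Standard Output section.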

maintainer-s-little-helper[bot] commented 1 year ago

PR #22965 hit this flake with 98.61% similarity:

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
level=error
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "level=error" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 2 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Interrupt received Number of "context deadline exceeded" in logs: 4 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 5 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Unable to restore endpoint, ignoring Cilium pods: [cilium-7q7lc cilium-96sr4] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testserver-cv6hl false false testserver-sdv4k false false grafana-7fd557d749-h7ls6 false false prometheus-d87f8f984-x7xfk false false coredns-8cfc78c54-2sh6z false false testclient-2k2lt false false testclient-7pg85 false false Cilium agent 'cilium-7q7lc': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 42 Failed 0 Cilium agent 'cilium-96sr4': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0 ```
### Standard Error
Click to show. ```stack-error 12:32:05 STEP: Installing Cilium 12:32:07 STEP: Waiting for Cilium to become ready 12:33:07 STEP: Validating if Kubernetes DNS is deployed 12:33:07 STEP: Checking if deployment is ready 12:33:07 STEP: Checking if kube-dns service is plumbed correctly 12:33:07 STEP: Checking if pods have identity 12:33:07 STEP: Checking if DNS can resolve 12:33:12 STEP: Kubernetes DNS is not ready: 5s timeout expired 12:33:12 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns) 12:33:12 STEP: Waiting for Kubernetes DNS to become operational 12:33:12 STEP: Checking if deployment is ready 12:33:12 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:33:13 STEP: Checking if deployment is ready 12:33:13 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:33:13 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-96sr4: unable to find service backend 10.0.0.114:53 in datapath of cilium pod cilium-96sr4 12:33:14 STEP: Checking if deployment is ready 12:33:14 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:33:15 STEP: Checking if deployment is ready 12:33:15 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:33:16 STEP: Checking if deployment is ready 12:33:16 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:33:17 STEP: Checking if deployment is ready 12:33:17 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:33:18 STEP: Checking if deployment is ready 12:33:18 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:33:19 STEP: Checking if deployment is ready 12:33:19 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:33:20 STEP: Checking if deployment is ready 12:33:20 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:33:21 STEP: Checking if deployment is ready 12:33:21 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:33:22 STEP: Checking if deployment is ready 12:33:22 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:33:23 STEP: Checking if deployment is ready 12:33:23 STEP: Checking if kube-dns service is plumbed correctly 12:33:23 STEP: Checking if pods have identity 12:33:23 STEP: Checking if DNS can resolve 12:33:27 STEP: Validating Cilium Installation 12:33:27 STEP: Performing Cilium status preflight check 12:33:27 STEP: Performing Cilium health check 12:33:27 STEP: Checking whether host EP regenerated 12:33:27 STEP: Performing Cilium controllers preflight check 12:33:35 STEP: Performing Cilium service preflight check 12:33:35 STEP: Performing K8s service preflight check 12:33:35 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-7q7lc': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 12:33:35 STEP: Performing Cilium status preflight check 12:33:35 STEP: Performing Cilium health check 12:33:35 STEP: Performing Cilium controllers preflight check 12:33:35 STEP: Checking whether host EP regenerated 12:33:43 STEP: Performing Cilium service preflight check 12:33:43 STEP: Performing K8s service preflight check 12:33:49 STEP: Waiting for cilium-operator to be ready 12:33:49 STEP: WaitforPods(namespace="kube-system", 
filter="-l name=cilium-operator") 12:33:49 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 12:33:49 STEP: Making sure all endpoints are in ready state 12:33:52 STEP: Creating namespace 202301091233k8sdatapathconfighostfirewallwithvxlan 12:33:52 STEP: Deploying demo_hostfw.yaml in namespace 202301091233k8sdatapathconfighostfirewallwithvxlan 12:33:52 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 12:33:52 STEP: WaitforNPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="") 12:33:59 STEP: WaitforNPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="") => 12:33:59 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 12:34:14 STEP: Checking host policies on egress to remote node 12:34:14 STEP: Checking host policies on egress to local pod 12:34:14 STEP: Checking host policies on ingress from local pod 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:34:14 STEP: Checking host policies on egress to remote pod 12:34:14 STEP: Checking host policies on ingress from remote pod 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 12:34:14 STEP: Checking host policies on ingress from remote node 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:34:14 STEP: 
WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:34:14 STEP: WaitforPods(namespace="202301091233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => === Test Finished at 2023-01-09T12:34:19Z==== 12:34:19 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: level=error ===================== TEST FAILED ===================== 12:34:29 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202301091233k8sdatapathconfighostfirewallwithvxlan testclient-2k2lt 1/1 Running 0 42s 10.0.0.55 k8s2 202301091233k8sdatapathconfighostfirewallwithvxlan testclient-7pg85 1/1 Running 0 42s 10.0.1.104 k8s1 202301091233k8sdatapathconfighostfirewallwithvxlan testclient-host-rf8m2 1/1 Running 0 42s 192.168.56.12 k8s2 202301091233k8sdatapathconfighostfirewallwithvxlan testclient-host-zzwgs 1/1 Running 0 42s 192.168.56.11 k8s1 202301091233k8sdatapathconfighostfirewallwithvxlan testserver-cv6hl 2/2 Running 0 42s 10.0.0.46 k8s2 202301091233k8sdatapathconfighostfirewallwithvxlan testserver-host-w7vfq 2/2 Running 0 42s 192.168.56.11 k8s1 202301091233k8sdatapathconfighostfirewallwithvxlan testserver-host-wfdqz 2/2 Running 0 42s 192.168.56.12 k8s2 202301091233k8sdatapathconfighostfirewallwithvxlan testserver-sdv4k 2/2 Running 0 42s 10.0.1.93 k8s1 cilium-monitoring grafana-7fd557d749-h7ls6 1/1 Running 0 32m 10.0.0.207 k8s2 cilium-monitoring prometheus-d87f8f984-x7xfk 1/1 Running 0 32m 10.0.0.235 k8s2 kube-system cilium-7q7lc 1/1 Running 0 2m27s 192.168.56.12 k8s2 kube-system cilium-96sr4 1/1 Running 0 2m27s 192.168.56.11 k8s1 kube-system cilium-operator-67c6c56477-5phsx 1/1 Running 0 2m27s 192.168.56.12 k8s2 kube-system cilium-operator-67c6c56477-zdf25 1/1 Running 0 2m27s 192.168.56.11 k8s1 kube-system coredns-8cfc78c54-2sh6z 1/1 Running 0 82s 10.0.0.3 k8s2 kube-system etcd-k8s1 1/1 Running 0 35m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 1 35m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 3 35m 192.168.56.11 k8s1 kube-system kube-proxy-8ft6r 1/1 Running 0 36m 192.168.56.11 k8s1 kube-system kube-proxy-j26r2 1/1 Running 0 32m 192.168.56.12 k8s2 kube-system kube-scheduler-k8s1 1/1 Running 3 35m 192.168.56.11 k8s1 kube-system log-gatherer-8kwdw 1/1 Running 0 32m 192.168.56.11 k8s1 kube-system log-gatherer-dkfwr 1/1 Running 0 32m 192.168.56.12 k8s2 kube-system registry-adder-9lspk 1/1 Running 0 32m 192.168.56.12 k8s2 kube-system registry-adder-xdnz8 1/1 Running 0 32m 192.168.56.11 k8s1 Stderr: Fetching command output from pods [cilium-7q7lc cilium-96sr4] cmd: kubectl exec -n kube-system cilium-7q7lc -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] 
Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.90 (v1.13.90-27ecbabd) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 7/254 allocated from 10.0.0.0/24, IPv6: 7/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 42/42 healthy Proxy Status: OK, ip 10.0.0.80, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2690/65535 (4.10%), Flows/s: 20.75 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-01-09T12:33:42Z) Stderr: cmd: kubectl exec -n kube-system cilium-7q7lc -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 144 Disabled Disabled 55897 k8s:io.cilium.k8s.policy.cluster=default fd02::9a 10.0.0.55 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301091233k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 291 Disabled Disabled 6257 k8s:io.cilium.k8s.policy.cluster=default fd02::be 10.0.0.46 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301091233k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 541 Disabled Disabled 2179 k8s:app=prometheus fd02::26 10.0.0.235 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring 622 Disabled Disabled 46502 k8s:app=grafana fd02::36 10.0.0.207 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring 1148 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 1157 Disabled Disabled 12211 k8s:io.cilium.k8s.policy.cluster=default fd02::4a 10.0.0.3 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 2527 Disabled Disabled 4 reserved:health fd02::cf 10.0.0.210 ready Stderr: cmd: kubectl exec -n kube-system cilium-96sr4 -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.90 (v1.13.90-27ecbabd) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables 
[IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.1.168, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 4609/65535 (7.03%), Flows/s: 50.69 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-01-09T12:33:48Z) Stderr: cmd: kubectl exec -n kube-system cilium-96sr4 -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 189 Disabled Disabled 4 reserved:health fd02::1d2 10.0.1.59 ready 299 Disabled Disabled 55897 k8s:io.cilium.k8s.policy.cluster=default fd02::12e 10.0.1.104 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301091233k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 2850 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 3748 Disabled Disabled 6257 k8s:io.cilium.k8s.policy.cluster=default fd02::1c9 10.0.1.93 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301091233k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer Stderr: ===================== Exiting AfterFailed ===================== 12:34:43 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 12:34:43 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 12:34:43 STEP: Deleting deployment demo_hostfw.yaml 12:34:43 STEP: Deleting namespace 202301091233k8sdatapathconfighostfirewallwithvxlan 12:34:58 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|b8d66ff8_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//3605/artifact/b8d66ff8_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//3605/artifact/test_results_Cilium-PR-K8s-1.16-kernel-4.9_3605_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9/3605/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
maintainer-s-little-helper[bot] commented 1 year ago

PR #18414 hit this flake with 98.61% similarity:

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
level=error
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "level=error" in logs 1 times Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 2 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: Interrupt received error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Cilium pods: [cilium-lq2dr cilium-vhq2r] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testclient-4zsf9 false false testclient-mhwxt false false testserver-fzxt8 false false testserver-smxpl false false coredns-8cfc78c54-58xtm false false Cilium agent 'cilium-lq2dr': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0 Cilium agent 'cilium-vhq2r': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 32 Failed 0 ```
### Standard Error
Click to show. ```stack-error 12:01:54 STEP: Installing Cilium 12:01:57 STEP: Waiting for Cilium to become ready 12:02:54 STEP: Validating if Kubernetes DNS is deployed 12:02:54 STEP: Checking if deployment is ready 12:02:54 STEP: Checking if kube-dns service is plumbed correctly 12:02:54 STEP: Checking if pods have identity 12:02:54 STEP: Checking if DNS can resolve 12:02:59 STEP: Kubernetes DNS is not ready: 5s timeout expired 12:02:59 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns) 12:03:00 STEP: Waiting for Kubernetes DNS to become operational 12:03:00 STEP: Checking if deployment is ready 12:03:00 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:03:01 STEP: Checking if deployment is ready 12:03:01 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:03:02 STEP: Checking if deployment is ready 12:03:02 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:03:03 STEP: Checking if deployment is ready 12:03:03 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-vhq2r: unable to find service backend 10.0.0.65:53 in datapath of cilium pod cilium-vhq2r 12:03:03 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:03:04 STEP: Checking if deployment is ready 12:03:04 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:03:05 STEP: Checking if deployment is ready 12:03:05 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:03:06 STEP: Checking if deployment is ready 12:03:06 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:03:07 STEP: Checking if deployment is ready 12:03:07 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:03:08 STEP: Checking if deployment is ready 12:03:08 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:03:09 STEP: Checking if deployment is ready 12:03:09 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:03:10 STEP: Checking if deployment is ready 12:03:10 STEP: Checking if kube-dns service is plumbed correctly 12:03:10 STEP: Checking if pods have identity 12:03:10 STEP: Checking if DNS can resolve 12:03:14 STEP: Validating Cilium Installation 12:03:14 STEP: Performing Cilium status preflight check 12:03:14 STEP: Performing Cilium health check 12:03:14 STEP: Performing Cilium controllers preflight check 12:03:14 STEP: Checking whether host EP regenerated 12:03:21 STEP: Performing Cilium service preflight check 12:03:21 STEP: Performing K8s service preflight check 12:03:21 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-lq2dr': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 12:03:21 STEP: Performing Cilium controllers preflight check 12:03:21 STEP: Performing Cilium health check 12:03:21 STEP: Checking whether host EP regenerated 12:03:21 STEP: Performing Cilium status preflight check 12:03:29 STEP: Performing Cilium service preflight check 12:03:29 STEP: Performing K8s service preflight check 12:03:35 STEP: Waiting for cilium-operator to be ready 12:03:35 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 12:03:35 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 12:03:35 
STEP: Making sure all endpoints are in ready state 12:03:38 STEP: Creating namespace 202301111203k8sdatapathconfighostfirewallwithvxlan 12:03:38 STEP: Deploying demo_hostfw.yaml in namespace 202301111203k8sdatapathconfighostfirewallwithvxlan 12:03:38 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 12:03:38 STEP: WaitforNPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="") 12:03:49 STEP: WaitforNPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="") => 12:03:49 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 12:04:04 STEP: Checking host policies on egress to remote node 12:04:04 STEP: Checking host policies on ingress from remote pod 12:04:04 STEP: Checking host policies on ingress from local pod 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 12:04:04 STEP: Checking host policies on egress to remote pod 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 12:04:04 STEP: Checking host policies on ingress from remote node 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:04:04 STEP: Checking host policies on egress to local pod 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:04:04 STEP: 
WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 12:04:04 STEP: WaitforPods(namespace="202301111203k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 2023-01-11T12:04:09Z==== 12:04:09 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: level=error ===================== TEST FAILED ===================== 12:04:20 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202301111203k8sdatapathconfighostfirewallwithvxlan testclient-4zsf9 1/1 Running 0 47s 10.0.0.103 k8s2 202301111203k8sdatapathconfighostfirewallwithvxlan testclient-host-5t24z 1/1 Running 0 47s 192.168.56.12 k8s2 202301111203k8sdatapathconfighostfirewallwithvxlan testclient-host-dr9km 1/1 Running 0 47s 192.168.56.11 k8s1 202301111203k8sdatapathconfighostfirewallwithvxlan testclient-mhwxt 1/1 Running 0 47s 10.0.1.201 k8s1 202301111203k8sdatapathconfighostfirewallwithvxlan testserver-fzxt8 2/2 Running 0 47s 10.0.0.1 k8s2 202301111203k8sdatapathconfighostfirewallwithvxlan testserver-host-lf9p9 2/2 Running 0 47s 192.168.56.11 k8s1 202301111203k8sdatapathconfighostfirewallwithvxlan testserver-host-szhvj 2/2 Running 0 47s 192.168.56.12 k8s2 202301111203k8sdatapathconfighostfirewallwithvxlan testserver-smxpl 2/2 Running 0 47s 10.0.1.18 k8s1 cilium-monitoring grafana-7fd557d749-rqrtd 0/1 Running 0 31m 10.0.0.135 k8s2 cilium-monitoring prometheus-d87f8f984-wmvkv 1/1 Running 0 31m 10.0.0.141 k8s2 kube-system cilium-lq2dr 1/1 Running 0 2m28s 192.168.56.11 k8s1 kube-system cilium-operator-84cd68447f-jn2cb 1/1 Running 0 2m28s 192.168.56.12 k8s2 kube-system cilium-operator-84cd68447f-rn96t 1/1 Running 0 2m28s 192.168.56.11 k8s1 kube-system cilium-vhq2r 1/1 Running 0 2m28s 192.168.56.12 k8s2 kube-system coredns-8cfc78c54-58xtm 1/1 Running 0 85s 10.0.0.142 k8s2 kube-system etcd-k8s1 1/1 Running 0 34m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 34m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 4 35m 192.168.56.11 k8s1 kube-system kube-proxy-nl486 1/1 Running 0 32m 192.168.56.12 k8s2 kube-system kube-proxy-qvq2q 1/1 Running 0 33m 192.168.56.11 k8s1 kube-system kube-scheduler-k8s1 1/1 Running 4 35m 192.168.56.11 k8s1 kube-system log-gatherer-cw6v7 1/1 Running 0 31m 192.168.56.11 k8s1 kube-system log-gatherer-lwvvd 1/1 Running 0 31m 192.168.56.12 k8s2 kube-system registry-adder-h4zw2 1/1 Running 0 32m 192.168.56.12 k8s2 kube-system registry-adder-sq2fh 1/1 Running 0 32m 192.168.56.11 k8s1 Stderr: Fetching command output from pods [cilium-lq2dr cilium-vhq2r] cmd: kubectl exec -n kube-system cilium-lq2dr -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", 
"cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.90 (v1.13.90-f02161e5) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.1.17, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 5040/65535 (7.69%), Flows/s: 36.79 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-01-11T12:03:28Z) Stderr: cmd: kubectl exec -n kube-system cilium-lq2dr -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 112 Disabled Disabled 22138 k8s:io.cilium.k8s.policy.cluster=default fd02::12e 10.0.1.18 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301111203k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 226 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 807 Disabled Disabled 29734 k8s:io.cilium.k8s.policy.cluster=default fd02::132 10.0.1.201 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301111203k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 2023 Disabled Disabled 4 reserved:health fd02::11b 10.0.1.194 ready Stderr: cmd: kubectl exec -n kube-system cilium-vhq2r -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.90 (v1.13.90-f02161e5) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 32/32 healthy Proxy Status: OK, ip 10.0.0.149, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2039/65535 (3.11%), Flows/s: 20.85 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-01-11T12:03:57Z) Stderr: cmd: kubectl exec -n kube-system cilium-vhq2r -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 82 Disabled Disabled 31190 k8s:io.cilium.k8s.policy.cluster=default fd02::64 10.0.0.142 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns 
k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 732 Disabled Disabled 4 reserved:health fd02::d2 10.0.0.70 ready 1042 Disabled Disabled 22138 k8s:io.cilium.k8s.policy.cluster=default fd02::d3 10.0.0.1 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301111203k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1373 Disabled Disabled 29734 k8s:io.cilium.k8s.policy.cluster=default fd02::65 10.0.0.103 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301111203k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 3745 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host Stderr: ===================== Exiting AfterFailed ===================== 12:04:33 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 12:04:33 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 12:04:33 STEP: Deleting deployment demo_hostfw.yaml 12:04:33 STEP: Deleting namespace 202301111203k8sdatapathconfighostfirewallwithvxlan 12:04:48 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|98f62a23_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//3645/artifact/98f62a23_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//3645/artifact/test_results_Cilium-PR-K8s-1.16-kernel-4.9_3645_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9/3645/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
maintainer-s-little-helper[bot] commented 1 year ago

PR #23276 hit this flake with 96.78% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace

Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.24-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
level=error
/home/jenkins/workspace/Cilium-PR-K8s-1.24-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "level=error" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 2 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Interrupt received Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs Cilium pods: [cilium-5hpzf cilium-lst6t] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testclient-gmn99 false false testclient-q6dbt false false testserver-xldqg false false testserver-zl2s5 false false grafana-b96dcb76b-95j6w false false prometheus-5c59d656f5-4n2z8 false false coredns-8c79ffd8b-ts7hh false false Cilium agent 'cilium-5hpzf': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0 Cilium agent 'cilium-lst6t': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 42 Failed 0 ```
### Standard Error
Click to show. ```stack-error 11:56:18 STEP: Installing Cilium 11:56:20 STEP: Waiting for Cilium to become ready 11:56:30 STEP: Validating if Kubernetes DNS is deployed 11:56:30 STEP: Checking if deployment is ready 11:56:30 STEP: Checking if kube-dns service is plumbed correctly 11:56:30 STEP: Checking if pods have identity 11:56:30 STEP: Checking if DNS can resolve 11:56:35 STEP: Kubernetes DNS is not ready: 5s timeout expired 11:56:35 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns) 11:56:35 STEP: Waiting for Kubernetes DNS to become operational 11:56:35 STEP: Checking if deployment is ready 11:56:35 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 11:56:36 STEP: Checking if deployment is ready 11:56:36 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 11:56:37 STEP: Checking if deployment is ready 11:56:37 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 11:56:38 STEP: Checking if deployment is ready 11:56:38 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 11:56:39 STEP: Checking if deployment is ready 11:56:39 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 11:56:40 STEP: Checking if deployment is ready 11:56:40 STEP: Checking if kube-dns service is plumbed correctly 11:56:40 STEP: Checking if DNS can resolve 11:56:40 STEP: Checking if pods have identity 11:56:45 STEP: Validating Cilium Installation 11:56:45 STEP: Performing Cilium controllers preflight check 11:56:45 STEP: Performing Cilium health check 11:56:45 STEP: Performing Cilium status preflight check 11:56:45 STEP: Checking whether host EP regenerated 11:56:52 STEP: Performing Cilium service preflight check 11:56:52 STEP: Performing K8s service preflight check 11:56:58 STEP: Waiting for cilium-operator to be ready 11:56:58 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 11:56:58 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 11:56:58 STEP: Making sure all endpoints are in ready state 11:57:01 STEP: Creating namespace 202301241157k8sdatapathconfighostfirewallwithvxlan 11:57:01 STEP: Deploying demo_hostfw.yaml in namespace 202301241157k8sdatapathconfighostfirewallwithvxlan 11:57:01 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 11:57:01 STEP: WaitforNPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="") 11:57:04 STEP: WaitforNPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="") => 11:57:04 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.24-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 11:57:28 STEP: Checking host policies on egress to remote node 11:57:28 STEP: Checking host policies on ingress from local pod 11:57:28 STEP: Checking host policies on egress to remote pod 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 11:57:28 STEP: Checking host policies on ingress from remote pod 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 11:57:28 STEP: Checking host policies on ingress from remote node 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 11:57:28 STEP: 
WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 11:57:28 STEP: Checking host policies on egress to local pod 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 11:57:28 STEP: WaitforPods(namespace="202301241157k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => === Test Finished at 2023-01-24T11:57:33Z==== 11:57:33 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: level=error ===================== TEST FAILED ===================== 11:58:33 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202301241157k8sdatapathconfighostfirewallwithvxlan testclient-gmn99 1/1 Running 0 96s 10.0.1.78 k8s2 202301241157k8sdatapathconfighostfirewallwithvxlan testclient-host-xznrk 1/1 Running 0 96s 192.168.56.11 k8s1 202301241157k8sdatapathconfighostfirewallwithvxlan testclient-host-zf4gd 1/1 Running 0 96s 192.168.56.12 k8s2 202301241157k8sdatapathconfighostfirewallwithvxlan testclient-q6dbt 1/1 Running 0 96s 10.0.0.251 k8s1 
202301241157k8sdatapathconfighostfirewallwithvxlan testserver-host-hwjq7 2/2 Running 0 96s 192.168.56.12 k8s2 202301241157k8sdatapathconfighostfirewallwithvxlan testserver-host-ndrvj 2/2 Running 0 96s 192.168.56.11 k8s1 202301241157k8sdatapathconfighostfirewallwithvxlan testserver-xldqg 2/2 Running 0 96s 10.0.1.126 k8s2 202301241157k8sdatapathconfighostfirewallwithvxlan testserver-zl2s5 2/2 Running 0 96s 10.0.0.202 k8s1 cilium-monitoring grafana-b96dcb76b-95j6w 1/1 Running 0 23m 10.0.0.102 k8s1 cilium-monitoring prometheus-5c59d656f5-4n2z8 1/1 Running 0 23m 10.0.0.86 k8s1 kube-system cilium-5hpzf 1/1 Running 0 2m17s 192.168.56.12 k8s2 kube-system cilium-lst6t 1/1 Running 0 2m17s 192.168.56.11 k8s1 kube-system cilium-operator-f48684dbb-8gzx7 1/1 Running 0 2m17s 192.168.56.11 k8s1 kube-system cilium-operator-f48684dbb-bhg5v 1/1 Running 1 (46s ago) 2m17s 192.168.56.12 k8s2 kube-system coredns-8c79ffd8b-ts7hh 1/1 Running 0 2m2s 10.0.0.200 k8s1 kube-system etcd-k8s1 1/1 Running 0 28m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 28m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 2 (23m ago) 28m 192.168.56.11 k8s1 kube-system kube-proxy-9xsc4 1/1 Running 0 24m 192.168.56.12 k8s2 kube-system kube-proxy-vtg6r 1/1 Running 0 27m 192.168.56.11 k8s1 kube-system kube-scheduler-k8s1 1/1 Running 2 (23m ago) 28m 192.168.56.11 k8s1 kube-system log-gatherer-9ktbd 1/1 Running 0 24m 192.168.56.11 k8s1 kube-system log-gatherer-jwpml 1/1 Running 0 24m 192.168.56.12 k8s2 kube-system registry-adder-67zbg 1/1 Running 0 24m 192.168.56.11 k8s1 kube-system registry-adder-xk2hv 1/1 Running 0 24m 192.168.56.12 k8s2 Stderr: Fetching command output from pods [cilium-5hpzf cilium-lst6t] cmd: kubectl exec -n kube-system cilium-5hpzf -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.24 (v1.24.4) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0-rc4 (v1.13.0-rc4-0dcb39fe) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.1.177, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2050/65535 (3.13%), Flows/s: 15.84 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-01-24T11:56:51Z) Stderr: cmd: kubectl exec -n kube-system cilium-5hpzf -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 1199 Disabled Disabled 65202 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202301241157k8sdatapathconfighostfirewallwithvxlan fd02::11e 10.0.1.126 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default 
k8s:io.kubernetes.pod.namespace=202301241157k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 2558 Disabled Disabled 20085 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202301241157k8sdatapathconfighostfirewallwithvxlan fd02::1b5 10.0.1.78 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301241157k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 2651 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 2954 Disabled Disabled 4 reserved:health fd02::11d 10.0.1.49 ready Stderr: cmd: kubectl exec -n kube-system cilium-lst6t -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.24 (v1.24.4) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0-rc4 (v1.13.0-rc4-0dcb39fe) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 7/254 allocated from 10.0.0.0/24, IPv6: 7/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 42/42 healthy Proxy Status: OK, ip 10.0.0.206, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 4802/65535 (7.33%), Flows/s: 37.68 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-01-24T11:56:58Z) Stderr: cmd: kubectl exec -n kube-system cilium-lst6t -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 48 Disabled Disabled 7199 k8s:app=grafana fd02::2b 10.0.0.102 ready k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring 284 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/control-plane k8s:node.kubernetes.io/exclude-from-external-load-balancers k8s:status=lockdown reserved:host 329 Disabled Disabled 30752 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system fd02::37 10.0.0.200 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 407 Disabled Disabled 4 reserved:health fd02::74 10.0.0.150 ready 504 Disabled Disabled 36713 k8s:app=prometheus fd02::e2 10.0.0.86 ready k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring 597 Disabled Disabled 65202 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202301241157k8sdatapathconfighostfirewallwithvxlan fd02::79 10.0.0.202 ready k8s:io.cilium.k8s.policy.cluster=default 
k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301241157k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 2159 Disabled Disabled 20085 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202301241157k8sdatapathconfighostfirewallwithvxlan fd02::6a 10.0.0.251 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301241157k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient Stderr: ===================== Exiting AfterFailed ===================== 11:58:45 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 11:58:45 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 11:58:45 STEP: Deleting deployment demo_hostfw.yaml 11:58:45 STEP: Deleting namespace 202301241157k8sdatapathconfighostfirewallwithvxlan 11:59:01 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|b3fbf938_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show. https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-4.9//21/artifact/b3fbf938_K8sDatapathConfig_Host_firewall_With_VXLAN.zip https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-4.9//21/artifact/test_results_Cilium-PR-K8s-1.24-kernel-4.9_21_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.24-kernel-4.9/21/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
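In the report above, one operator replica restarted shortly before the dump (cilium-operator-f48684dbb-bhg5v shows "Running 1 (46s ago)") and the matched errors reference the operator's resource lock. When triaging this kind of hit, the terminated container's logs usually show why it exited; a sketch, with the pod name copied from the `kubectl get pods` output above (substitute the one from your own run):

```bash
# Fetch the logs of the previous operator container instance; since the
# level=error lines were emitted before the restart, the
# "error retrieving resource lock ..." / "Interrupt received" messages
# typically live there rather than in the current container.
kubectl -n kube-system logs cilium-operator-f48684dbb-bhg5v --previous --timestamps
```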
maintainer-s-little-helper[bot] commented 1 year ago

PR #23329 hit this flake with 98.61% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace

Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
level=error
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "level=error" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 2 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Interrupt received Number of "context deadline exceeded" in logs: 4 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 5 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Unable to restore endpoint, ignoring Cilium pods: [cilium-v45jd cilium-v45lt] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testserver-w8pbm false false grafana-7fd557d749-52clv false false prometheus-d87f8f984-7fjmb false false coredns-8cfc78c54-kck2d false false testclient-cfwgl false false testclient-zwl6g false false testserver-ddrr9 false false Cilium agent 'cilium-v45jd': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 32 Failed 0 Cilium agent 'cilium-v45lt': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 38 Failed 0 ```
### Standard Error
Click to show. ```stack-error 12:31:31 STEP: Installing Cilium 12:31:33 STEP: Waiting for Cilium to become ready 12:32:18 STEP: Validating if Kubernetes DNS is deployed 12:32:18 STEP: Checking if deployment is ready 12:32:19 STEP: Checking if kube-dns service is plumbed correctly 12:32:19 STEP: Checking if DNS can resolve 12:32:19 STEP: Checking if pods have identity 12:32:24 STEP: Kubernetes DNS is not ready: 5s timeout expired 12:32:24 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns) 12:32:24 STEP: Waiting for Kubernetes DNS to become operational 12:32:24 STEP: Checking if deployment is ready 12:32:24 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:32:25 STEP: Checking if deployment is ready 12:32:25 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:32:26 STEP: Checking if deployment is ready 12:32:26 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:32:27 STEP: Checking if deployment is ready 12:32:27 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:32:28 STEP: Checking if deployment is ready 12:32:28 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:32:29 STEP: Checking if deployment is ready 12:32:29 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:32:30 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-v45lt: unable to find service backend 10.0.1.41:53 in datapath of cilium pod cilium-v45lt 12:32:30 STEP: Checking if deployment is ready 12:32:30 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:32:31 STEP: Checking if deployment is ready 12:32:31 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:32:32 STEP: Checking if deployment is ready 12:32:32 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:32:33 STEP: Checking if deployment is ready 12:32:33 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:32:34 STEP: Checking if deployment is ready 12:32:34 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:32:35 STEP: Checking if deployment is ready 12:32:35 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:32:36 STEP: Checking if deployment is ready 12:32:36 STEP: Checking if kube-dns service is plumbed correctly 12:32:36 STEP: Checking if DNS can resolve 12:32:36 STEP: Checking if pods have identity 12:32:40 STEP: Validating Cilium Installation 12:32:40 STEP: Performing Cilium status preflight check 12:32:40 STEP: Performing Cilium controllers preflight check 12:32:40 STEP: Checking whether host EP regenerated 12:32:40 STEP: Performing Cilium health check 12:32:48 STEP: Performing Cilium service preflight check 12:32:48 STEP: Performing K8s service preflight check 12:32:48 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-v45jd': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 12:32:48 STEP: Performing Cilium controllers preflight check 12:32:48 STEP: Performing Cilium status preflight check 12:32:48 STEP: Performing Cilium health check 12:32:48 STEP: Checking whether host EP regenerated 12:32:55 STEP: Performing Cilium service preflight check 12:32:55 STEP: Performing K8s 
service preflight check 12:32:55 STEP: Performing Cilium controllers preflight check 12:32:55 STEP: Performing Cilium status preflight check 12:32:55 STEP: Performing Cilium health check 12:32:55 STEP: Checking whether host EP regenerated 12:33:03 STEP: Performing Cilium service preflight check 12:33:03 STEP: Performing K8s service preflight check 12:33:09 STEP: Waiting for cilium-operator to be ready 12:33:09 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 12:33:09 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 12:33:09 STEP: Making sure all endpoints are in ready state 12:33:12 STEP: Creating namespace 202301251233k8sdatapathconfighostfirewallwithvxlan 12:33:12 STEP: Deploying demo_hostfw.yaml in namespace 202301251233k8sdatapathconfighostfirewallwithvxlan 12:33:12 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 12:33:12 STEP: WaitforNPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="") 12:33:19 STEP: WaitforNPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="") => 12:33:19 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 12:33:45 STEP: Checking host policies on egress to remote pod 12:33:45 STEP: Checking host policies on egress to remote node 12:33:45 STEP: Checking host policies on ingress from remote node 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:33:45 STEP: Checking host policies on ingress from local pod 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 12:33:45 STEP: Checking host policies on ingress from remote pod 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:33:45 STEP: Checking host policies on egress to local pod 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 
12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:33:45 STEP: WaitforPods(namespace="202301251233k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => === Test Finished at 2023-01-25T12:33:51Z==== 12:33:51 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: level=error ===================== TEST FAILED ===================== 12:33:51 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202301251233k8sdatapathconfighostfirewallwithvxlan testclient-cfwgl 1/1 Running 0 44s 10.0.1.218 k8s2 202301251233k8sdatapathconfighostfirewallwithvxlan testclient-host-fhfv6 1/1 Running 0 44s 192.168.56.12 k8s2 202301251233k8sdatapathconfighostfirewallwithvxlan testclient-host-q26w2 1/1 Running 0 44s 192.168.56.11 k8s1 202301251233k8sdatapathconfighostfirewallwithvxlan testclient-zwl6g 1/1 Running 0 44s 10.0.0.23 k8s1 202301251233k8sdatapathconfighostfirewallwithvxlan testserver-ddrr9 2/2 Running 0 44s 10.0.1.107 k8s2 202301251233k8sdatapathconfighostfirewallwithvxlan testserver-host-7ttkz 2/2 Running 0 44s 192.168.56.11 k8s1 202301251233k8sdatapathconfighostfirewallwithvxlan testserver-host-xshgl 2/2 Running 0 44s 192.168.56.12 k8s2 202301251233k8sdatapathconfighostfirewallwithvxlan testserver-w8pbm 2/2 Running 0 44s 10.0.0.161 k8s1 cilium-monitoring grafana-7fd557d749-52clv 1/1 Running 0 27m 10.0.0.237 k8s1 cilium-monitoring prometheus-d87f8f984-7fjmb 1/1 Running 0 27m 10.0.0.176 k8s1 kube-system cilium-operator-7cb9c87448-6nvjc 1/1 Running 0 2m22s 192.168.56.12 k8s2 kube-system cilium-operator-7cb9c87448-jkhbt 1/1 Running 0 2m22s 192.168.56.11 k8s1 kube-system cilium-v45jd 1/1 Running 0 2m23s 192.168.56.12 k8s2 kube-system cilium-v45lt 1/1 Running 0 2m23s 192.168.56.11 k8s1 kube-system coredns-8cfc78c54-kck2d 1/1 Running 0 92s 10.0.1.99 k8s2 kube-system etcd-k8s1 1/1 Running 0 31m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 31m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 4 32m 192.168.56.11 k8s1 kube-system kube-proxy-hwzkq 1/1 Running 0 30m 192.168.56.11 k8s1 kube-system kube-proxy-vsvdt 1/1 Running 0 28m 192.168.56.12 k8s2 kube-system kube-scheduler-k8s1 1/1 Running 4 32m 192.168.56.11 k8s1 kube-system log-gatherer-lwp6s 1/1 Running 0 27m 192.168.56.12 k8s2 kube-system log-gatherer-zbtpb 1/1 Running 0 27m 
192.168.56.11 k8s1 kube-system registry-adder-d2rcg 1/1 Running 0 28m 192.168.56.11 k8s1 kube-system registry-adder-xlpgb 1/1 Running 0 28m 192.168.56.12 k8s2 Stderr: Fetching command output from pods [cilium-v45jd cilium-v45lt] cmd: kubectl exec -n kube-system cilium-v45jd -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.90 (v1.13.90-3012f996) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 32/32 healthy Proxy Status: OK, ip 10.0.1.97, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 1991/65535 (3.04%), Flows/s: 15.85 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-01-25T12:33:54Z) Stderr: cmd: kubectl exec -n kube-system cilium-v45jd -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 642 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 1778 Disabled Disabled 4 reserved:health fd02::111 10.0.1.208 ready 3181 Disabled Disabled 18240 k8s:io.cilium.k8s.policy.cluster=default fd02::102 10.0.1.99 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 3585 Disabled Disabled 31551 k8s:io.cilium.k8s.policy.cluster=default fd02::146 10.0.1.218 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301251233k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 3962 Disabled Disabled 3840 k8s:io.cilium.k8s.policy.cluster=default fd02::1b4 10.0.1.107 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301251233k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer Stderr: cmd: kubectl exec -n kube-system cilium-v45lt -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.90 (v1.13.90-3012f996) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 6/254 allocated from 10.0.0.0/24, IPv6: 6/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: 
Enabled] Controller Status: 38/38 healthy Proxy Status: OK, ip 10.0.0.172, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 5928/65535 (9.05%), Flows/s: 58.75 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-01-25T12:33:09Z) Stderr: cmd: kubectl exec -n kube-system cilium-v45lt -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 39 Disabled Disabled 48331 k8s:app=grafana fd02::d8 10.0.0.237 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring 950 Disabled Disabled 4 reserved:health fd02::6f 10.0.0.201 ready 1837 Disabled Disabled 34802 k8s:app=prometheus fd02::6e 10.0.0.176 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring 1927 Disabled Disabled 3840 k8s:io.cilium.k8s.policy.cluster=default fd02::21 10.0.0.161 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301251233k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3286 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 3413 Disabled Disabled 31551 k8s:io.cilium.k8s.policy.cluster=default fd02::2f 10.0.0.23 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301251233k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient Stderr: ===================== Exiting AfterFailed ===================== 12:34:04 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 12:34:04 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 12:34:04 STEP: Deleting deployment demo_hostfw.yaml 12:34:04 STEP: Deleting namespace 202301251233k8sdatapathconfighostfirewallwithvxlan 12:34:20 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|6c4a2b64_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show. https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//3839/artifact/6c4a2b64_K8sDatapathConfig_Host_firewall_With_VXLAN.zip https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//3839/artifact/test_results_Cilium-PR-K8s-1.16-kernel-4.9_3839_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9/3839/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
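The repeated "Cilium is not ready yet: connectivity health is failing" retries in the report above come from the cilium-health probe socket (/var/run/cilium/health.sock) not existing yet inside the agent pod. When reproducing by hand, roughly the same probe can be issued directly from the agent container; a sketch, assuming the cilium-health CLI shipped in the agent image and using a pod name from this report (cilium-v45jd), so substitute your own:

```bash
# Trigger a synchronous connectivity probe against health.sock; while the
# socket is missing this fails with "connect: no such file or directory",
# matching the preflight error in the report above.
kubectl -n kube-system exec cilium-v45jd -c cilium-agent -- cilium-health status --probe

# The summarised result also shows up as the "Cluster health" line of:
kubectl -n kube-system exec cilium-v45jd -c cilium-agent -- cilium status
```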
maintainer-s-little-helper[bot] commented 1 year ago

PR #23467 hit this flake with 97.53% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace

Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-01-31T09:04:37.961951330Z level=error msg="Interrupt received" subsys=hive
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-01-31T09:04:37.961951330Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 2 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Interrupt received Number of "context deadline exceeded" in logs: 4 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 5 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Unable to restore endpoint, ignoring Cilium pods: [cilium-269kp cilium-jnn45] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testclient-gkh8x false false testclient-n2p4k false false testserver-9cb6t false false testserver-p7lzq false false grafana-7fd557d749-wfkph false false prometheus-d87f8f984-mtkl6 false false coredns-8cfc78c54-ctr2m false false Cilium agent 'cilium-269kp': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 33 Failed 0 Cilium agent 'cilium-jnn45': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 38 Failed 0 ```
### Standard Error
Click to show. ```stack-error 09:02:28 STEP: Installing Cilium 09:02:30 STEP: Waiting for Cilium to become ready 09:03:10 STEP: Validating if Kubernetes DNS is deployed 09:03:10 STEP: Checking if deployment is ready 09:03:10 STEP: Checking if kube-dns service is plumbed correctly 09:03:10 STEP: Checking if pods have identity 09:03:10 STEP: Checking if DNS can resolve 09:03:14 STEP: Kubernetes DNS is up and operational 09:03:14 STEP: Validating Cilium Installation 09:03:14 STEP: Performing Cilium controllers preflight check 09:03:14 STEP: Performing Cilium status preflight check 09:03:14 STEP: Performing Cilium health check 09:03:14 STEP: Checking whether host EP regenerated 09:03:21 STEP: Performing Cilium service preflight check 09:03:21 STEP: Performing K8s service preflight check 09:03:22 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-jnn45': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 09:03:22 STEP: Performing Cilium controllers preflight check 09:03:22 STEP: Performing Cilium health check 09:03:22 STEP: Checking whether host EP regenerated 09:03:22 STEP: Performing Cilium status preflight check 09:03:30 STEP: Performing Cilium service preflight check 09:03:30 STEP: Performing K8s service preflight check 09:03:31 STEP: Performing Cilium status preflight check 09:03:31 STEP: Performing Cilium health check 09:03:31 STEP: Checking whether host EP regenerated 09:03:31 STEP: Performing Cilium controllers preflight check 09:03:38 STEP: Performing Cilium service preflight check 09:03:38 STEP: Performing K8s service preflight check 09:03:39 STEP: Performing Cilium controllers preflight check 09:03:39 STEP: Performing Cilium health check 09:03:39 STEP: Performing Cilium status preflight check 09:03:39 STEP: Checking whether host EP regenerated 09:03:47 STEP: Performing Cilium service preflight check 09:03:47 STEP: Performing K8s service preflight check 09:03:53 STEP: Waiting for cilium-operator to be ready 09:03:53 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 09:03:53 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 09:03:53 STEP: Making sure all endpoints are in ready state 09:03:56 STEP: Creating namespace 202301310903k8sdatapathconfighostfirewallwithvxlan 09:03:56 STEP: Deploying demo_hostfw.yaml in namespace 202301310903k8sdatapathconfighostfirewallwithvxlan 09:03:56 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 09:03:56 STEP: WaitforNPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="") 09:04:08 STEP: WaitforNPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="") => 09:04:08 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 09:04:24 STEP: Checking host policies on egress to remote node 09:04:24 STEP: Checking host policies on egress to local pod 09:04:24 STEP: Checking host policies on egress to remote pod 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 09:04:24 STEP: Checking host policies on ingress from remote node 09:04:24 STEP: 
WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 09:04:24 STEP: Checking host policies on ingress from local pod 09:04:24 STEP: Checking host policies on ingress from remote pod 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 09:04:24 STEP: WaitforPods(namespace="202301310903k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 2023-01-31T09:04:30Z==== 09:04:30 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-01-31T09:04:37.961951330Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 09:04:39 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS 
AGE IP NODE NOMINATED NODE READINESS GATES 202301310903k8sdatapathconfighostfirewallwithvxlan testclient-gkh8x 1/1 Running 0 48s 10.0.0.95 k8s1 202301310903k8sdatapathconfighostfirewallwithvxlan testclient-host-5dt6f 1/1 Running 0 48s 192.168.56.11 k8s1 202301310903k8sdatapathconfighostfirewallwithvxlan testclient-host-qx2jn 1/1 Running 0 48s 192.168.56.12 k8s2 202301310903k8sdatapathconfighostfirewallwithvxlan testclient-n2p4k 1/1 Running 0 48s 10.0.1.231 k8s2 202301310903k8sdatapathconfighostfirewallwithvxlan testserver-9cb6t 2/2 Running 0 48s 10.0.0.176 k8s1 202301310903k8sdatapathconfighostfirewallwithvxlan testserver-host-qsf6w 2/2 Running 0 48s 192.168.56.11 k8s1 202301310903k8sdatapathconfighostfirewallwithvxlan testserver-host-s455x 2/2 Running 0 48s 192.168.56.12 k8s2 202301310903k8sdatapathconfighostfirewallwithvxlan testserver-p7lzq 2/2 Running 0 48s 10.0.1.81 k8s2 cilium-monitoring grafana-7fd557d749-wfkph 1/1 Running 0 33m 10.0.0.164 k8s1 cilium-monitoring prometheus-d87f8f984-mtkl6 1/1 Running 0 33m 10.0.0.34 k8s1 kube-system cilium-269kp 1/1 Running 0 2m14s 192.168.56.12 k8s2 kube-system cilium-jnn45 1/1 Running 0 2m14s 192.168.56.11 k8s1 kube-system cilium-operator-55bd9fb85b-7m6dz 1/1 Running 0 2m14s 192.168.56.12 k8s2 kube-system cilium-operator-55bd9fb85b-mjfrb 1/1 Running 0 2m14s 192.168.56.11 k8s1 kube-system coredns-8cfc78c54-ctr2m 1/1 Running 0 6m22s 10.0.1.16 k8s2 kube-system etcd-k8s1 1/1 Running 0 36m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 36m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 3 37m 192.168.56.11 k8s1 kube-system kube-proxy-c55qf 1/1 Running 0 35m 192.168.56.11 k8s1 kube-system kube-proxy-h6zgz 1/1 Running 0 33m 192.168.56.12 k8s2 kube-system kube-scheduler-k8s1 1/1 Running 3 37m 192.168.56.11 k8s1 kube-system log-gatherer-6hkfz 1/1 Running 0 33m 192.168.56.11 k8s1 kube-system log-gatherer-ghpjq 1/1 Running 0 33m 192.168.56.12 k8s2 kube-system registry-adder-dx58q 1/1 Running 0 33m 192.168.56.11 k8s1 kube-system registry-adder-nxzr6 1/1 Running 0 33m 192.168.56.12 k8s2 Stderr: Fetching command output from pods [cilium-269kp cilium-jnn45] cmd: kubectl exec -n kube-system cilium-269kp -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.90 (v1.13.90-8057002a) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 33/33 healthy Proxy Status: OK, ip 10.0.1.76, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2487/65535 (3.79%), Flows/s: 20.18 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-01-31T09:03:46Z) Stderr: cmd: kubectl exec -n kube-system cilium-269kp -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) 
POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 262 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 314 Disabled Disabled 4 reserved:health fd02::1f4 10.0.1.227 ready 3524 Disabled Disabled 2244 k8s:io.cilium.k8s.policy.cluster=default fd02::15c 10.0.1.81 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301310903k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3922 Disabled Disabled 28465 k8s:io.cilium.k8s.policy.cluster=default fd02::1da 10.0.1.16 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 4057 Disabled Disabled 3398 k8s:io.cilium.k8s.policy.cluster=default fd02::1a9 10.0.1.231 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301310903k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient Stderr: cmd: kubectl exec -n kube-system cilium-jnn45 -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.90 (v1.13.90-8057002a) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 6/254 allocated from 10.0.0.0/24, IPv6: 6/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 38/38 healthy Proxy Status: OK, ip 10.0.0.19, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 5893/65535 (8.99%), Flows/s: 48.46 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-01-31T09:03:53Z) Stderr: cmd: kubectl exec -n kube-system cilium-jnn45 -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 280 Disabled Disabled 3398 k8s:io.cilium.k8s.policy.cluster=default fd02::16 10.0.0.95 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301310903k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 1133 Disabled Disabled 2244 k8s:io.cilium.k8s.policy.cluster=default fd02::59 10.0.0.176 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202301310903k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1140 Disabled Disabled 39756 k8s:app=grafana fd02::a9 10.0.0.164 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring 1694 Disabled Disabled 29719 k8s:app=prometheus fd02::44 10.0.0.34 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring 2174 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 
3172 Disabled Disabled 4 reserved:health fd02::fb 10.0.0.186 ready Stderr: ===================== Exiting AfterFailed ===================== 09:04:52 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 09:04:52 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 09:04:52 STEP: Deleting deployment demo_hostfw.yaml 09:04:52 STEP: Deleting namespace 202301310903k8sdatapathconfighostfirewallwithvxlan 09:05:08 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|599face1_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show. https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//3928/artifact/599face1_K8sDatapathConfig_Host_firewall_With_VXLAN.zip https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//3928/artifact/test_results_Cilium-PR-K8s-1.16-kernel-4.9_3928_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9/3928/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
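The line matched here is a single "Interrupt received" from the operator's hive subsystem, suggesting one operator replica was being shut down during the test window. The report runs two operator replicas (cilium-operator-55bd9fb85b-7m6dz and -mjfrb), and `kubectl logs -l` interleaves their output (and by default only tails a few lines per pod), so it is easier to attribute the interrupt, and read what preceded it, by tailing each replica separately; a sketch, assuming kubectl access to the cluster:

```bash
# Tail each operator replica on its own so the "Interrupt received"
# line can be attributed to a specific pod and read in context.
for pod in $(kubectl -n kube-system get pods -l io.cilium/app=operator -o name); do
  echo "== ${pod}"
  kubectl -n kube-system logs "${pod}" --timestamps --tail=20
done
```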
maintainer-s-little-helper[bot] commented 1 year ago

PR #23612 hit this flake with 94.79% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace

Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:427
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-02-08T13:18:48.350938539Z level=error msg="Failed to update lock: Put \"https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock\": context deadline exceeded" subsys=klog
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:425
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-02-08T13:18:48.350938539Z level=error msg=\"Failed to update lock: Put \\\"https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock\\\": context deadline exceeded\" subsys=klog" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 1 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Failed to update lock: Put \ Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 0 ⚠️ Number of "level=warning" in logs: 8 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: Unable to restore endpoint, ignoring Key allocation attempt failed Cilium pods: [cilium-5d7tq cilium-jc558] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testclient-gxtp4 false false testclient-ww6jn false false testserver-nqhgt false false testserver-vfpfg false false grafana-5747bcc8f9-zm8gj false false prometheus-655fb888d7-5gxk7 false false coredns-69b675786c-m8wlm false false Cilium agent 'cilium-5d7tq': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 42 Failed 0 Cilium agent 'cilium-jc558': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0 ```
### Standard Error
Click to show. ```stack-error 13:16:52 STEP: Installing Cilium 13:16:54 STEP: Waiting for Cilium to become ready 13:17:47 STEP: Validating if Kubernetes DNS is deployed 13:17:47 STEP: Checking if deployment is ready 13:17:47 STEP: Checking if kube-dns service is plumbed correctly 13:17:47 STEP: Checking if DNS can resolve 13:17:47 STEP: Checking if pods have identity 13:17:52 STEP: Kubernetes DNS is not ready: 5s timeout expired 13:17:52 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns) 13:17:52 STEP: Waiting for Kubernetes DNS to become operational 13:17:52 STEP: Checking if deployment is ready 13:17:52 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 13:17:53 STEP: Checking if deployment is ready 13:17:53 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 13:17:54 STEP: Checking if deployment is ready 13:17:54 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 13:17:55 STEP: Checking if deployment is ready 13:17:55 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 13:17:56 STEP: Checking if deployment is ready 13:17:56 STEP: Checking if kube-dns service is plumbed correctly 13:17:56 STEP: Checking if pods have identity 13:17:56 STEP: Checking if DNS can resolve 13:18:00 STEP: Validating Cilium Installation 13:18:00 STEP: Performing Cilium controllers preflight check 13:18:00 STEP: Performing Cilium status preflight check 13:18:00 STEP: Performing Cilium health check 13:18:00 STEP: Checking whether host EP regenerated 13:18:07 STEP: Performing Cilium service preflight check 13:18:13 STEP: Cilium is not ready yet: cilium services are not set up correctly: Error validating Cilium service on pod {cilium-5d7tq [{0xc0000c1780 0xc0007b0080} {0xc0000c1940 0xc0007b0090} {0xc0000c1c00 0xc0007b00a0} {0xc0000c1d40 0xc0007b00b0} {0xc0004b20c0 0xc0007b00d8} {0xc0004b2240 0xc0007b00f0}] map[10.104.6.12:9090:[0.0.0.0:0 (1) (0) [ClusterIP, non-routable] 10.0.1.59:9090 (1) (1)] 10.105.33.6:443:[192.168.56.12:4244 (7) (1) 192.168.56.11:4244 (7) (2) 0.0.0.0:0 (7) (0) [ClusterIP, non-routable]] 10.108.19.201:3000:[10.0.1.100:3000 (6) (1) 0.0.0.0:0 (6) (0) [ClusterIP, non-routable]] 10.96.0.10:53:[0.0.0.0:0 (5) (0) [ClusterIP, non-routable] 10.0.0.90:53 (5) (2) 10.0.1.116:53 (5) (1)] 10.96.0.10:9153:[10.0.0.90:9153 (4) (2) 10.0.1.116:9153 (4) (1) 0.0.0.0:0 (4) (0) [ClusterIP, non-routable]] 10.96.0.1:443:[192.168.56.11:6443 (3) (1) 0.0.0.0:0 (3) (0) [ClusterIP, non-routable]]]}: Could not match cilium service backend address 10.0.0.90:9153 with k8s endpoint 13:18:13 STEP: Performing Cilium status preflight check 13:18:13 STEP: Performing Cilium health check 13:18:13 STEP: Checking whether host EP regenerated 13:18:13 STEP: Performing Cilium controllers preflight check 13:18:21 STEP: Performing Cilium service preflight check 13:18:21 STEP: Performing K8s service preflight check 13:18:27 STEP: Waiting for cilium-operator to be ready 13:18:27 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 13:18:27 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 13:18:27 STEP: Making sure all endpoints are in ready state 13:18:30 STEP: Creating namespace 202302081318k8sdatapathconfighostfirewallwithvxlan 13:18:30 STEP: Deploying demo_hostfw.yaml in namespace 202302081318k8sdatapathconfighostfirewallwithvxlan 13:18:30 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 13:18:30 STEP: 
WaitforNPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="") 13:18:34 STEP: WaitforNPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="") => 13:18:34 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 13:18:47 STEP: Checking host policies on ingress from local pod 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 13:18:47 STEP: Checking host policies on egress to local pod 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 13:18:47 STEP: Checking host policies on egress to remote node 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 13:18:47 STEP: Checking host policies on egress to remote pod 13:18:47 STEP: Checking host policies on ingress from remote pod 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 13:18:47 STEP: Checking host policies on ingress from remote node 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 13:18:47 STEP: 
WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 13:18:47 STEP: WaitforPods(namespace="202302081318k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => === Test Finished at 2023-02-08T13:18:53Z==== 13:18:53 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-02-08T13:18:48.350938539Z level=error msg="Failed to update lock: Put \"https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock\": context deadline exceeded" subsys=klog ===================== TEST FAILED ===================== 13:18:53 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202302081318k8sdatapathconfighostfirewallwithvxlan testclient-gxtp4 1/1 Running 0 28s 10.0.0.139 k8s1 202302081318k8sdatapathconfighostfirewallwithvxlan testclient-host-f76sq 1/1 Running 0 28s 192.168.56.11 k8s1 202302081318k8sdatapathconfighostfirewallwithvxlan testclient-host-lggzw 1/1 Running 0 28s 192.168.56.12 k8s2 202302081318k8sdatapathconfighostfirewallwithvxlan testclient-ww6jn 1/1 Running 0 28s 10.0.1.70 k8s2 202302081318k8sdatapathconfighostfirewallwithvxlan testserver-host-m2kh5 2/2 Running 0 28s 192.168.56.12 k8s2 202302081318k8sdatapathconfighostfirewallwithvxlan testserver-host-qcf4m 2/2 Running 0 28s 192.168.56.11 k8s1 202302081318k8sdatapathconfighostfirewallwithvxlan testserver-nqhgt 2/2 Running 0 28s 10.0.0.109 k8s1 202302081318k8sdatapathconfighostfirewallwithvxlan testserver-vfpfg 2/2 Running 0 28s 10.0.1.254 k8s2 cilium-monitoring grafana-5747bcc8f9-zm8gj 1/1 Running 0 15m 10.0.1.100 k8s2 cilium-monitoring prometheus-655fb888d7-5gxk7 1/1 Running 0 15m 10.0.1.59 k8s2 kube-system cilium-5d7tq 1/1 Running 0 2m4s 192.168.56.12 k8s2 kube-system cilium-jc558 1/1 Running 0 2m4s 192.168.56.11 k8s1 kube-system cilium-operator-5648d49877-58vls 1/1 Running 0 2m4s 192.168.56.12 k8s2 kube-system cilium-operator-5648d49877-6bfcb 1/1 Running 0 2m4s 192.168.56.11 k8s1 kube-system coredns-69b675786c-m8wlm 1/1 Running 0 66s 10.0.1.116 k8s2 kube-system etcd-k8s1 1/1 Running 0 20m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 20m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 2 20m 192.168.56.11 k8s1 kube-system kube-proxy-bkmbv 1/1 Running 0 19m 192.168.56.11 k8s1 kube-system kube-proxy-gg8wz 1/1 Running 0 16m 192.168.56.12 k8s2 kube-system kube-scheduler-k8s1 1/1 Running 2 20m 192.168.56.11 k8s1 kube-system log-gatherer-t8btq 1/1 Running 0 15m 192.168.56.12 k8s2 kube-system log-gatherer-w9qtz 1/1 Running 0 15m 192.168.56.11 k8s1 kube-system registry-adder-8cfbb 1/1 Running 0 16m 192.168.56.11 k8s1 kube-system registry-adder-fxv5q 1/1 Running 0 16m 192.168.56.12 k8s2 Stderr: Fetching command output from pods [cilium-5d7tq cilium-jc558] cmd: kubectl exec -n kube-system cilium-5d7tq -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.21 (v1.21.14) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", 
"networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none Cilium: Ok 1.12.6 (v1.12.6-15d0244) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 7/254 allocated from 10.0.1.0/24, IPv6: 7/254 allocated from fd02::100/120 BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 42/42 healthy Proxy Status: OK, ip 10.0.1.148, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 1885/65535 (2.88%), Flows/s: 17.23 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-02-08T13:18:20Z) Stderr: cmd: kubectl exec -n kube-system cilium-5d7tq -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 153 Disabled Disabled 44579 k8s:app=grafana fd02::19e 10.0.1.100 ready k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring 260 Disabled Disabled 4 reserved:health fd02::1a6 10.0.1.93 ready 271 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 696 Disabled Disabled 55573 k8s:app=prometheus fd02::1b4 10.0.1.59 ready k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring 1505 Disabled Disabled 29349 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system fd02::1fb 10.0.1.116 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 2007 Disabled Disabled 57607 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202302081318k8sdatapathconfighostfirewallwithvxlan fd02::1f9 10.0.1.70 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202302081318k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 2870 Disabled Disabled 15342 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202302081318k8sdatapathconfighostfirewallwithvxlan fd02::18e 10.0.1.254 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202302081318k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer Stderr: cmd: kubectl exec -n kube-system cilium-jc558 -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.21 (v1.21.14) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none Cilium: Ok 1.12.6 (v1.12.6-15d0244) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.0.0/24, 
IPv6: 4/254 allocated from fd02::/120 BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.0.129, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 4017/65535 (6.13%), Flows/s: 56.59 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-02-08T13:18:27Z) Stderr: cmd: kubectl exec -n kube-system cilium-jc558 -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 720 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/control-plane k8s:node-role.kubernetes.io/master k8s:node.kubernetes.io/exclude-from-external-load-balancers k8s:status=lockdown reserved:host 1899 Disabled Disabled 57607 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202302081318k8sdatapathconfighostfirewallwithvxlan fd02::4 10.0.0.139 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202302081318k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 2712 Disabled Disabled 15342 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202302081318k8sdatapathconfighostfirewallwithvxlan fd02::73 10.0.0.109 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202302081318k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3783 Disabled Disabled 4 reserved:health fd02::39 10.0.0.118 ready Stderr: ===================== Exiting AfterFailed ===================== 13:19:23 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 13:19:23 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 13:19:23 STEP: Deleting deployment demo_hostfw.yaml 13:19:23 STEP: Deleting namespace 202302081318k8sdatapathconfighostfirewallwithvxlan 13:19:39 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|c56060cd_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show. https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9//2378/artifact/19d1f02d_K8sIstioTest_Istio_Bookinfo_Demo_Tests_bookinfo_inter-service_connectivity.zip https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9//2378/artifact/c56060cd_K8sDatapathConfig_Host_firewall_With_VXLAN.zip https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9//2378/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.9_2378_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9/2378/ If this is a duplicate of an existing flake, comment 'Duplicate of #\' and close this issue.
maintainer-s-little-helper[bot] commented 1 year ago

PR #23636 hit this flake with 97.53% similarity:

Click to show. ### Test Name ```test-name K8sDatapathConfig Host firewall With VXLAN ``` ### Failure Output ```failure-output FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: ``` ### Stacktrace
Click to show. ```stack-trace /home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415 Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-02-08T19:52:13.617386247Z level=error msg="Interrupt received" subsys=hive /home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413 ```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-02-08T19:52:13.617386247Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 2 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Interrupt received Cilium pods: [cilium-n4dj5 cilium-sxjht] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testclient-6jj9x false false testclient-psclp false false testserver-gk2f9 false false testserver-m882z false false coredns-8cfc78c54-jbrt6 false false Cilium agent 'cilium-n4dj5': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0 Cilium agent 'cilium-sxjht': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 33 Failed 0 ```
### Standard Error
Click to show. ```stack-error 19:49:17 STEP: Installing Cilium 19:49:19 STEP: Waiting for Cilium to become ready 19:51:14 STEP: Validating if Kubernetes DNS is deployed 19:51:14 STEP: Checking if deployment is ready 19:51:14 STEP: Checking if kube-dns service is plumbed correctly 19:51:14 STEP: Checking if DNS can resolve 19:51:14 STEP: Checking if pods have identity 19:51:18 STEP: Kubernetes DNS is up and operational 19:51:18 STEP: Validating Cilium Installation 19:51:18 STEP: Performing Cilium controllers preflight check 19:51:18 STEP: Performing Cilium status preflight check 19:51:18 STEP: Performing Cilium health check 19:51:18 STEP: Checking whether host EP regenerated 19:51:25 STEP: Performing Cilium service preflight check 19:51:25 STEP: Performing K8s service preflight check 19:51:32 STEP: Waiting for cilium-operator to be ready 19:51:32 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 19:51:32 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 19:51:32 STEP: Making sure all endpoints are in ready state 19:51:34 STEP: Creating namespace 202302081951k8sdatapathconfighostfirewallwithvxlan 19:51:34 STEP: Deploying demo_hostfw.yaml in namespace 202302081951k8sdatapathconfighostfirewallwithvxlan 19:51:35 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 19:51:35 STEP: WaitforNPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="") 19:51:47 STEP: WaitforNPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="") => 19:51:47 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 19:52:03 STEP: Checking host policies on egress to remote node 19:52:03 STEP: Checking host policies on ingress from local pod 19:52:03 STEP: Checking host policies on ingress from remote node 19:52:03 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 19:52:03 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 19:52:03 STEP: Checking host policies on ingress from remote pod 19:52:03 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 19:52:03 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 19:52:03 STEP: Checking host policies on egress to remote pod 19:52:03 STEP: Checking host policies on egress to local pod 19:52:03 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 19:52:03 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 19:52:03 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 19:52:03 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 19:52:03 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 19:52:03 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 19:52:03 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 19:52:03 STEP: 
WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 19:52:04 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 19:52:04 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 19:52:04 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 19:52:04 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 19:52:04 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 19:52:04 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 19:52:04 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 19:52:04 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 19:52:04 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 19:52:04 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 19:52:04 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 19:52:04 STEP: WaitforPods(namespace="202302081951k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 2023-02-08T19:52:09Z==== 19:52:09 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-02-08T19:52:13.617386247Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 19:52:17 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202302081951k8sdatapathconfighostfirewallwithvxlan testclient-6jj9x 1/1 Running 0 47s 10.0.0.68 k8s1 202302081951k8sdatapathconfighostfirewallwithvxlan testclient-host-7md7p 1/1 Running 0 47s 192.168.56.11 k8s1 202302081951k8sdatapathconfighostfirewallwithvxlan testclient-host-k7tvj 1/1 Running 0 47s 192.168.56.12 k8s2 202302081951k8sdatapathconfighostfirewallwithvxlan testclient-psclp 1/1 Running 0 47s 10.0.1.153 k8s2 202302081951k8sdatapathconfighostfirewallwithvxlan testserver-gk2f9 2/2 Running 0 47s 10.0.1.103 k8s2 202302081951k8sdatapathconfighostfirewallwithvxlan testserver-host-2vg62 2/2 Running 0 47s 192.168.56.11 k8s1 202302081951k8sdatapathconfighostfirewallwithvxlan testserver-host-7ghtm 2/2 Running 0 47s 192.168.56.12 k8s2 202302081951k8sdatapathconfighostfirewallwithvxlan testserver-m882z 2/2 Running 0 47s 10.0.0.11 k8s1 cilium-monitoring grafana-7fd557d749-xrljx 0/1 Running 0 32m 10.0.1.151 k8s2 cilium-monitoring prometheus-d87f8f984-lvlmk 1/1 Running 0 32m 10.0.1.22 k8s2 kube-system cilium-n4dj5 1/1 Running 0 3m3s 192.168.56.11 k8s1 kube-system cilium-operator-54f68d49ff-2fjkx 1/1 Running 0 3m3s 192.168.56.12 k8s2 kube-system cilium-operator-54f68d49ff-g8kbm 1/1 Running 0 3m3s 192.168.56.11 k8s1 kube-system cilium-sxjht 1/1 Running 0 3m3s 192.168.56.12 k8s2 kube-system coredns-8cfc78c54-jbrt6 1/1 Running 0 
7m5s 10.0.1.54 k8s2 kube-system etcd-k8s1 1/1 Running 0 34m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 34m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 1 35m 192.168.56.11 k8s1 kube-system kube-proxy-mjht6 1/1 Running 0 35m 192.168.56.11 k8s1 kube-system kube-proxy-zjnr2 1/1 Running 0 33m 192.168.56.12 k8s2 kube-system kube-scheduler-k8s1 1/1 Running 2 35m 192.168.56.11 k8s1 kube-system log-gatherer-5z552 1/1 Running 0 32m 192.168.56.11 k8s1 kube-system log-gatherer-qdtd2 1/1 Running 0 32m 192.168.56.12 k8s2 kube-system registry-adder-9n4kq 1/1 Running 0 32m 192.168.56.12 k8s2 kube-system registry-adder-jpdzg 1/1 Running 0 32m 192.168.56.11 k8s1 Stderr: Fetching command output from pods [cilium-n4dj5 cilium-sxjht] cmd: kubectl exec -n kube-system cilium-n4dj5 -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.90 (v1.13.90-f256fad3) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.0.224, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 3050/65535 (4.65%), Flows/s: 37.62 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-02-08T19:52:05Z) Stderr: cmd: kubectl exec -n kube-system cilium-n4dj5 -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 533 Disabled Disabled 4 reserved:health fd02::52 10.0.0.141 ready 1221 Disabled Disabled 5376 k8s:io.cilium.k8s.policy.cluster=default fd02::58 10.0.0.68 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202302081951k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 2791 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 2983 Disabled Disabled 63470 k8s:io.cilium.k8s.policy.cluster=default fd02::11 10.0.0.11 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202302081951k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer Stderr: cmd: kubectl exec -n kube-system cilium-sxjht -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file 
management disabled Cilium: Ok 1.13.90 (v1.13.90-f256fad3) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 33/33 healthy Proxy Status: OK, ip 10.0.1.145, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2440/65535 (3.72%), Flows/s: 14.35 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-02-08T19:51:31Z) Stderr: cmd: kubectl exec -n kube-system cilium-sxjht -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 421 Disabled Disabled 5376 k8s:io.cilium.k8s.policy.cluster=default fd02::127 10.0.1.153 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202302081951k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 636 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 732 Disabled Disabled 63470 k8s:io.cilium.k8s.policy.cluster=default fd02::146 10.0.1.103 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202302081951k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3183 Disabled Disabled 1821 k8s:io.cilium.k8s.policy.cluster=default fd02::132 10.0.1.54 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 3603 Disabled Disabled 4 reserved:health fd02::1db 10.0.1.135 ready Stderr: ===================== Exiting AfterFailed ===================== 19:52:30 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 19:52:30 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 19:52:30 STEP: Deleting deployment demo_hostfw.yaml 19:52:30 STEP: Deleting namespace 202302081951k8sdatapathconfighostfirewallwithvxlan 19:52:45 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|41530157_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show. https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//4057/artifact/41530157_K8sDatapathConfig_Host_firewall_With_VXLAN.zip https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//4057/artifact/test_results_Cilium-PR-K8s-1.16-kernel-4.9_4057_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9/4057/ If this is a duplicate of an existing flake, comment 'Duplicate of #\' and close this issue.
maintainer-s-little-helper[bot] commented 1 year ago

PR #23576 hit this flake with 95.32% similarity:

Click to show. ### Test Name ```test-name K8sDatapathConfig Host firewall With VXLAN ``` ### Failure Output ```failure-output FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: ``` ### Stacktrace
Click to show. ```stack-trace /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415 Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-02-09T17:31:17.192119688Z level=error msg="Interrupt received" subsys=hive /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413 ```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-02-09T17:31:17.192119688Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 2 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Interrupt received Cilium pods: [cilium-bxrpp cilium-c429z] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testclient-mnz58 false false testclient-ps7b6 false false testserver-qb5w2 false false testserver-xmsmg false false coredns-69b675786c-bgfr8 false false Cilium agent 'cilium-bxrpp': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0 Cilium agent 'cilium-c429z': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 32 Failed 0 ```
### Standard Error
Click to show. ```stack-error 17:29:08 STEP: Installing Cilium 17:29:10 STEP: Waiting for Cilium to become ready 17:30:05 STEP: Validating if Kubernetes DNS is deployed 17:30:05 STEP: Checking if deployment is ready 17:30:05 STEP: Checking if kube-dns service is plumbed correctly 17:30:05 STEP: Checking if pods have identity 17:30:05 STEP: Checking if DNS can resolve 17:30:10 STEP: Kubernetes DNS is not ready: 5s timeout expired 17:30:10 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns) 17:30:11 STEP: Waiting for Kubernetes DNS to become operational 17:30:11 STEP: Checking if deployment is ready 17:30:11 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 17:30:12 STEP: Checking if deployment is ready 17:30:12 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 17:30:13 STEP: Checking if deployment is ready 17:30:13 STEP: Checking if kube-dns service is plumbed correctly 17:30:13 STEP: Checking if DNS can resolve 17:30:13 STEP: Checking if pods have identity 17:30:18 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-bxrpp: unable to find service backend 10.0.0.187:53 in datapath of cilium pod cilium-bxrpp 17:30:18 STEP: Kubernetes DNS is not ready yet: 5s timeout expired 17:30:18 STEP: Checking if deployment is ready 17:30:18 STEP: Checking if kube-dns service is plumbed correctly 17:30:18 STEP: Checking if DNS can resolve 17:30:18 STEP: Checking if pods have identity 17:30:22 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist 17:30:22 STEP: Checking if deployment is ready 17:30:22 STEP: Checking if kube-dns service is plumbed correctly 17:30:22 STEP: Checking if DNS can resolve 17:30:22 STEP: Checking if pods have identity 17:30:26 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist 17:30:26 STEP: Checking if deployment is ready 17:30:26 STEP: Checking if kube-dns service is plumbed correctly 17:30:26 STEP: Checking if DNS can resolve 17:30:26 STEP: Checking if pods have identity 17:30:29 STEP: Validating Cilium Installation 17:30:29 STEP: Performing Cilium controllers preflight check 17:30:29 STEP: Performing Cilium health check 17:30:29 STEP: Checking whether host EP regenerated 17:30:29 STEP: Performing Cilium status preflight check 17:30:37 STEP: Performing Cilium service preflight check 17:30:37 STEP: Performing K8s service preflight check 17:30:43 STEP: Waiting for cilium-operator to be ready 17:30:43 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 17:30:43 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 17:30:43 STEP: Making sure all endpoints are in ready state 17:30:46 STEP: Creating namespace 202302091730k8sdatapathconfighostfirewallwithvxlan 17:30:46 STEP: Deploying demo_hostfw.yaml in namespace 202302091730k8sdatapathconfighostfirewallwithvxlan 17:30:46 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 17:30:46 STEP: WaitforNPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="") 17:30:50 STEP: WaitforNPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="") => 17:30:50 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 17:31:06 STEP: Checking host policies on egress to remote node 17:31:06 STEP: Checking host policies on ingress from local pod 17:31:06 STEP: Checking host policies on egress to remote pod 17:31:06 STEP: Checking host policies 
on ingress from remote node 17:31:06 STEP: Checking host policies on ingress from remote pod 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 17:31:06 STEP: Checking host policies on egress to local pod 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 17:31:06 STEP: WaitforPods(namespace="202302091730k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 2023-02-09T17:31:12Z==== 17:31:12 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-02-09T17:31:17.192119688Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 17:31:21 STEP: 
Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202302091730k8sdatapathconfighostfirewallwithvxlan testclient-host-jk2mr 1/1 Running 0 39s 192.168.56.11 k8s1 202302091730k8sdatapathconfighostfirewallwithvxlan testclient-host-njf9m 1/1 Running 0 39s 192.168.56.12 k8s2 202302091730k8sdatapathconfighostfirewallwithvxlan testclient-mnz58 1/1 Running 0 39s 10.0.1.198 k8s2 202302091730k8sdatapathconfighostfirewallwithvxlan testclient-ps7b6 1/1 Running 0 39s 10.0.0.159 k8s1 202302091730k8sdatapathconfighostfirewallwithvxlan testserver-host-gsc66 2/2 Running 0 39s 192.168.56.11 k8s1 202302091730k8sdatapathconfighostfirewallwithvxlan testserver-host-nkt27 2/2 Running 0 39s 192.168.56.12 k8s2 202302091730k8sdatapathconfighostfirewallwithvxlan testserver-qb5w2 2/2 Running 0 39s 10.0.1.161 k8s2 202302091730k8sdatapathconfighostfirewallwithvxlan testserver-xmsmg 2/2 Running 0 39s 10.0.0.210 k8s1 cilium-monitoring grafana-5747bcc8f9-k9xcb 0/1 Running 0 61m 10.0.0.240 k8s2 cilium-monitoring prometheus-655fb888d7-hh8zv 1/1 Running 0 61m 10.0.0.9 k8s2 kube-system cilium-bxrpp 1/1 Running 0 2m15s 192.168.56.11 k8s1 kube-system cilium-c429z 1/1 Running 0 2m15s 192.168.56.12 k8s2 kube-system cilium-operator-59576dbddc-cmwpc 1/1 Running 0 2m15s 192.168.56.12 k8s2 kube-system cilium-operator-59576dbddc-pv5fp 1/1 Running 0 2m15s 192.168.56.11 k8s1 kube-system coredns-69b675786c-bgfr8 1/1 Running 0 74s 10.0.1.114 k8s2 kube-system etcd-k8s1 1/1 Running 0 65m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 65m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 4 65m 192.168.56.11 k8s1 kube-system kube-proxy-s7gn7 1/1 Running 0 62m 192.168.56.12 k8s2 kube-system kube-proxy-tgx5l 1/1 Running 0 65m 192.168.56.11 k8s1 kube-system kube-scheduler-k8s1 1/1 Running 4 65m 192.168.56.11 k8s1 kube-system log-gatherer-jbllt 1/1 Running 0 61m 192.168.56.12 k8s2 kube-system log-gatherer-w2q29 1/1 Running 0 61m 192.168.56.11 k8s1 kube-system registry-adder-4zptv 1/1 Running 0 62m 192.168.56.12 k8s2 kube-system registry-adder-frf66 1/1 Running 0 62m 192.168.56.11 k8s1 Stderr: Fetching command output from pods [cilium-bxrpp cilium-c429z] cmd: kubectl exec -n kube-system cilium-bxrpp -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.21 (v1.21.14) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0-rc5 (v1.13.0-rc5-8bb00683) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.0.218, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 3611/65535 (5.51%), Flows/s: 44.55 Metrics: Disabled 
Encryption: Disabled Cluster health: 2/2 reachable (2023-02-09T17:31:12Z) Stderr: cmd: kubectl exec -n kube-system cilium-bxrpp -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 240 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/control-plane k8s:node-role.kubernetes.io/master k8s:node.kubernetes.io/exclude-from-external-load-balancers k8s:status=lockdown reserved:host 636 Disabled Disabled 14836 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202302091730k8sdatapathconfighostfirewallwithvxlan fd02::a4 10.0.0.210 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202302091730k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1906 Disabled Disabled 60779 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202302091730k8sdatapathconfighostfirewallwithvxlan fd02::8d 10.0.0.159 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202302091730k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 2157 Disabled Disabled 4 reserved:health fd02::2e 10.0.0.215 ready Stderr: cmd: kubectl exec -n kube-system cilium-c429z -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.21 (v1.21.14) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0-rc5 (v1.13.0-rc5-8bb00683) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 32/32 healthy Proxy Status: OK, ip 10.0.1.163, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2108/65535 (3.22%), Flows/s: 16.21 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-02-09T17:30:43Z) Stderr: cmd: kubectl exec -n kube-system cilium-c429z -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 455 Disabled Disabled 60779 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202302091730k8sdatapathconfighostfirewallwithvxlan fd02::1fa 10.0.1.198 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202302091730k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 728 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 1387 Disabled Disabled 14836 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202302091730k8sdatapathconfighostfirewallwithvxlan fd02::117 10.0.1.161 ready 
k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202302091730k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1404 Disabled Disabled 52551 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system fd02::136 10.0.1.114 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 3079 Disabled Disabled 4 reserved:health fd02::11d 10.0.1.162 ready Stderr: ===================== Exiting AfterFailed ===================== 17:31:33 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 17:31:33 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 17:31:33 STEP: Deleting deployment demo_hostfw.yaml 17:31:33 STEP: Deleting namespace 202302091730k8sdatapathconfighostfirewallwithvxlan 17:31:48 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|41c08e00_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show. https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9//2387/artifact/41c08e00_K8sDatapathConfig_Host_firewall_With_VXLAN.zip https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9//2387/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.9_2387_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9/2387/ If this is a duplicate of an existing flake, comment 'Duplicate of #\' and close this issue.
maintainer-s-little-helper[bot] commented 1 year ago

PR #23956 hit this flake with 95.32% similarity:

Click to show. ### Test Name ```test-name K8sDatapathConfig Host firewall With VXLAN ``` ### Failure Output ```failure-output FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: ``` ### Stacktrace
Click to show. ```stack-trace /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415 Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-02-24T00:46:36.302117514Z level=error msg="Interrupt received" subsys=hive /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413 ```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-02-24T00:46:36.302117514Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 2 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Interrupt received Number of "context deadline exceeded" in logs: 4 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 5 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Unable to restore endpoint, ignoring Cilium pods: [cilium-lbdtw cilium-vl648] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testclient-kb2fn false false testserver-4z9cz false false testserver-d9wcp false false grafana-698dc95f6c-sn9fp false false prometheus-669755c8c5-g4pfv false false coredns-69b675786c-r7k2x false false testclient-k6flp false false Cilium agent 'cilium-lbdtw': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 42 Failed 0 Cilium agent 'cilium-vl648': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0 ```
### Standard Error
Click to show. ```stack-error 00:44:31 STEP: Installing Cilium 00:44:33 STEP: Waiting for Cilium to become ready 00:44:53 STEP: Validating if Kubernetes DNS is deployed 00:44:53 STEP: Checking if deployment is ready 00:44:53 STEP: Checking if kube-dns service is plumbed correctly 00:44:53 STEP: Checking if pods have identity 00:44:53 STEP: Checking if DNS can resolve 00:44:58 STEP: Kubernetes DNS is not ready: 5s timeout expired 00:44:58 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns) 00:44:59 STEP: Waiting for Kubernetes DNS to become operational 00:44:59 STEP: Checking if deployment is ready 00:44:59 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 00:45:00 STEP: Checking if deployment is ready 00:45:00 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 00:45:01 STEP: Checking if deployment is ready 00:45:01 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 00:45:02 STEP: Checking if deployment is ready 00:45:02 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 00:45:03 STEP: Checking if deployment is ready 00:45:03 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 00:45:04 STEP: Checking if deployment is ready 00:45:04 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 00:45:05 STEP: Checking if deployment is ready 00:45:05 STEP: Checking if kube-dns service is plumbed correctly 00:45:05 STEP: Checking if pods have identity 00:45:05 STEP: Checking if DNS can resolve 00:45:06 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-lbdtw: unable to find service backend 10.0.1.160:53 in datapath of cilium pod cilium-lbdtw 00:45:09 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist 00:45:09 STEP: Checking if deployment is ready 00:45:09 STEP: Checking if kube-dns service is plumbed correctly 00:45:09 STEP: Checking if pods have identity 00:45:09 STEP: Checking if DNS can resolve 00:45:12 STEP: Validating Cilium Installation 00:45:12 STEP: Performing Cilium controllers preflight check 00:45:12 STEP: Performing Cilium status preflight check 00:45:12 STEP: Performing Cilium health check 00:45:12 STEP: Checking whether host EP regenerated 00:45:20 STEP: Performing Cilium service preflight check 00:45:20 STEP: Performing K8s service preflight check 00:45:21 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-vl648': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init) Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 00:45:21 STEP: Performing Cilium controllers preflight check 00:45:21 STEP: Performing Cilium health check 00:45:21 STEP: Performing Cilium status preflight check 00:45:21 STEP: Checking whether host EP regenerated 00:45:28 STEP: Performing Cilium service preflight check 00:45:28 STEP: Performing K8s service preflight check 00:45:29 STEP: Performing Cilium controllers preflight check 00:45:29 STEP: Performing Cilium health check 00:45:29 STEP: Checking whether host EP regenerated 00:45:29 STEP: Performing Cilium status preflight check 00:45:37 STEP: Performing Cilium service preflight check 00:45:37 STEP: Performing K8s 
service preflight check 00:45:38 STEP: Performing Cilium controllers preflight check 00:45:38 STEP: Performing Cilium status preflight check 00:45:38 STEP: Performing Cilium health check 00:45:38 STEP: Checking whether host EP regenerated 00:45:45 STEP: Performing Cilium service preflight check 00:45:45 STEP: Performing K8s service preflight check 00:45:46 STEP: Performing Cilium controllers preflight check 00:45:46 STEP: Performing Cilium status preflight check 00:45:46 STEP: Performing Cilium health check 00:45:46 STEP: Checking whether host EP regenerated 00:45:54 STEP: Performing Cilium service preflight check 00:45:54 STEP: Performing K8s service preflight check 00:46:00 STEP: Waiting for cilium-operator to be ready 00:46:00 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 00:46:00 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 00:46:00 STEP: Making sure all endpoints are in ready state 00:46:03 STEP: Creating namespace 202302240046k8sdatapathconfighostfirewallwithvxlan 00:46:03 STEP: Deploying demo_hostfw.yaml in namespace 202302240046k8sdatapathconfighostfirewallwithvxlan 00:46:03 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 00:46:03 STEP: WaitforNPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="") 00:46:08 STEP: WaitforNPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="") => 00:46:08 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 00:46:32 STEP: Checking host policies on egress to remote node 00:46:32 STEP: Checking host policies on ingress from local pod 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 00:46:32 STEP: Checking host policies on ingress from remote node 00:46:32 STEP: Checking host policies on egress to remote pod 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 00:46:32 STEP: Checking host policies on ingress from remote pod 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 00:46:32 STEP: Checking host policies on egress to local pod 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 00:46:32 STEP: 
WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 00:46:32 STEP: WaitforPods(namespace="202302240046k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 2023-02-24T00:46:37Z==== 00:46:37 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-02-24T00:46:36.302117514Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 00:46:38 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202302240046k8sdatapathconfighostfirewallwithvxlan testclient-host-hq4vz 1/1 Running 0 39s 192.168.56.11 k8s1 202302240046k8sdatapathconfighostfirewallwithvxlan testclient-host-mj62m 1/1 Running 0 39s 192.168.56.12 k8s2 202302240046k8sdatapathconfighostfirewallwithvxlan testclient-k6flp 1/1 Running 0 39s 10.0.1.132 k8s1 202302240046k8sdatapathconfighostfirewallwithvxlan testclient-kb2fn 1/1 Running 0 39s 10.0.0.70 k8s2 202302240046k8sdatapathconfighostfirewallwithvxlan testserver-4z9cz 2/2 Running 0 39s 10.0.0.22 k8s2 202302240046k8sdatapathconfighostfirewallwithvxlan testserver-d9wcp 2/2 Running 0 39s 10.0.1.247 k8s1 202302240046k8sdatapathconfighostfirewallwithvxlan testserver-host-s8lpn 2/2 Running 0 39s 192.168.56.12 k8s2 202302240046k8sdatapathconfighostfirewallwithvxlan testserver-host-x5pms 2/2 Running 0 39s 192.168.56.11 k8s1 cilium-monitoring grafana-698dc95f6c-sn9fp 1/1 Running 0 40m 10.0.0.206 k8s2 cilium-monitoring prometheus-669755c8c5-g4pfv 1/1 Running 0 40m 10.0.0.220 k8s2 kube-system cilium-lbdtw 1/1 Running 0 2m9s 192.168.56.12 k8s2 kube-system cilium-operator-689587d795-4ss2r 1/1 Running 0 2m9s 192.168.56.11 k8s1 kube-system cilium-operator-689587d795-t69pd 1/1 Running 0 2m9s 192.168.56.12 k8s2 kube-system cilium-vl648 1/1 Running 0 2m9s 192.168.56.11 k8s1 kube-system coredns-69b675786c-r7k2x 1/1 Running 0 103s 10.0.0.186 k8s2 kube-system etcd-k8s1 1/1 Running 0 44m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 
44m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 2 44m 192.168.56.11 k8s1 kube-system kube-proxy-ldzc5 1/1 Running 0 44m 192.168.56.11 k8s1 kube-system kube-proxy-ts5r2 1/1 Running 0 40m 192.168.56.12 k8s2 kube-system kube-scheduler-k8s1 1/1 Running 2 44m 192.168.56.11 k8s1 kube-system log-gatherer-gffmg 1/1 Running 0 40m 192.168.56.12 k8s2 kube-system log-gatherer-tpkl8 1/1 Running 0 40m 192.168.56.11 k8s1 kube-system registry-adder-6hsb7 1/1 Running 0 40m 192.168.56.11 k8s1 kube-system registry-adder-vmkp9 1/1 Running 0 40m 192.168.56.12 k8s2 Stderr: Fetching command output from pods [cilium-lbdtw cilium-vl648] cmd: kubectl exec -n kube-system cilium-lbdtw -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.21 (v1.21.14) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-0c4012ac) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 7/254 allocated from 10.0.0.0/24, IPv6: 7/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 42/42 healthy Proxy Status: OK, ip 10.0.0.153, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 3364/65535 (5.13%), Flows/s: 29.92 Metrics: Disabled Encryption: Disabled Cluster health: 1/2 reachable (2023-02-24T00:45:58Z) Name IP Node Endpoints k8s2 (localhost) 192.168.56.12 unknown unreachable Stderr: cmd: kubectl exec -n kube-system cilium-lbdtw -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 113 Disabled Disabled 62395 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system fd02::f3 10.0.0.186 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 1494 Disabled Disabled 42140 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202302240046k8sdatapathconfighostfirewallwithvxlan fd02::d2 10.0.0.22 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202302240046k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1817 Disabled Disabled 4 reserved:health fd02::ac 10.0.0.189 ready 2328 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 2692 Disabled Disabled 8850 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202302240046k8sdatapathconfighostfirewallwithvxlan fd02::b5 10.0.0.70 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202302240046k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 2798 Disabled Disabled 29727 k8s:app=grafana fd02::c0 10.0.0.206 ready 
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring 3347 Disabled Disabled 4909 k8s:app=prometheus fd02::2b 10.0.0.220 ready k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring Stderr: cmd: kubectl exec -n kube-system cilium-vl648 -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.21 (v1.21.14) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-0c4012ac) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.1.87, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 7607/65535 (11.61%), Flows/s: 65.55 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-02-24T00:46:00Z) Stderr: cmd: kubectl exec -n kube-system cilium-vl648 -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 718 Disabled Disabled 8850 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202302240046k8sdatapathconfighostfirewallwithvxlan fd02::138 10.0.1.132 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202302240046k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 868 Disabled Disabled 42140 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202302240046k8sdatapathconfighostfirewallwithvxlan fd02::14e 10.0.1.247 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202302240046k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1380 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/control-plane k8s:node-role.kubernetes.io/master k8s:node.kubernetes.io/exclude-from-external-load-balancers k8s:status=lockdown reserved:host 2676 Disabled Disabled 4 reserved:health fd02::137 10.0.1.28 ready Stderr: ===================== Exiting AfterFailed ===================== 00:47:45 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 00:47:45 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 00:47:45 STEP: Deleting deployment demo_hostfw.yaml 00:47:45 STEP: Deleting namespace 202302240046k8sdatapathconfighostfirewallwithvxlan 00:48:01 STEP: Running AfterEach for block EntireTestsuite 
[[ATTACHMENT|d43c2403_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
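The repeated preflight failure above (`Cannot get status/probe: ... /var/run/cilium/health.sock: connect: no such file or directory`) indicates the `cilium-health` daemon inside the agent pod had not created its socket yet while the harness was polling. A minimal manual spot-check along the same lines (pod name taken from this run; these commands are a sketch, not part of the CI harness):

```sh
# Does the health daemon's socket exist yet inside the agent container?
# The path is the one the failing probe in the log tries to reach.
kubectl -n kube-system exec cilium-vl648 -c cilium-agent -- \
  ls -l /var/run/cilium/health.sock

# Once the socket is there, query the health daemon directly.
kubectl -n kube-system exec cilium-vl648 -c cilium-agent -- cilium-health status
```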
ZIP Links:
Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9//2430/artifact/d43c2403_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9//2430/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.9_2430_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9/2430/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
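For triage: the assertion that failed is the harness scanning the `io.cilium/app=operator` logs and finding one `level=error` line (`msg="Interrupt received" subsys=hive`, i.e. the operator being told to shut down). A rough hand-run equivalent against a live cluster, offered only as a sketch (the label selector is taken from the failure output above):

```sh
# Pull logs from both cilium-operator replicas, prefix each line with the pod
# name, and filter for the error/warning lines the CI harness flags.
kubectl -n kube-system logs -l io.cilium/app=operator --timestamps --prefix --tail=-1 \
  | grep -E 'level=(error|warning)'
```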
maintainer-s-little-helper[bot] commented 1 year ago

PR #24086 hit this flake with 95.32% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-03-02T14:59:41.088414880Z level=error msg="Interrupt received" subsys=hive
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show.

```stack-output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
⚠️ Found "2023-03-02T14:59:41.088414880Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times
Number of "context deadline exceeded" in logs: 2
Number of "level=error" in logs: 2
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \
Interrupt received
Number of "context deadline exceeded" in logs: 4
Number of "level=error" in logs: 0
⚠️ Number of "level=warning" in logs: 6
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to restore endpoint, ignoring
Key allocation attempt failed
Cilium pods: [cilium-mm4ml cilium-nj8z4]
Netpols loaded:
CiliumNetworkPolicies loaded:
Endpoint Policy Enforcement:
Pod                        Ingress   Egress
testserver-wkc6n           false     false
coredns-85fbf8f7dd-w9kdd   false     false
testclient-5d2mx           false     false
testclient-8q5cw           false     false
testserver-lb48k           false     false
Cilium agent 'cilium-mm4ml': Status: Ok  Health: Ok  Nodes ""  ContainerRuntime:  Kubernetes: Ok  KVstore: Ok  Controllers: Total 28 Failed 0
Cilium agent 'cilium-nj8z4': Status: Ok  Health: Ok  Nodes ""  ContainerRuntime:  Kubernetes: Ok  KVstore: Ok  Controllers: Total 33 Failed 0
```
### Standard Error
Click to show. ```stack-error 14:57:40 STEP: Installing Cilium 14:57:42 STEP: Waiting for Cilium to become ready 14:58:01 STEP: Validating if Kubernetes DNS is deployed 14:58:01 STEP: Checking if deployment is ready 14:58:01 STEP: Checking if kube-dns service is plumbed correctly 14:58:01 STEP: Checking if DNS can resolve 14:58:01 STEP: Checking if pods have identity 14:58:05 STEP: Kubernetes DNS is up and operational 14:58:05 STEP: Validating Cilium Installation 14:58:05 STEP: Performing Cilium controllers preflight check 14:58:05 STEP: Performing Cilium status preflight check 14:58:05 STEP: Performing Cilium health check 14:58:05 STEP: Checking whether host EP regenerated 14:58:19 STEP: Performing Cilium service preflight check 14:58:19 STEP: Performing K8s service preflight check 14:58:19 STEP: Cilium is not ready yet: host EP is not ready: cilium-agent "cilium-mm4ml" host EP is not in ready state: "regenerating" 14:58:19 STEP: Performing Cilium controllers preflight check 14:58:19 STEP: Performing Cilium health check 14:58:19 STEP: Performing Cilium status preflight check 14:58:19 STEP: Checking whether host EP regenerated 14:58:26 STEP: Performing Cilium service preflight check 14:58:26 STEP: Performing K8s service preflight check 14:58:27 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-nj8z4': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init) Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 14:58:27 STEP: Performing Cilium controllers preflight check 14:58:27 STEP: Performing Cilium health check 14:58:27 STEP: Checking whether host EP regenerated 14:58:27 STEP: Performing Cilium status preflight check 14:58:35 STEP: Performing Cilium service preflight check 14:58:35 STEP: Performing K8s service preflight check 14:58:36 STEP: Performing Cilium controllers preflight check 14:58:36 STEP: Performing Cilium health check 14:58:36 STEP: Performing Cilium status preflight check 14:58:36 STEP: Checking whether host EP regenerated 14:58:43 STEP: Performing Cilium service preflight check 14:58:43 STEP: Performing K8s service preflight check 14:58:44 STEP: Performing Cilium status preflight check 14:58:44 STEP: Performing Cilium health check 14:58:44 STEP: Checking whether host EP regenerated 14:58:44 STEP: Performing Cilium controllers preflight check 14:58:52 STEP: Performing Cilium service preflight check 14:58:52 STEP: Performing K8s service preflight check 14:58:53 STEP: Performing Cilium status preflight check 14:58:53 STEP: Performing Cilium health check 14:58:53 STEP: Checking whether host EP regenerated 14:58:53 STEP: Performing Cilium controllers preflight check 14:59:00 STEP: Performing Cilium service preflight check 14:59:00 STEP: Performing K8s service preflight check 14:59:06 STEP: Waiting for cilium-operator to be ready 14:59:06 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 14:59:06 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 14:59:06 STEP: Making sure all endpoints are in ready state 14:59:09 STEP: Creating namespace 202303021459k8sdatapathconfighostfirewallwithvxlan 14:59:09 STEP: Deploying demo_hostfw.yaml in namespace 
202303021459k8sdatapathconfighostfirewallwithvxlan 14:59:09 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 14:59:09 STEP: WaitforNPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="") 14:59:13 STEP: WaitforNPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="") => 14:59:13 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 14:59:38 STEP: Checking host policies on egress to remote node 14:59:38 STEP: Checking host policies on egress to local pod 14:59:38 STEP: Checking host policies on egress to remote pod 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:59:38 STEP: Checking host policies on ingress from remote node 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:59:38 STEP: Checking host policies on ingress from local pod 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:59:38 STEP: Checking host policies on ingress from remote pod 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:59:38 STEP: 
WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:59:38 STEP: WaitforPods(namespace="202303021459k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => === Test Finished at 2023-03-02T14:59:44Z==== 14:59:44 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-02T14:59:41.088414880Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 14:59:44 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202303021459k8sdatapathconfighostfirewallwithvxlan testclient-5d2mx 1/1 Running 0 40s 10.0.0.60 k8s1 202303021459k8sdatapathconfighostfirewallwithvxlan testclient-8q5cw 1/1 Running 0 40s 10.0.1.158 k8s2 202303021459k8sdatapathconfighostfirewallwithvxlan testclient-host-s5qx4 1/1 Running 0 40s 192.168.56.12 k8s2 202303021459k8sdatapathconfighostfirewallwithvxlan testclient-host-wxxcj 1/1 Running 0 40s 192.168.56.11 k8s1 202303021459k8sdatapathconfighostfirewallwithvxlan testserver-host-6w9vx 2/2 Running 0 40s 192.168.56.11 k8s1 202303021459k8sdatapathconfighostfirewallwithvxlan testserver-host-grxjm 2/2 Running 0 40s 192.168.56.12 k8s2 202303021459k8sdatapathconfighostfirewallwithvxlan testserver-lb48k 2/2 Running 0 40s 10.0.0.120 k8s1 202303021459k8sdatapathconfighostfirewallwithvxlan testserver-wkc6n 2/2 Running 0 40s 10.0.1.219 k8s2 cilium-monitoring grafana-698dc95f6c-w8sbp 0/1 Running 0 59m 10.0.0.6 k8s2 cilium-monitoring prometheus-669755c8c5-bvq2h 1/1 Running 0 59m 10.0.0.31 k8s2 kube-system cilium-mm4ml 1/1 Running 0 2m7s 192.168.56.12 k8s2 kube-system cilium-nj8z4 1/1 Running 0 2m7s 192.168.56.11 k8s1 kube-system cilium-operator-b55bfcd9-6bwlv 1/1 Running 1 2m7s 192.168.56.12 k8s2 kube-system cilium-operator-b55bfcd9-7fkld 1/1 Running 0 2m7s 192.168.56.11 k8s1 kube-system coredns-85fbf8f7dd-w9kdd 1/1 Running 0 5m29s 10.0.0.165 k8s1 kube-system etcd-k8s1 1/1 Running 0 63m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 63m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 4 63m 192.168.56.11 k8s1 kube-system kube-proxy-fvdwz 1/1 Running 0 63m 192.168.56.11 k8s1 kube-system kube-proxy-fxk6n 1/1 Running 0 60m 192.168.56.12 k8s2 kube-system kube-scheduler-k8s1 1/1 Running 4 63m 192.168.56.11 k8s1 kube-system log-gatherer-7tlbr 1/1 Running 0 59m 192.168.56.12 k8s2 kube-system log-gatherer-8pwmk 1/1 Running 0 59m 192.168.56.11 k8s1 kube-system registry-adder-c5z5t 1/1 Running 0 60m 192.168.56.12 k8s2 kube-system registry-adder-g9g8b 1/1 Running 0 60m 192.168.56.11 k8s1 Stderr: Fetching command output from pods [cilium-mm4ml cilium-nj8z4] cmd: kubectl exec -n kube-system cilium-mm4ml -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.21 (v1.21.14) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] 
KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-fafaa647) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.1.118, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2167/65535 (3.31%), Flows/s: 20.23 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-02T14:58:59Z) Stderr: cmd: kubectl exec -n kube-system cilium-mm4ml -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 91 Disabled Disabled 4 reserved:health fd02::1ca 10.0.1.18 ready 179 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 2037 Disabled Disabled 39170 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303021459k8sdatapathconfighostfirewallwithvxlan fd02::1cb 10.0.1.219 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303021459k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3457 Disabled Disabled 11900 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303021459k8sdatapathconfighostfirewallwithvxlan fd02::1c2 10.0.1.158 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303021459k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient Stderr: cmd: kubectl exec -n kube-system cilium-nj8z4 -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.21 (v1.21.14) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-fafaa647) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 33/33 healthy Proxy Status: OK, ip 10.0.0.50, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 6895/65535 (10.52%), Flows/s: 59.50 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-02T14:59:06Z) Stderr: cmd: kubectl exec -n kube-system cilium-nj8z4 -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 31 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready 
k8s:node-role.kubernetes.io/control-plane k8s:node-role.kubernetes.io/master k8s:node.kubernetes.io/exclude-from-external-load-balancers k8s:status=lockdown reserved:host 69 Disabled Disabled 39170 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303021459k8sdatapathconfighostfirewallwithvxlan fd02::5f 10.0.0.120 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303021459k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 207 Disabled Disabled 4 reserved:health fd02::20 10.0.0.82 ready 1808 Disabled Disabled 57223 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system fd02::47 10.0.0.165 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 2721 Disabled Disabled 11900 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303021459k8sdatapathconfighostfirewallwithvxlan fd02::12 10.0.0.60 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303021459k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient Stderr: ===================== Exiting AfterFailed ===================== 14:59:57 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 14:59:57 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 14:59:57 STEP: Deleting deployment demo_hostfw.yaml 14:59:57 STEP: Deleting namespace 202303021459k8sdatapathconfighostfirewallwithvxlan 15:00:12 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|2e4df546_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9//2445/artifact/2e4df546_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9//2445/artifact/324bf709_K8sAgentPolicyTest_Multi-node_policy_test_validates_fromEntities_policies_with_remote-node_identity_disabled_Allows_from_all_hosts_with_cnp_fromEntities_host_policy.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9//2445/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.9_2445_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9/2445/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
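The "Top 2 errors/warnings" summary in these hits also shows `error retrieving resource lock kube-system/cilium-operator-resource-lock`, which suggests the operator could not renew its leader-election lock shortly before it logged the interrupt. A hedged way to poke at that by hand, assuming the lock is backed by a Lease object (the report itself does not say which lock type is in use):

```sh
# Inspect the leader-election lock named in the operator error message.
kubectl -n kube-system get lease cilium-operator-resource-lock -o yaml

# Cross-check which operator replica restarted around the failure time.
kubectl -n kube-system get pods -l io.cilium/app=operator \
  -o custom-columns=NAME:.metadata.name,RESTARTS:.status.containerStatuses[0].restartCount
```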
maintainer-s-little-helper[bot] commented 1 year ago

PR #24105 hit this flake with 95.32% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-03-06T14:04:58.542227092Z level=error msg="Interrupt received" subsys=hive
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show.

```stack-output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
⚠️ Found "2023-03-06T14:04:58.542227092Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times
Number of "context deadline exceeded" in logs: 2
Number of "level=error" in logs: 2
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \
Interrupt received
Number of "context deadline exceeded" in logs: 4
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 5
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 1 errors/warnings:
Unable to restore endpoint, ignoring
Cilium pods: [cilium-8rjg8 cilium-zfvpf]
Netpols loaded:
CiliumNetworkPolicies loaded:
Endpoint Policy Enforcement:
Pod                        Ingress   Egress
testclient-nfh7f           false     false
testclient-npg24           false     false
testserver-ljn9l           false     false
testserver-vhfbz           false     false
coredns-85fbf8f7dd-vrfjg   false     false
Cilium agent 'cilium-8rjg8': Status: Ok  Health: Ok  Nodes ""  ContainerRuntime:  Kubernetes: Ok  KVstore: Ok  Controllers: Total 28 Failed 0
Cilium agent 'cilium-zfvpf': Status: Ok  Health: Ok  Nodes ""  ContainerRuntime:  Kubernetes: Ok  KVstore: Ok  Controllers: Total 32 Failed 0
```
### Standard Error
Click to show. ```stack-error 14:03:02 STEP: Installing Cilium 14:03:05 STEP: Waiting for Cilium to become ready 14:03:21 STEP: Validating if Kubernetes DNS is deployed 14:03:21 STEP: Checking if deployment is ready 14:03:21 STEP: Checking if kube-dns service is plumbed correctly 14:03:21 STEP: Checking if pods have identity 14:03:21 STEP: Checking if DNS can resolve 14:03:26 STEP: Kubernetes DNS is not ready: 5s timeout expired 14:03:26 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns) 14:03:26 STEP: Waiting for Kubernetes DNS to become operational 14:03:26 STEP: Checking if deployment is ready 14:03:26 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 14:03:27 STEP: Checking if deployment is ready 14:03:27 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 14:03:28 STEP: Checking if deployment is ready 14:03:28 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 14:03:29 STEP: Checking if deployment is ready 14:03:29 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 14:03:30 STEP: Checking if deployment is ready 14:03:30 STEP: Checking if kube-dns service is plumbed correctly 14:03:30 STEP: Checking if pods have identity 14:03:30 STEP: Checking if DNS can resolve 14:03:36 STEP: Validating Cilium Installation 14:03:36 STEP: Performing Cilium controllers preflight check 14:03:36 STEP: Performing Cilium health check 14:03:36 STEP: Performing Cilium status preflight check 14:03:36 STEP: Checking whether host EP regenerated 14:03:43 STEP: Performing Cilium service preflight check 14:03:43 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-8rjg8': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init) Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 14:03:43 STEP: Performing Cilium controllers preflight check 14:03:43 STEP: Performing Cilium health check 14:03:43 STEP: Performing Cilium status preflight check 14:03:43 STEP: Checking whether host EP regenerated 14:03:51 STEP: Performing Cilium service preflight check 14:03:51 STEP: Performing K8s service preflight check 14:03:51 STEP: Performing Cilium status preflight check 14:03:51 STEP: Performing Cilium health check 14:03:51 STEP: Checking whether host EP regenerated 14:03:51 STEP: Performing Cilium controllers preflight check 14:03:59 STEP: Performing Cilium service preflight check 14:03:59 STEP: Performing K8s service preflight check 14:03:59 STEP: Performing Cilium controllers preflight check 14:03:59 STEP: Performing Cilium status preflight check 14:03:59 STEP: Performing Cilium health check 14:03:59 STEP: Checking whether host EP regenerated 14:04:06 STEP: Performing Cilium service preflight check 14:04:06 STEP: Performing K8s service preflight check 14:04:06 STEP: Performing Cilium status preflight check 14:04:06 STEP: Performing Cilium health check 14:04:06 STEP: Performing Cilium controllers preflight check 14:04:06 STEP: Checking whether host EP regenerated 14:04:14 STEP: Performing Cilium service preflight check 14:04:14 STEP: Performing K8s service preflight check 14:04:14 STEP: Performing Cilium controllers preflight check 14:04:14 STEP: Performing Cilium 
status preflight check 14:04:14 STEP: Performing Cilium health check 14:04:14 STEP: Checking whether host EP regenerated 14:04:21 STEP: Performing Cilium service preflight check 14:04:21 STEP: Performing K8s service preflight check 14:04:24 STEP: Waiting for cilium-operator to be ready 14:04:24 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 14:04:24 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 14:04:24 STEP: Making sure all endpoints are in ready state 14:04:26 STEP: Creating namespace 202303061404k8sdatapathconfighostfirewallwithvxlan 14:04:26 STEP: Deploying demo_hostfw.yaml in namespace 202303061404k8sdatapathconfighostfirewallwithvxlan 14:04:27 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 14:04:27 STEP: WaitforNPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="") 14:04:30 STEP: WaitforNPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="") => 14:04:30 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 14:04:55 STEP: Checking host policies on egress to local pod 14:04:55 STEP: Checking host policies on ingress from remote node 14:04:55 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:04:55 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:04:55 STEP: Checking host policies on ingress from remote pod 14:04:55 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 14:04:55 STEP: Checking host policies on ingress from local pod 14:04:55 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 14:04:55 STEP: Checking host policies on egress to remote node 14:04:55 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:04:55 STEP: Checking host policies on egress to remote pod 14:04:55 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:04:55 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:04:55 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 14:04:55 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:04:55 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 14:04:55 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:04:55 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:04:56 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 14:04:56 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:04:56 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:04:56 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l 
zgroup=testServerHost") 14:04:56 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:04:56 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 14:04:56 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 14:04:56 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:04:56 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:04:56 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:04:56 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:04:56 STEP: WaitforPods(namespace="202303061404k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => === Test Finished at 2023-03-06T14:05:01Z==== 14:05:01 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-06T14:04:58.542227092Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 14:05:01 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202303061404k8sdatapathconfighostfirewallwithvxlan testclient-host-6528k 1/1 Running 0 39s 192.168.56.12 k8s2 202303061404k8sdatapathconfighostfirewallwithvxlan testclient-host-hs6j7 1/1 Running 0 39s 192.168.56.11 k8s1 202303061404k8sdatapathconfighostfirewallwithvxlan testclient-nfh7f 1/1 Running 0 39s 10.0.1.19 k8s1 202303061404k8sdatapathconfighostfirewallwithvxlan testclient-npg24 1/1 Running 0 39s 10.0.0.211 k8s2 202303061404k8sdatapathconfighostfirewallwithvxlan testserver-host-9t6rb 2/2 Running 0 39s 192.168.56.11 k8s1 202303061404k8sdatapathconfighostfirewallwithvxlan testserver-host-jxljz 2/2 Running 0 39s 192.168.56.12 k8s2 202303061404k8sdatapathconfighostfirewallwithvxlan testserver-ljn9l 2/2 Running 0 39s 10.0.0.151 k8s2 202303061404k8sdatapathconfighostfirewallwithvxlan testserver-vhfbz 2/2 Running 0 39s 10.0.1.130 k8s1 cilium-monitoring grafana-698dc95f6c-x5bcl 0/1 Running 0 60m 10.0.0.27 k8s2 cilium-monitoring prometheus-669755c8c5-x689b 1/1 Running 0 60m 10.0.0.8 k8s2 kube-system cilium-8rjg8 1/1 Running 0 2m1s 192.168.56.11 k8s1 kube-system cilium-operator-567cc86d7b-bk79s 1/1 Running 0 2m1s 192.168.56.12 k8s2 kube-system cilium-operator-567cc86d7b-x599j 1/1 Running 0 2m1s 192.168.56.11 k8s1 kube-system cilium-zfvpf 1/1 Running 0 2m1s 192.168.56.12 k8s2 kube-system coredns-85fbf8f7dd-vrfjg 1/1 Running 0 100s 10.0.0.203 k8s2 kube-system etcd-k8s1 1/1 Running 0 64m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 64m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 2 64m 192.168.56.11 k8s1 kube-system kube-proxy-8kz87 1/1 Running 0 63m 192.168.56.11 k8s1 kube-system kube-proxy-dqbbs 1/1 Running 0 61m 192.168.56.12 k8s2 kube-system kube-scheduler-k8s1 1/1 Running 3 64m 192.168.56.11 k8s1 kube-system log-gatherer-4k5hv 1/1 Running 0 60m 192.168.56.11 k8s1 kube-system log-gatherer-4nrwt 1/1 Running 0 60m 192.168.56.12 k8s2 kube-system 
registry-adder-fmmkp 1/1 Running 0 61m 192.168.56.11 k8s1 kube-system registry-adder-k89kb 1/1 Running 0 61m 192.168.56.12 k8s2 Stderr: Fetching command output from pods [cilium-8rjg8 cilium-zfvpf] cmd: kubectl exec -n kube-system cilium-8rjg8 -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.21 (v1.21.14) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-6d5a5547) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.1.148, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 6118/65535 (9.34%), Flows/s: 54.97 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-06T14:04:17Z) Stderr: cmd: kubectl exec -n kube-system cilium-8rjg8 -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 678 Disabled Disabled 4 reserved:health fd02::1e2 10.0.1.181 ready 1248 Disabled Disabled 5176 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303061404k8sdatapathconfighostfirewallwithvxlan fd02::11d 10.0.1.130 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303061404k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3436 Disabled Disabled 33456 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303061404k8sdatapathconfighostfirewallwithvxlan fd02::15f 10.0.1.19 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303061404k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 3805 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/control-plane k8s:node-role.kubernetes.io/master k8s:node.kubernetes.io/exclude-from-external-load-balancers k8s:status=lockdown reserved:host Stderr: cmd: kubectl exec -n kube-system cilium-zfvpf -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.21 (v1.21.14) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-6d5a5547) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: 
IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 32/32 healthy Proxy Status: OK, ip 10.0.0.246, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2184/65535 (3.33%), Flows/s: 20.25 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-06T14:04:23Z) Stderr: cmd: kubectl exec -n kube-system cilium-zfvpf -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 426 Disabled Disabled 39411 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system fd02::76 10.0.0.203 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 545 Disabled Disabled 33456 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303061404k8sdatapathconfighostfirewallwithvxlan fd02::2e 10.0.0.211 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303061404k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 1155 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 2235 Disabled Disabled 5176 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303061404k8sdatapathconfighostfirewallwithvxlan fd02::c3 10.0.0.151 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303061404k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 2701 Disabled Disabled 4 reserved:health fd02::cd 10.0.0.253 ready Stderr: ===================== Exiting AfterFailed ===================== 14:05:52 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 14:05:52 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 14:05:52 STEP: Deleting deployment demo_hostfw.yaml 14:05:52 STEP: Deleting namespace 202303061404k8sdatapathconfighostfirewallwithvxlan 14:06:07 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|34d3d666_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9//2452/artifact/34d3d666_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9//2452/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.9_2452_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9/2452/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
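Each occurrence links a `*_K8sDatapathConfig_Host_firewall_With_VXLAN.zip` artifact with the full pod logs. A hedged sketch for reproducing the "Number of ... in logs" counters offline from one of those archives (the layout of files inside the zip is an assumption):

```sh
# Fetch and unpack the artifact linked above for this occurrence.
curl -LO "https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9//2452/artifact/34d3d666_K8sDatapathConfig_Host_firewall_With_VXLAN.zip"
unzip -o 34d3d666_K8sDatapathConfig_Host_firewall_With_VXLAN.zip -d flake-logs

# Count the same patterns the harness reports, per extracted log file.
for pat in 'context deadline exceeded' 'level=error' 'level=warning'; do
  echo "== $pat =="
  grep -rc "$pat" flake-logs | sort -t: -k2 -nr | head
done
```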
maintainer-s-little-helper[bot] commented 1 year ago

PR #24105 hit this flake with 95.87% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-03-06T14:53:25.161079671Z level=error msg="Interrupt received" subsys=hive
/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show.

```stack-output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
⚠️ Found "2023-03-06T14:53:25.161079671Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times
Number of "context deadline exceeded" in logs: 2
Number of "level=error" in logs: 2
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \
Interrupt received
Number of "context deadline exceeded" in logs: 2
Number of "level=error" in logs: 0
⚠️ Number of "level=warning" in logs: 6
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to restore endpoint, ignoring
Key allocation attempt failed
Cilium pods: [cilium-dc6dw cilium-fqxz8]
Netpols loaded:
CiliumNetworkPolicies loaded:
Endpoint Policy Enforcement:
Pod                           Ingress   Egress
grafana-7ddfc74b5b-z5mr6      false     false
prometheus-669755c8c5-dnsg6   false     false
coredns-bb76b858c-clzzd       false     false
testclient-kd68n              false     false
testclient-mpk4l              false     false
testserver-8f8r9              false     false
testserver-bm9f7              false     false
Cilium agent 'cilium-dc6dw': Status: Ok  Health: Ok  Nodes ""  ContainerRuntime:  Kubernetes: Ok  KVstore: Ok  Controllers: Total 38 Failed 0
Cilium agent 'cilium-fqxz8': Status: Ok  Health: Ok  Nodes ""  ContainerRuntime:  Kubernetes: Ok  KVstore: Ok  Controllers: Total 33 Failed 0
```
### Standard Error
Click to show. ```stack-error 14:51:18 STEP: Installing Cilium 14:51:20 STEP: Waiting for Cilium to become ready 14:51:58 STEP: Validating if Kubernetes DNS is deployed 14:51:58 STEP: Checking if deployment is ready 14:51:58 STEP: Checking if kube-dns service is plumbed correctly 14:51:58 STEP: Checking if pods have identity 14:51:58 STEP: Checking if DNS can resolve 14:52:02 STEP: Kubernetes DNS is up and operational 14:52:02 STEP: Validating Cilium Installation 14:52:02 STEP: Performing Cilium controllers preflight check 14:52:02 STEP: Performing Cilium health check 14:52:02 STEP: Performing Cilium status preflight check 14:52:02 STEP: Checking whether host EP regenerated 14:52:09 STEP: Performing Cilium service preflight check 14:52:09 STEP: Performing K8s service preflight check 14:52:10 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-fqxz8': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 14:52:10 STEP: Performing Cilium controllers preflight check 14:52:10 STEP: Performing Cilium status preflight check 14:52:10 STEP: Performing Cilium health check 14:52:10 STEP: Checking whether host EP regenerated 14:52:18 STEP: Performing Cilium service preflight check 14:52:18 STEP: Performing K8s service preflight check 14:52:19 STEP: Performing Cilium controllers preflight check 14:52:19 STEP: Performing Cilium health check 14:52:19 STEP: Checking whether host EP regenerated 14:52:19 STEP: Performing Cilium status preflight check 14:52:26 STEP: Performing Cilium service preflight check 14:52:26 STEP: Performing K8s service preflight check 14:52:27 STEP: Performing Cilium controllers preflight check 14:52:27 STEP: Performing Cilium status preflight check 14:52:27 STEP: Performing Cilium health check 14:52:27 STEP: Checking whether host EP regenerated 14:52:35 STEP: Performing Cilium service preflight check 14:52:35 STEP: Performing K8s service preflight check 14:52:40 STEP: Waiting for cilium-operator to be ready 14:52:41 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 14:52:41 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 14:52:41 STEP: Making sure all endpoints are in ready state 14:52:43 STEP: Creating namespace 202303061452k8sdatapathconfighostfirewallwithvxlan 14:52:43 STEP: Deploying demo_hostfw.yaml in namespace 202303061452k8sdatapathconfighostfirewallwithvxlan 14:52:44 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 14:52:44 STEP: WaitforNPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="") 14:52:53 STEP: WaitforNPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="") => 14:52:53 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 14:53:19 STEP: Checking host policies on egress to remote node 14:53:19 STEP: Checking host policies on ingress from remote pod 14:53:19 STEP: Checking host policies on ingress from remote node 14:53:19 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:53:19 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 14:53:19 STEP: 
Checking host policies on ingress from local pod 14:53:19 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 14:53:19 STEP: Checking host policies on egress to remote pod 14:53:19 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:53:19 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:53:19 STEP: Checking host policies on egress to local pod 14:53:19 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:53:20 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 14:53:20 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:53:20 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:53:20 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 14:53:20 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:53:20 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:53:20 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:53:20 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:53:20 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:53:20 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:53:20 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 14:53:20 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 14:53:20 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:53:20 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:53:20 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:53:20 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:53:20 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 14:53:20 STEP: WaitforPods(namespace="202303061452k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => === Test Finished at 2023-03-06T14:53:25Z==== 14:53:25 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-06T14:53:25.161079671Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 14:53:26 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 
202303061452k8sdatapathconfighostfirewallwithvxlan testclient-host-9tb67 1/1 Running 0 46s 192.168.56.12 k8s2 202303061452k8sdatapathconfighostfirewallwithvxlan testclient-host-tx5lg 1/1 Running 0 46s 192.168.56.11 k8s1 202303061452k8sdatapathconfighostfirewallwithvxlan testclient-kd68n 1/1 Running 0 46s 10.0.0.211 k8s1 202303061452k8sdatapathconfighostfirewallwithvxlan testclient-mpk4l 1/1 Running 0 46s 10.0.1.237 k8s2 202303061452k8sdatapathconfighostfirewallwithvxlan testserver-8f8r9 2/2 Running 0 46s 10.0.1.195 k8s2 202303061452k8sdatapathconfighostfirewallwithvxlan testserver-bm9f7 2/2 Running 0 46s 10.0.0.199 k8s1 202303061452k8sdatapathconfighostfirewallwithvxlan testserver-host-6kdtz 2/2 Running 0 46s 192.168.56.12 k8s2 202303061452k8sdatapathconfighostfirewallwithvxlan testserver-host-hw4s6 2/2 Running 0 46s 192.168.56.11 k8s1 cilium-monitoring grafana-7ddfc74b5b-z5mr6 1/1 Running 0 16m 10.0.1.254 k8s2 cilium-monitoring prometheus-669755c8c5-dnsg6 1/1 Running 0 16m 10.0.1.139 k8s2 kube-system cilium-dc6dw 1/1 Running 0 2m10s 192.168.56.12 k8s2 kube-system cilium-fqxz8 1/1 Running 0 2m10s 192.168.56.11 k8s1 kube-system cilium-operator-55b4476945-2pt4c 1/1 Running 1 2m10s 192.168.56.12 k8s2 kube-system cilium-operator-55b4476945-vwvmz 1/1 Running 0 2m10s 192.168.56.11 k8s1 kube-system coredns-bb76b858c-clzzd 1/1 Running 0 7m38s 10.0.0.218 k8s1 kube-system etcd-k8s1 1/1 Running 0 20m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 20m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 2 20m 192.168.56.11 k8s1 kube-system kube-proxy-8vx2w 1/1 Running 0 17m 192.168.56.12 k8s2 kube-system kube-proxy-crvdt 1/1 Running 0 20m 192.168.56.11 k8s1 kube-system kube-scheduler-k8s1 1/1 Running 2 20m 192.168.56.11 k8s1 kube-system log-gatherer-7lgcd 1/1 Running 0 17m 192.168.56.11 k8s1 kube-system log-gatherer-ljjpf 1/1 Running 0 17m 192.168.56.12 k8s2 kube-system registry-adder-7gqhw 1/1 Running 0 17m 192.168.56.11 k8s1 kube-system registry-adder-8gltz 1/1 Running 0 17m 192.168.56.12 k8s2 Stderr: Fetching command output from pods [cilium-dc6dw cilium-fqxz8] cmd: kubectl exec -n kube-system cilium-dc6dw -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.20 (v1.20.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-6d5a5547) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 6/254 allocated from 10.0.1.0/24, IPv6: 6/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 38/38 healthy Proxy Status: OK, ip 10.0.1.234, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2553/65535 (3.90%), Flows/s: 21.71 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-06T14:52:34Z) Stderr: cmd: kubectl exec -n kube-system cilium-dc6dw -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY 
(egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 237 Disabled Disabled 4 reserved:health fd02::19f 10.0.1.9 ready 515 Disabled Disabled 10044 k8s:app=prometheus fd02::172 10.0.1.139 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring 1432 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 1952 Disabled Disabled 20420 k8s:app=grafana fd02::1b0 10.0.1.254 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring 3663 Disabled Disabled 43936 k8s:io.cilium.k8s.policy.cluster=default fd02::17f 10.0.1.195 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303061452k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3745 Disabled Disabled 52285 k8s:io.cilium.k8s.policy.cluster=default fd02::1e0 10.0.1.237 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303061452k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient Stderr: cmd: kubectl exec -n kube-system cilium-fqxz8 -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.20 (v1.20.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-6d5a5547) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 33/33 healthy Proxy Status: OK, ip 10.0.0.50, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 6994/65535 (10.67%), Flows/s: 57.70 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-06T14:52:40Z) Stderr: cmd: kubectl exec -n kube-system cilium-fqxz8 -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 789 Disabled Disabled 52285 k8s:io.cilium.k8s.policy.cluster=default fd02::a6 10.0.0.211 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303061452k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 2735 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/control-plane k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 2904 Disabled Disabled 43936 k8s:io.cilium.k8s.policy.cluster=default fd02::8c 10.0.0.199 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303061452k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3371 Disabled Disabled 4 reserved:health fd02::55 10.0.0.226 ready 4032 Disabled Disabled 12590 
k8s:io.cilium.k8s.policy.cluster=default fd02::5a 10.0.0.218 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns Stderr: ===================== Exiting AfterFailed ===================== 14:53:39 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 14:53:39 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 14:53:39 STEP: Deleting deployment demo_hostfw.yaml 14:53:39 STEP: Deleting namespace 202303061452k8sdatapathconfighostfirewallwithvxlan 14:53:54 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|27621a6f_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show. https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.9//1675/artifact/27621a6f_K8sDatapathConfig_Host_firewall_With_VXLAN.zip https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.9//1675/artifact/test_results_Cilium-PR-K8s-1.20-kernel-4.9_1675_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.9/1675/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
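For anyone triaging this locally: the flagged line comes from the cilium-operator pods, not the agents. A minimal sketch of how to pull the same logs the CI check scans, assuming kubectl access to the test cluster (pod names and counts will differ per run):

```sh
# List the operator pods, then dump their logs with timestamps and keep only
# error-level lines. The label selector is the "io.cilium/app=operator" selector
# referenced in the failure output above.
kubectl -n kube-system get pods -l io.cilium/app=operator -o wide
kubectl -n kube-system logs -l io.cilium/app=operator --timestamps --prefix \
  | grep 'level=error'
```

In the attached logs the only match is `level=error msg="Interrupt received" subsys=hive`, and the operator log summaries in each report also show `error retrieving resource lock kube-system/cilium-operator-resource-lock` alongside `context deadline exceeded` entries.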
maintainer-s-little-helper[bot] commented 1 year ago

PR #24104 hit this flake with 95.87% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
Click to show. ```stack-trace /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415 Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-06T20:08:20.673606935Z level=error msg="Interrupt received" subsys=hive /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413 ```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-03-06T20:08:20.673606935Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 2 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Interrupt received Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 5 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Unable to restore endpoint, ignoring Cilium pods: [cilium-qd9rb cilium-v6vsx] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress coredns-bb76b858c-mld99 false false testclient-2jtm9 false false testclient-ff5ds false false testserver-84q2t false false testserver-jpxvg false false grafana-7ddfc74b5b-82jsr false false prometheus-669755c8c5-xhsk8 false false Cilium agent 'cilium-qd9rb': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 33 Failed 0 Cilium agent 'cilium-v6vsx': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 38 Failed 0 ```
### Standard Error
Click to show. ```stack-error 20:06:39 STEP: Installing Cilium 20:06:41 STEP: Waiting for Cilium to become ready 20:07:20 STEP: Validating if Kubernetes DNS is deployed 20:07:20 STEP: Checking if deployment is ready 20:07:20 STEP: Checking if kube-dns service is plumbed correctly 20:07:20 STEP: Checking if pods have identity 20:07:20 STEP: Checking if DNS can resolve 20:07:24 STEP: Kubernetes DNS is up and operational 20:07:24 STEP: Validating Cilium Installation 20:07:24 STEP: Performing Cilium controllers preflight check 20:07:24 STEP: Performing Cilium status preflight check 20:07:24 STEP: Performing Cilium health check 20:07:24 STEP: Checking whether host EP regenerated 20:07:32 STEP: Performing Cilium service preflight check 20:07:32 STEP: Performing K8s service preflight check 20:07:38 STEP: Waiting for cilium-operator to be ready 20:07:38 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 20:07:38 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 20:07:38 STEP: Making sure all endpoints are in ready state 20:07:41 STEP: Creating namespace 202303062007k8sdatapathconfighostfirewallwithvxlan 20:07:41 STEP: Deploying demo_hostfw.yaml in namespace 202303062007k8sdatapathconfighostfirewallwithvxlan 20:07:41 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 20:07:41 STEP: WaitforNPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="") 20:07:53 STEP: WaitforNPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="") => 20:07:53 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 20:08:21 STEP: Checking host policies on ingress from local pod 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 20:08:21 STEP: Checking host policies on egress to local pod 20:08:21 STEP: Checking host policies on ingress from remote pod 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 20:08:21 STEP: Checking host policies on ingress from remote node 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 20:08:21 STEP: Checking host policies on egress to remote node 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 20:08:21 STEP: Checking host policies on egress to remote pod 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 20:08:21 STEP: 
WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 20:08:21 STEP: WaitforPods(namespace="202303062007k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 2023-03-06T20:08:27Z==== 20:08:27 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-06T20:08:20.673606935Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 20:08:27 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202303062007k8sdatapathconfighostfirewallwithvxlan testclient-2jtm9 1/1 Running 0 51s 10.0.0.237 k8s1 202303062007k8sdatapathconfighostfirewallwithvxlan testclient-ff5ds 1/1 Running 0 51s 10.0.1.2 k8s2 202303062007k8sdatapathconfighostfirewallwithvxlan testclient-host-7jtqg 1/1 Running 0 51s 192.168.56.11 k8s1 202303062007k8sdatapathconfighostfirewallwithvxlan testclient-host-9v6m5 1/1 Running 0 51s 192.168.56.12 k8s2 202303062007k8sdatapathconfighostfirewallwithvxlan testserver-84q2t 2/2 Running 0 51s 10.0.0.154 k8s1 202303062007k8sdatapathconfighostfirewallwithvxlan testserver-host-bxmtm 2/2 Running 0 51s 192.168.56.12 k8s2 202303062007k8sdatapathconfighostfirewallwithvxlan testserver-host-nn59p 2/2 Running 0 51s 192.168.56.11 k8s1 202303062007k8sdatapathconfighostfirewallwithvxlan testserver-jpxvg 2/2 Running 0 51s 10.0.1.210 k8s2 cilium-monitoring grafana-7ddfc74b5b-82jsr 1/1 Running 0 35m 10.0.1.155 k8s2 cilium-monitoring prometheus-669755c8c5-xhsk8 1/1 Running 0 35m 10.0.1.217 k8s2 kube-system cilium-operator-7dcdc9df8d-9lxhk 1/1 Running 0 111s 192.168.56.12 k8s2 kube-system cilium-operator-7dcdc9df8d-nrrbp 1/1 Running 0 111s 192.168.56.11 k8s1 kube-system cilium-qd9rb 1/1 Running 0 111s 192.168.56.11 k8s1 kube-system cilium-v6vsx 1/1 Running 0 111s 192.168.56.12 k8s2 kube-system coredns-bb76b858c-mld99 1/1 Running 0 
7m14s 10.0.0.96 k8s1 kube-system etcd-k8s1 1/1 Running 0 40m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 40m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 6 40m 192.168.56.11 k8s1 kube-system kube-proxy-brgqb 1/1 Running 0 36m 192.168.56.11 k8s1 kube-system kube-proxy-d8xwt 1/1 Running 0 36m 192.168.56.12 k8s2 kube-system kube-scheduler-k8s1 1/1 Running 5 40m 192.168.56.11 k8s1 kube-system log-gatherer-bcxp7 1/1 Running 0 35m 192.168.56.11 k8s1 kube-system log-gatherer-v7d8t 1/1 Running 0 35m 192.168.56.12 k8s2 kube-system registry-adder-x5c9q 1/1 Running 0 36m 192.168.56.11 k8s1 kube-system registry-adder-znpfj 1/1 Running 0 36m 192.168.56.12 k8s2 Stderr: Fetching command output from pods [cilium-qd9rb cilium-v6vsx] cmd: kubectl exec -n kube-system cilium-qd9rb -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.19 (v1.19.16) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-9472954e) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 33/33 healthy Proxy Status: OK, ip 10.0.0.91, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 5465/65535 (8.34%), Flows/s: 54.20 Metrics: Disabled Encryption: Disabled Cluster health: 1/2 reachable (2023-03-06T20:07:59Z) Name IP Node Endpoints k8s1 (localhost) 192.168.56.11 unknown unreachable Stderr: cmd: kubectl exec -n kube-system cilium-qd9rb -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 218 Disabled Disabled 1950 k8s:io.cilium.k8s.policy.cluster=default fd02::1b 10.0.0.96 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 984 Disabled Disabled 4 reserved:health fd02::5d 10.0.0.33 ready 1147 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 1736 Disabled Disabled 26440 k8s:io.cilium.k8s.policy.cluster=default fd02::4e 10.0.0.237 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303062007k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 2343 Disabled Disabled 37097 k8s:io.cilium.k8s.policy.cluster=default fd02::e6 10.0.0.154 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303062007k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer Stderr: cmd: kubectl exec -n kube-system cilium-v6vsx -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.19 (v1.19.16) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", 
"cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-9472954e) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 6/254 allocated from 10.0.1.0/24, IPv6: 6/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 38/38 healthy Proxy Status: OK, ip 10.0.1.244, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2335/65535 (3.56%), Flows/s: 22.79 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-06T20:07:38Z) Stderr: cmd: kubectl exec -n kube-system cilium-v6vsx -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 307 Disabled Disabled 43930 k8s:app=prometheus fd02::194 10.0.1.217 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring 510 Disabled Disabled 26440 k8s:io.cilium.k8s.policy.cluster=default fd02::14b 10.0.1.2 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303062007k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 738 Disabled Disabled 4 reserved:health fd02::135 10.0.1.14 ready 2066 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 2509 Disabled Disabled 37097 k8s:io.cilium.k8s.policy.cluster=default fd02::152 10.0.1.210 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303062007k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3937 Disabled Disabled 10019 k8s:app=grafana fd02::1f3 10.0.1.155 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring Stderr: ===================== Exiting AfterFailed ===================== 20:09:07 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 20:09:07 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 20:09:07 STEP: Deleting deployment demo_hostfw.yaml 20:09:08 STEP: Deleting namespace 202303062007k8sdatapathconfighostfirewallwithvxlan 20:09:23 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|ef4d0664_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show. https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9//2686/artifact/ef4d0664_K8sDatapathConfig_Host_firewall_With_VXLAN.zip https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9//2686/artifact/test_results_Cilium-PR-K8s-1.19-kernel-4.9_2686_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9/2686/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
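Every occurrence reported here shows the same two entries in the operator log summaries: `error retrieving resource lock kube-system/cilium-operator-resource-lock` (plus `context deadline exceeded` hits) and `level=error msg="Interrupt received" subsys=hive`, the line the CI check trips on. When reproducing, it can help to see which operator replica currently holds that leader-election lock. A sketch, assuming the lock is stored as a coordination.k8s.io Lease named after the resource lock in the log (some configurations keep it in a ConfigMap of the same name):

```sh
# Show the current holder and renew time of the operator leader-election lock.
# Fall back to the ConfigMap form if no Lease object exists in this cluster.
kubectl -n kube-system get lease cilium-operator-resource-lock -o yaml \
  || kubectl -n kube-system describe configmap cilium-operator-resource-lock
```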
maintainer-s-little-helper[bot] commented 1 year ago

PR #24331 hit this flake with 95.87% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
Click to show. ```stack-trace /home/jenkins/workspace/Cilium-PR-K8s-1.17-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415 Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-13T14:11:21.219429329Z level=error msg="Interrupt received" subsys=hive /home/jenkins/workspace/Cilium-PR-K8s-1.17-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413 ```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-03-13T14:11:21.219429329Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 2 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Interrupt received Number of "context deadline exceeded" in logs: 4 Number of "level=error" in logs: 0 ⚠️ Number of "level=warning" in logs: 6 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: Unable to restore endpoint, ignoring Key allocation attempt failed Cilium pods: [cilium-4lkgd cilium-7zjt4] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress grafana-585bb89877-sbzt4 false false prometheus-8885c5888-4czf7 false false coredns-6b4fc58d47-4748z false false testclient-gdnhh false false testclient-wtjkk false false testserver-hhgxw false false testserver-hkmcc false false Cilium agent 'cilium-4lkgd': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 42 Failed 0 Cilium agent 'cilium-7zjt4': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0 ```
### Standard Error
Click to show. ```stack-error 14:09:05 STEP: Installing Cilium 14:09:08 STEP: Waiting for Cilium to become ready 14:09:39 STEP: Validating if Kubernetes DNS is deployed 14:09:39 STEP: Checking if deployment is ready 14:09:39 STEP: Checking if kube-dns service is plumbed correctly 14:09:39 STEP: Checking if DNS can resolve 14:09:39 STEP: Checking if pods have identity 14:09:44 STEP: Kubernetes DNS is not ready: 5s timeout expired 14:09:44 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns) 14:09:44 STEP: Waiting for Kubernetes DNS to become operational 14:09:44 STEP: Checking if deployment is ready 14:09:45 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 14:09:45 STEP: Checking if deployment is ready 14:09:46 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 14:09:46 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-4lkgd: unable to find service backend 10.0.1.145:9153 in datapath of cilium pod cilium-4lkgd 14:09:46 STEP: Checking if deployment is ready 14:09:47 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 14:09:47 STEP: Checking if deployment is ready 14:09:48 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 14:09:48 STEP: Checking if deployment is ready 14:09:49 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 14:09:49 STEP: Checking if deployment is ready 14:09:50 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 14:09:50 STEP: Checking if deployment is ready 14:09:51 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 14:09:51 STEP: Checking if deployment is ready 14:09:52 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 14:09:52 STEP: Checking if deployment is ready 14:09:53 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 14:09:53 STEP: Checking if deployment is ready 14:09:54 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 14:09:54 STEP: Checking if deployment is ready 14:09:55 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 14:09:55 STEP: Checking if deployment is ready 14:09:56 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 14:09:56 STEP: Checking if deployment is ready 14:09:57 STEP: Checking if kube-dns service is plumbed correctly 14:09:57 STEP: Checking if pods have identity 14:09:57 STEP: Checking if DNS can resolve 14:10:00 STEP: Validating Cilium Installation 14:10:00 STEP: Performing Cilium controllers preflight check 14:10:00 STEP: Performing Cilium health check 14:10:00 STEP: Checking whether host EP regenerated 14:10:00 STEP: Performing Cilium status preflight check 14:10:08 STEP: Performing Cilium service preflight check 14:10:08 STEP: Performing K8s service preflight check 14:10:09 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-7zjt4': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 14:10:09 STEP: Performing Cilium controllers preflight check 14:10:09 STEP: Performing Cilium health check 14:10:09 STEP: Performing Cilium status preflight check 14:10:09 STEP: Checking whether host EP regenerated 14:10:16 STEP: Performing Cilium service preflight check 14:10:16 STEP: Performing K8s 
service preflight check 14:10:17 STEP: Performing Cilium controllers preflight check 14:10:17 STEP: Performing Cilium health check 14:10:17 STEP: Performing Cilium status preflight check 14:10:17 STEP: Checking whether host EP regenerated 14:10:25 STEP: Performing Cilium service preflight check 14:10:25 STEP: Performing K8s service preflight check 14:10:26 STEP: Performing Cilium controllers preflight check 14:10:26 STEP: Performing Cilium status preflight check 14:10:26 STEP: Performing Cilium health check 14:10:26 STEP: Checking whether host EP regenerated 14:10:33 STEP: Performing Cilium service preflight check 14:10:33 STEP: Performing K8s service preflight check 14:10:39 STEP: Waiting for cilium-operator to be ready 14:10:39 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 14:10:39 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 14:10:39 STEP: Making sure all endpoints are in ready state 14:10:42 STEP: Creating namespace 202303131410k8sdatapathconfighostfirewallwithvxlan 14:10:42 STEP: Deploying demo_hostfw.yaml in namespace 202303131410k8sdatapathconfighostfirewallwithvxlan 14:10:42 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 14:10:42 STEP: WaitforNPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="") 14:10:53 STEP: WaitforNPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="") => 14:10:53 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.17-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 14:11:16 STEP: Checking host policies on egress to remote node 14:11:16 STEP: Checking host policies on egress to local pod 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:11:16 STEP: Checking host policies on ingress from local pod 14:11:16 STEP: Checking host policies on ingress from remote node 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:11:16 STEP: Checking host policies on ingress from remote pod 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 14:11:16 STEP: Checking host policies on egress to remote pod 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:11:16 STEP: 
WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:11:16 STEP: WaitforPods(namespace="202303131410k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => === Test Finished at 2023-03-13T14:11:22Z==== 14:11:22 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-13T14:11:21.219429329Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 14:11:22 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202303131410k8sdatapathconfighostfirewallwithvxlan testclient-gdnhh 1/1 Running 0 45s 10.0.0.208 k8s1 202303131410k8sdatapathconfighostfirewallwithvxlan testclient-host-n42jw 1/1 Running 0 45s 192.168.56.12 k8s2 202303131410k8sdatapathconfighostfirewallwithvxlan testclient-host-tgpbd 1/1 Running 0 45s 192.168.56.11 k8s1 202303131410k8sdatapathconfighostfirewallwithvxlan testclient-wtjkk 1/1 Running 0 45s 10.0.1.79 k8s2 202303131410k8sdatapathconfighostfirewallwithvxlan testserver-hhgxw 2/2 Running 0 45s 10.0.0.162 k8s1 202303131410k8sdatapathconfighostfirewallwithvxlan testserver-hkmcc 2/2 Running 0 45s 10.0.1.54 k8s2 202303131410k8sdatapathconfighostfirewallwithvxlan testserver-host-hsv5p 2/2 Running 0 45s 192.168.56.12 k8s2 202303131410k8sdatapathconfighostfirewallwithvxlan testserver-host-xqrzw 2/2 Running 0 45s 192.168.56.11 k8s1 cilium-monitoring grafana-585bb89877-sbzt4 1/1 Running 0 49m 10.0.1.207 k8s2 cilium-monitoring prometheus-8885c5888-4czf7 1/1 Running 0 49m 10.0.1.95 k8s2 kube-system cilium-4lkgd 1/1 Running 0 2m19s 192.168.56.12 k8s2 kube-system cilium-7zjt4 1/1 Running 0 2m19s 192.168.56.11 k8s1 kube-system cilium-operator-869488765c-2jdbw 1/1 Running 0 2m19s 192.168.56.12 k8s2 kube-system cilium-operator-869488765c-jp5bs 1/1 Running 0 2m19s 192.168.56.11 k8s1 kube-system coredns-6b4fc58d47-4748z 1/1 Running 0 103s 10.0.1.200 k8s2 kube-system etcd-k8s1 1/1 Running 0 53m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 
53m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 3 53m 192.168.56.11 k8s1 kube-system kube-proxy-5vlbh 1/1 Running 0 54m 192.168.56.11 k8s1 kube-system kube-proxy-99lzz 1/1 Running 0 50m 192.168.56.12 k8s2 kube-system kube-scheduler-k8s1 1/1 Running 3 53m 192.168.56.11 k8s1 kube-system log-gatherer-7dq4x 1/1 Running 0 49m 192.168.56.12 k8s2 kube-system log-gatherer-zsk7t 1/1 Running 0 49m 192.168.56.11 k8s1 kube-system registry-adder-88l5d 1/1 Running 0 50m 192.168.56.11 k8s1 kube-system registry-adder-tcljx 1/1 Running 0 50m 192.168.56.12 k8s2 Stderr: Fetching command output from pods [cilium-4lkgd cilium-7zjt4] cmd: kubectl exec -n kube-system cilium-4lkgd -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.17 (v1.17.17) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-dc7e85f7) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 7/254 allocated from 10.0.1.0/24, IPv6: 7/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 42/42 healthy Proxy Status: OK, ip 10.0.1.64, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2950/65535 (4.50%), Flows/s: 25.01 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-13T14:10:39Z) Stderr: cmd: kubectl exec -n kube-system cilium-4lkgd -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 479 Disabled Disabled 1999 k8s:io.cilium.k8s.policy.cluster=default fd02::164 10.0.1.79 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303131410k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 1471 Disabled Disabled 4 reserved:health fd02::1fb 10.0.1.137 ready 2584 Disabled Disabled 48987 k8s:app=grafana fd02::1ce 10.0.1.207 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring 3170 Disabled Disabled 36208 k8s:io.cilium.k8s.policy.cluster=default fd02::155 10.0.1.200 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 3297 Disabled Disabled 59764 k8s:app=prometheus fd02::1a6 10.0.1.95 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring 3712 Disabled Disabled 50630 k8s:io.cilium.k8s.policy.cluster=default fd02::176 10.0.1.54 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303131410k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3873 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host Stderr: cmd: kubectl exec -n kube-system cilium-7zjt4 -c cilium-agent -- cilium 
status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.17 (v1.17.17) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-dc7e85f7) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.0.84, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 6798/65535 (10.37%), Flows/s: 53.79 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-13T14:10:39Z) Stderr: cmd: kubectl exec -n kube-system cilium-7zjt4 -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 90 Disabled Disabled 4 reserved:health fd02::be 10.0.0.204 ready 598 Disabled Disabled 50630 k8s:io.cilium.k8s.policy.cluster=default fd02::b1 10.0.0.162 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303131410k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 2754 Disabled Disabled 1999 k8s:io.cilium.k8s.policy.cluster=default fd02::56 10.0.0.208 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303131410k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 3311 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host Stderr: ===================== Exiting AfterFailed ===================== 14:11:35 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 14:11:35 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 14:11:35 STEP: Deleting deployment demo_hostfw.yaml 14:11:35 STEP: Deleting namespace 202303131410k8sdatapathconfighostfirewallwithvxlan 14:11:51 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|b39c1466_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show. https://jenkins.cilium.io/job/Cilium-PR-K8s-1.17-kernel-4.9//1198/artifact/b39c1466_K8sDatapathConfig_Host_firewall_With_VXLAN.zip https://jenkins.cilium.io/job/Cilium-PR-K8s-1.17-kernel-4.9//1198/artifact/test_results_Cilium-PR-K8s-1.17-kernel-4.9_1198_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.17-kernel-4.9/1198/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
maintainer-s-little-helper[bot] commented 1 year ago

PR #24331 hit this flake with 95.87% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
Click to show. ```stack-trace /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415 Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-13T14:53:45.378045489Z level=error msg="Interrupt received" subsys=hive /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413 ```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-03-13T14:53:45.378045489Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 2 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Interrupt received Number of "context deadline exceeded" in logs: 4 Number of "level=error" in logs: 0 ⚠️ Number of "level=warning" in logs: 6 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: Unable to restore endpoint, ignoring Key allocation attempt failed Cilium pods: [cilium-hccbx cilium-mbqfs] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testclient-kd58m false false testserver-gp6hk false false testserver-gq6hz false false coredns-bb76b858c-xgpjk false false testclient-9vlfc false false Cilium agent 'cilium-hccbx': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0 Cilium agent 'cilium-mbqfs': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 33 Failed 0 ```
### Standard Error
Click to show. ```stack-error 14:51:24 STEP: Installing Cilium 14:51:27 STEP: Waiting for Cilium to become ready 14:52:07 STEP: Validating if Kubernetes DNS is deployed 14:52:07 STEP: Checking if deployment is ready 14:52:07 STEP: Checking if kube-dns service is plumbed correctly 14:52:07 STEP: Checking if pods have identity 14:52:07 STEP: Checking if DNS can resolve 14:52:11 STEP: Kubernetes DNS is up and operational 14:52:11 STEP: Validating Cilium Installation 14:52:11 STEP: Performing Cilium controllers preflight check 14:52:11 STEP: Performing Cilium health check 14:52:11 STEP: Checking whether host EP regenerated 14:52:11 STEP: Performing Cilium status preflight check 14:52:18 STEP: Performing Cilium service preflight check 14:52:18 STEP: Performing K8s service preflight check 14:52:18 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-hccbx': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 14:52:18 STEP: Performing Cilium controllers preflight check 14:52:18 STEP: Performing Cilium status preflight check 14:52:18 STEP: Performing Cilium health check 14:52:18 STEP: Checking whether host EP regenerated 14:52:26 STEP: Performing Cilium service preflight check 14:52:26 STEP: Performing K8s service preflight check 14:52:26 STEP: Performing Cilium status preflight check 14:52:26 STEP: Performing Cilium health check 14:52:26 STEP: Checking whether host EP regenerated 14:52:26 STEP: Performing Cilium controllers preflight check 14:52:34 STEP: Performing Cilium service preflight check 14:52:34 STEP: Performing K8s service preflight check 14:52:34 STEP: Performing Cilium controllers preflight check 14:52:34 STEP: Performing Cilium status preflight check 14:52:34 STEP: Performing Cilium health check 14:52:34 STEP: Checking whether host EP regenerated 14:52:41 STEP: Performing Cilium service preflight check 14:52:41 STEP: Performing K8s service preflight check 14:52:41 STEP: Performing Cilium controllers preflight check 14:52:41 STEP: Performing Cilium status preflight check 14:52:41 STEP: Performing Cilium health check 14:52:41 STEP: Checking whether host EP regenerated 14:52:49 STEP: Performing Cilium service preflight check 14:52:49 STEP: Performing K8s service preflight check 14:52:49 STEP: Performing Cilium controllers preflight check 14:52:49 STEP: Performing Cilium health check 14:52:49 STEP: Checking whether host EP regenerated 14:52:49 STEP: Performing Cilium status preflight check 14:52:57 STEP: Performing Cilium service preflight check 14:52:57 STEP: Performing K8s service preflight check 14:53:02 STEP: Waiting for cilium-operator to be ready 14:53:02 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 14:53:03 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 14:53:03 STEP: Making sure all endpoints are in ready state 14:53:05 STEP: Creating namespace 202303131453k8sdatapathconfighostfirewallwithvxlan 14:53:05 STEP: Deploying demo_hostfw.yaml in namespace 202303131453k8sdatapathconfighostfirewallwithvxlan 14:53:06 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 14:53:06 STEP: WaitforNPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="") 14:53:18 STEP: 
WaitforNPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="") => 14:53:18 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 14:53:46 STEP: Checking host policies on egress to remote node 14:53:46 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:53:46 STEP: Checking host policies on egress to local pod 14:53:46 STEP: Checking host policies on egress to remote pod 14:53:46 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:53:46 STEP: Checking host policies on ingress from remote pod 14:53:46 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 14:53:46 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:53:46 STEP: Checking host policies on ingress from remote node 14:53:46 STEP: Checking host policies on ingress from local pod 14:53:46 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:53:46 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 14:53:46 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 14:53:46 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:53:46 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:53:46 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:53:46 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 14:53:46 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:53:47 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 14:53:47 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 14:53:47 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:53:47 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:53:47 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:53:47 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:53:47 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 14:53:47 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:53:47 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:53:47 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 14:53:47 STEP: WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:53:47 STEP: 
WaitforPods(namespace="202303131453k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 2023-03-13T14:53:52Z==== 14:53:52 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-13T14:53:45.378045489Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 14:53:52 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202303131453k8sdatapathconfighostfirewallwithvxlan testclient-9vlfc 1/1 Running 0 51s 10.0.1.54 k8s1 202303131453k8sdatapathconfighostfirewallwithvxlan testclient-host-cbgmk 1/1 Running 0 51s 192.168.56.12 k8s2 202303131453k8sdatapathconfighostfirewallwithvxlan testclient-host-kfhth 1/1 Running 0 51s 192.168.56.11 k8s1 202303131453k8sdatapathconfighostfirewallwithvxlan testclient-kd58m 1/1 Running 0 51s 10.0.0.114 k8s2 202303131453k8sdatapathconfighostfirewallwithvxlan testserver-gp6hk 2/2 Running 0 51s 10.0.0.216 k8s2 202303131453k8sdatapathconfighostfirewallwithvxlan testserver-gq6hz 2/2 Running 0 51s 10.0.1.228 k8s1 202303131453k8sdatapathconfighostfirewallwithvxlan testserver-host-5rwmq 2/2 Running 0 51s 192.168.56.11 k8s1 202303131453k8sdatapathconfighostfirewallwithvxlan testserver-host-b98mv 2/2 Running 0 51s 192.168.56.12 k8s2 cilium-monitoring grafana-7ddfc74b5b-c85m7 0/1 Running 0 65m 10.0.1.215 k8s2 cilium-monitoring prometheus-669755c8c5-kmsrz 1/1 Running 0 65m 10.0.1.44 k8s2 kube-system cilium-hccbx 1/1 Running 0 2m30s 192.168.56.11 k8s1 kube-system cilium-mbqfs 1/1 Running 0 2m30s 192.168.56.12 k8s2 kube-system cilium-operator-5cd8b8f96-fp4fx 1/1 Running 0 2m30s 192.168.56.11 k8s1 kube-system cilium-operator-5cd8b8f96-v9ndn 1/1 Running 0 2m30s 192.168.56.12 k8s2 kube-system coredns-bb76b858c-xgpjk 1/1 Running 0 7m29s 10.0.0.219 k8s2 kube-system etcd-k8s1 1/1 Running 0 69m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 69m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 6 69m 192.168.56.11 k8s1 kube-system kube-proxy-v22t4 1/1 Running 0 65m 192.168.56.12 k8s2 kube-system kube-proxy-wsxmc 1/1 Running 0 69m 192.168.56.11 k8s1 kube-system kube-scheduler-k8s1 1/1 Running 6 69m 192.168.56.11 k8s1 kube-system log-gatherer-j66xm 1/1 Running 0 65m 192.168.56.11 k8s1 kube-system log-gatherer-wjv77 1/1 Running 0 65m 192.168.56.12 k8s2 kube-system registry-adder-bff87 1/1 Running 0 65m 192.168.56.12 k8s2 kube-system registry-adder-xq82h 1/1 Running 0 65m 192.168.56.11 k8s1 Stderr: Fetching command output from pods [cilium-hccbx cilium-mbqfs] cmd: kubectl exec -n kube-system cilium-hccbx -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.19 (v1.19.16) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-dc7e85f7) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory 
Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.1.208, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 6500/65535 (9.92%), Flows/s: 47.63 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-13T14:52:56Z) Stderr: cmd: kubectl exec -n kube-system cilium-hccbx -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 1554 Disabled Disabled 4 reserved:health fd02::14a 10.0.1.30 ready 1879 Disabled Disabled 8747 k8s:io.cilium.k8s.policy.cluster=default fd02::1a5 10.0.1.228 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303131453k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 2562 Disabled Disabled 37404 k8s:io.cilium.k8s.policy.cluster=default fd02::1ad 10.0.1.54 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303131453k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 4086 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host Stderr: cmd: kubectl exec -n kube-system cilium-mbqfs -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.19 (v1.19.16) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-dc7e85f7) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 33/33 healthy Proxy Status: OK, ip 10.0.0.144, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 3133/65535 (4.78%), Flows/s: 22.79 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-13T14:53:51Z) Stderr: cmd: kubectl exec -n kube-system cilium-mbqfs -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 221 Disabled Disabled 37404 k8s:io.cilium.k8s.policy.cluster=default fd02::69 10.0.0.114 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303131453k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 1050 Disabled Disabled 42237 k8s:io.cilium.k8s.policy.cluster=default fd02::56 10.0.0.219 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 2167 Disabled Disabled 4 reserved:health fd02::28 10.0.0.115 
ready 2548 Disabled Disabled 8747 k8s:io.cilium.k8s.policy.cluster=default fd02::9b 10.0.0.216 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303131453k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3416 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host Stderr: ===================== Exiting AfterFailed ===================== 14:54:55 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 14:54:55 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 14:54:55 STEP: Deleting deployment demo_hostfw.yaml 14:54:55 STEP: Deleting namespace 202303131453k8sdatapathconfighostfirewallwithvxlan 14:55:10 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|098204a0_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show.
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9//2706/artifact/098204a0_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9//2706/artifact/test_results_Cilium-PR-K8s-1.19-kernel-4.9_2706_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9/2706/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
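For local triage: the failure above is not a datapath problem in itself; the test flags any error-level line that the operator emitted during the test window. A rough way to rerun the same check by hand against a live test cluster is sketched below. The `io.cilium/app=operator` label selector is taken from the failure output; the grep is only an approximation of the suite's blocklist matching, not the actual test helper.

```sh
# Approximate the CI log check by hand: dump the cilium-operator logs and
# surface any error-level lines emitted during the test window.
kubectl -n kube-system logs -l io.cilium/app=operator --timestamps --tail=-1 --prefix \
  | grep 'level=error'
```

In these reports the only match is `level=error msg="Interrupt received" subsys=hive`, which appears to be the operator logging at error level while it is being shut down, rather than a failure of the host-firewall test traffic itself.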
maintainer-s-little-helper[bot] commented 1 year ago

PR #24331 hit this flake with 95.87% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.18-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-03-13T14:41:39.091881028Z level=error msg="Interrupt received" subsys=hive
/home/jenkins/workspace/Cilium-PR-K8s-1.18-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-03-13T14:41:39.091881028Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 2 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Interrupt received Number of "context deadline exceeded" in logs: 4 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 5 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Unable to restore endpoint, ignoring Cilium pods: [cilium-9l2t8 cilium-rnkcv] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testserver-jz7lv false false testserver-lz8l7 false false coredns-66585574f-v2pr7 false false testclient-gfp79 false false testclient-nvxrx false false Cilium agent 'cilium-9l2t8': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0 Cilium agent 'cilium-rnkcv': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 33 Failed 0 ```
### Standard Error
Click to show. ```stack-error 14:39:15 STEP: Installing Cilium 14:39:18 STEP: Waiting for Cilium to become ready 14:39:56 STEP: Validating if Kubernetes DNS is deployed 14:39:56 STEP: Checking if deployment is ready 14:39:56 STEP: Checking if kube-dns service is plumbed correctly 14:39:56 STEP: Checking if pods have identity 14:39:56 STEP: Checking if DNS can resolve 14:40:00 STEP: Kubernetes DNS is up and operational 14:40:00 STEP: Validating Cilium Installation 14:40:00 STEP: Performing Cilium controllers preflight check 14:40:00 STEP: Performing Cilium health check 14:40:00 STEP: Checking whether host EP regenerated 14:40:00 STEP: Performing Cilium status preflight check 14:40:08 STEP: Performing Cilium service preflight check 14:40:08 STEP: Performing K8s service preflight check 14:40:08 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-9l2t8': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 14:40:08 STEP: Performing Cilium controllers preflight check 14:40:08 STEP: Performing Cilium status preflight check 14:40:08 STEP: Performing Cilium health check 14:40:08 STEP: Checking whether host EP regenerated 14:40:15 STEP: Performing Cilium service preflight check 14:40:15 STEP: Performing K8s service preflight check 14:40:15 STEP: Performing Cilium controllers preflight check 14:40:15 STEP: Performing Cilium status preflight check 14:40:15 STEP: Performing Cilium health check 14:40:15 STEP: Checking whether host EP regenerated 14:40:23 STEP: Performing Cilium service preflight check 14:40:23 STEP: Performing K8s service preflight check 14:40:23 STEP: Performing Cilium controllers preflight check 14:40:23 STEP: Performing Cilium status preflight check 14:40:23 STEP: Performing Cilium health check 14:40:23 STEP: Checking whether host EP regenerated 14:40:30 STEP: Performing Cilium service preflight check 14:40:30 STEP: Performing K8s service preflight check 14:40:30 STEP: Performing Cilium controllers preflight check 14:40:30 STEP: Performing Cilium status preflight check 14:40:30 STEP: Performing Cilium health check 14:40:30 STEP: Checking whether host EP regenerated 14:40:37 STEP: Performing Cilium service preflight check 14:40:37 STEP: Performing K8s service preflight check 14:40:37 STEP: Performing Cilium controllers preflight check 14:40:37 STEP: Performing Cilium status preflight check 14:40:37 STEP: Performing Cilium health check 14:40:37 STEP: Checking whether host EP regenerated 14:40:45 STEP: Performing Cilium service preflight check 14:40:45 STEP: Performing K8s service preflight check 14:40:45 STEP: Performing Cilium controllers preflight check 14:40:45 STEP: Performing Cilium status preflight check 14:40:45 STEP: Performing Cilium health check 14:40:45 STEP: Checking whether host EP regenerated 14:40:52 STEP: Performing Cilium service preflight check 14:40:52 STEP: Performing K8s service preflight check 14:40:58 STEP: Waiting for cilium-operator to be ready 14:40:58 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 14:40:58 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 14:40:58 STEP: Making sure all endpoints are in ready state 14:41:01 STEP: Creating namespace 202303131441k8sdatapathconfighostfirewallwithvxlan 14:41:01 STEP: Deploying demo_hostfw.yaml in 
namespace 202303131441k8sdatapathconfighostfirewallwithvxlan 14:41:01 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 14:41:01 STEP: WaitforNPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="") 14:41:12 STEP: WaitforNPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="") => 14:41:12 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.18-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 14:41:34 STEP: Checking host policies on egress to remote node 14:41:34 STEP: Checking host policies on egress to local pod 14:41:34 STEP: Checking host policies on ingress from remote pod 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 14:41:34 STEP: Checking host policies on ingress from local pod 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:41:34 STEP: Checking host policies on ingress from remote node 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:41:34 STEP: Checking host policies on egress to remote pod 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 14:41:34 STEP: 
WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 14:41:34 STEP: WaitforPods(namespace="202303131441k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 2023-03-13T14:41:40Z==== 14:41:40 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-13T14:41:39.091881028Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 14:41:40 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202303131441k8sdatapathconfighostfirewallwithvxlan testclient-gfp79 1/1 Running 0 44s 10.0.0.107 k8s2 202303131441k8sdatapathconfighostfirewallwithvxlan testclient-host-cnr22 1/1 Running 0 44s 192.168.56.12 k8s2 202303131441k8sdatapathconfighostfirewallwithvxlan testclient-host-mqbql 1/1 Running 0 44s 192.168.56.11 k8s1 202303131441k8sdatapathconfighostfirewallwithvxlan testclient-nvxrx 1/1 Running 0 44s 10.0.1.201 k8s1 202303131441k8sdatapathconfighostfirewallwithvxlan testserver-host-jpz5b 2/2 Running 0 44s 192.168.56.11 k8s1 202303131441k8sdatapathconfighostfirewallwithvxlan testserver-host-wlbwc 2/2 Running 0 44s 192.168.56.12 k8s2 202303131441k8sdatapathconfighostfirewallwithvxlan testserver-jz7lv 2/2 Running 0 44s 10.0.0.204 k8s2 202303131441k8sdatapathconfighostfirewallwithvxlan testserver-lz8l7 2/2 Running 0 44s 10.0.1.46 k8s1 cilium-monitoring grafana-677f4bb779-4kr79 0/1 Running 0 46m 10.0.1.141 k8s2 cilium-monitoring prometheus-579ff57bbb-qnpm9 1/1 Running 0 46m 10.0.1.200 k8s2 kube-system cilium-9l2t8 1/1 Running 0 2m27s 192.168.56.11 k8s1 kube-system cilium-operator-6b8cdbc79f-2skgt 1/1 Running 0 2m27s 192.168.56.12 k8s2 kube-system cilium-operator-6b8cdbc79f-4lfn4 1/1 Running 0 2m27s 192.168.56.11 k8s1 kube-system cilium-rnkcv 1/1 Running 0 2m27s 192.168.56.12 k8s2 kube-system coredns-66585574f-v2pr7 1/1 Running 0 8m9s 10.0.0.61 k8s2 kube-system etcd-k8s1 1/1 Running 0 50m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 50m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 4 50m 192.168.56.11 k8s1 kube-system kube-proxy-fbjjk 1/1 Running 0 48m 192.168.56.11 k8s1 kube-system kube-proxy-wjkct 1/1 Running 0 47m 192.168.56.12 k8s2 kube-system kube-scheduler-k8s1 1/1 Running 4 50m 192.168.56.11 k8s1 kube-system log-gatherer-59fcp 1/1 Running 0 46m 192.168.56.11 k8s1 kube-system log-gatherer-z28v5 1/1 Running 0 46m 192.168.56.12 k8s2 kube-system registry-adder-6kztt 1/1 Running 0 47m 192.168.56.11 k8s1 kube-system registry-adder-srmrz 1/1 Running 0 47m 192.168.56.12 k8s2 Stderr: Fetching command output from pods [cilium-9l2t8 cilium-rnkcv] cmd: kubectl exec -n kube-system cilium-9l2t8 -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.18 (v1.18.20) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] 
KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-dc7e85f7) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.1.213, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 7201/65535 (10.99%), Flows/s: 54.91 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-13T14:40:52Z) Stderr: cmd: kubectl exec -n kube-system cilium-9l2t8 -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 123 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 274 Disabled Disabled 28039 k8s:io.cilium.k8s.policy.cluster=default fd02::18b 10.0.1.201 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303131441k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 1544 Disabled Disabled 14110 k8s:io.cilium.k8s.policy.cluster=default fd02::132 10.0.1.46 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303131441k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1547 Disabled Disabled 4 reserved:health fd02::191 10.0.1.112 ready Stderr: cmd: kubectl exec -n kube-system cilium-rnkcv -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.18 (v1.18.20) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-dc7e85f7) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 33/33 healthy Proxy Status: OK, ip 10.0.0.138, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 3195/65535 (4.88%), Flows/s: 25.07 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-13T14:40:58Z) Stderr: cmd: kubectl exec -n kube-system cilium-rnkcv -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 798 Disabled Disabled 25068 k8s:io.cilium.k8s.policy.cluster=default fd02::75 10.0.0.61 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 1517 Disabled Disabled 14110 
k8s:io.cilium.k8s.policy.cluster=default fd02::5a 10.0.0.204 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303131441k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1682 Disabled Disabled 28039 k8s:io.cilium.k8s.policy.cluster=default fd02::de 10.0.0.107 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303131441k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 2326 Disabled Disabled 4 reserved:health fd02::df 10.0.0.222 ready 4075 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host Stderr: ===================== Exiting AfterFailed ===================== 14:41:53 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 14:41:53 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 14:41:53 STEP: Deleting deployment demo_hostfw.yaml 14:41:53 STEP: Deleting namespace 202303131441k8sdatapathconfighostfirewallwithvxlan 14:42:08 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|5ef2ff49_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show.
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9//2549/artifact/5ef2ff49_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9//2549/artifact/test_results_Cilium-PR-K8s-1.18-kernel-4.9_2549_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9/2549/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
maintainer-s-little-helper[bot] commented 1 year ago

PR #24311 hit this flake with 95.87% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-03-15T01:43:42.349969521Z level=error msg="Interrupt received" subsys=hive
/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-03-15T01:43:42.349969521Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 2 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: Interrupt received error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Number of "context deadline exceeded" in logs: 4 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 5 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Unable to restore endpoint, ignoring Cilium pods: [cilium-8hhfp cilium-bth92] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testserver-p744c false false grafana-7ddfc74b5b-dz5jp false false prometheus-669755c8c5-6hsvg false false coredns-bb76b858c-698x9 false false testclient-kdqkj false false testclient-knzd5 false false testserver-l57hv false false Cilium agent 'cilium-8hhfp': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 43 Failed 0 Cilium agent 'cilium-bth92': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0 ```
### Standard Error
Click to show. ```stack-error 01:40:35 STEP: Installing Cilium 01:40:38 STEP: Waiting for Cilium to become ready 01:42:40 STEP: Validating if Kubernetes DNS is deployed 01:42:40 STEP: Checking if deployment is ready 01:42:40 STEP: Checking if kube-dns service is plumbed correctly 01:42:40 STEP: Checking if pods have identity 01:42:40 STEP: Checking if DNS can resolve 01:42:43 STEP: Kubernetes DNS is up and operational 01:42:43 STEP: Validating Cilium Installation 01:42:43 STEP: Performing Cilium status preflight check 01:42:43 STEP: Performing Cilium health check 01:42:43 STEP: Performing Cilium controllers preflight check 01:42:43 STEP: Checking whether host EP regenerated 01:42:51 STEP: Performing Cilium service preflight check 01:42:51 STEP: Performing K8s service preflight check 01:42:57 STEP: Waiting for cilium-operator to be ready 01:42:57 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 01:42:57 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 01:42:57 STEP: Making sure all endpoints are in ready state 01:43:00 STEP: Creating namespace 202303150143k8sdatapathconfighostfirewallwithvxlan 01:43:00 STEP: Deploying demo_hostfw.yaml in namespace 202303150143k8sdatapathconfighostfirewallwithvxlan 01:43:00 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 01:43:00 STEP: WaitforNPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="") 01:43:12 STEP: WaitforNPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="") => 01:43:12 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 01:43:28 STEP: Checking host policies on ingress from local pod 01:43:28 STEP: Checking host policies on egress to local pod 01:43:28 STEP: Checking host policies on egress to remote pod 01:43:28 STEP: Checking host policies on ingress from remote node 01:43:28 STEP: Checking host policies on ingress from remote pod 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 01:43:28 STEP: Checking host policies on egress to remote node 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 01:43:28 STEP: 
WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 01:43:28 STEP: WaitforPods(namespace="202303150143k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 2023-03-15T01:43:34Z==== 01:43:34 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-15T01:43:42.349969521Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 01:43:44 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202303150143k8sdatapathconfighostfirewallwithvxlan testclient-host-5tnb9 1/1 Running 0 49s 192.168.56.11 k8s1 202303150143k8sdatapathconfighostfirewallwithvxlan testclient-host-cmppg 1/1 Running 0 49s 192.168.56.12 k8s2 202303150143k8sdatapathconfighostfirewallwithvxlan testclient-kdqkj 1/1 Running 0 49s 10.0.0.239 k8s2 202303150143k8sdatapathconfighostfirewallwithvxlan testclient-knzd5 1/1 Running 0 49s 10.0.1.120 k8s1 202303150143k8sdatapathconfighostfirewallwithvxlan testserver-host-gqpnr 2/2 Running 0 49s 192.168.56.11 k8s1 202303150143k8sdatapathconfighostfirewallwithvxlan testserver-host-kgv7l 2/2 Running 0 49s 192.168.56.12 k8s2 202303150143k8sdatapathconfighostfirewallwithvxlan testserver-l57hv 2/2 Running 0 49s 10.0.1.24 k8s1 202303150143k8sdatapathconfighostfirewallwithvxlan testserver-p744c 2/2 Running 0 49s 10.0.0.22 k8s2 cilium-monitoring grafana-7ddfc74b5b-dz5jp 1/1 Running 0 23m 10.0.0.49 k8s2 cilium-monitoring prometheus-669755c8c5-6hsvg 1/1 Running 0 23m 10.0.0.123 k8s2 kube-system cilium-8hhfp 1/1 Running 0 3m11s 192.168.56.12 k8s2 kube-system cilium-bth92 1/1 Running 0 3m11s 192.168.56.11 k8s1 kube-system cilium-operator-674795f758-6fsw6 1/1 Running 0 3m11s 192.168.56.12 k8s2 kube-system cilium-operator-674795f758-zjv2k 1/1 Running 0 3m11s 192.168.56.11 k8s1 kube-system coredns-bb76b858c-698x9 1/1 
Running 0 7m8s 10.0.0.32 k8s2 kube-system etcd-k8s1 1/1 Running 0 27m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 27m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 2 27m 192.168.56.11 k8s1 kube-system kube-proxy-4rq7d 1/1 Running 0 27m 192.168.56.11 k8s1 kube-system kube-proxy-hgjjk 1/1 Running 0 23m 192.168.56.12 k8s2 kube-system kube-scheduler-k8s1 1/1 Running 2 27m 192.168.56.11 k8s1 kube-system log-gatherer-m22c8 1/1 Running 0 23m 192.168.56.11 k8s1 kube-system log-gatherer-qm9bw 1/1 Running 0 23m 192.168.56.12 k8s2 kube-system registry-adder-cgzss 1/1 Running 0 23m 192.168.56.12 k8s2 kube-system registry-adder-rc676 1/1 Running 0 23m 192.168.56.11 k8s1 Stderr: Fetching command output from pods [cilium-8hhfp cilium-bth92] cmd: kubectl exec -n kube-system cilium-8hhfp -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.20 (v1.20.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-52de5d2e) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 7/254 allocated from 10.0.0.0/24, IPv6: 7/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 43/43 healthy Proxy Status: OK, ip 10.0.0.86, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 1976/65535 (3.02%), Flows/s: 22.56 Metrics: Disabled Encryption: Disabled Cluster health: 1/2 reachable (2023-03-15T01:43:27Z) Name IP Node Endpoints k8s2 (localhost) 192.168.56.12 unknown unreachable Stderr: cmd: kubectl exec -n kube-system cilium-8hhfp -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 279 Disabled Disabled 40881 k8s:io.cilium.k8s.policy.cluster=default fd02::dc 10.0.0.22 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303150143k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 544 Disabled Disabled 5285 k8s:io.cilium.k8s.policy.cluster=default fd02::31 10.0.0.239 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303150143k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 577 Disabled Disabled 612 k8s:io.cilium.k8s.policy.cluster=default fd02::98 10.0.0.32 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 628 Disabled Disabled 22631 k8s:app=grafana fd02::4c 10.0.0.49 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring 1505 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 1518 Disabled Disabled 18721 k8s:app=prometheus fd02::73 10.0.0.123 ready k8s:io.cilium.k8s.policy.cluster=default 
k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring 1975 Disabled Disabled 4 reserved:health fd02::ad 10.0.0.155 ready Stderr: cmd: kubectl exec -n kube-system cilium-bth92 -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.20 (v1.20.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-52de5d2e) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.1.131, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 4820/65535 (7.35%), Flows/s: 26.32 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-15T01:42:57Z) Stderr: cmd: kubectl exec -n kube-system cilium-bth92 -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 280 Disabled Disabled 40881 k8s:io.cilium.k8s.policy.cluster=default fd02::128 10.0.1.24 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303150143k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 450 Disabled Disabled 4 reserved:health fd02::1f9 10.0.1.128 ready 1232 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/control-plane k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 2143 Disabled Disabled 5285 k8s:io.cilium.k8s.policy.cluster=default fd02::1be 10.0.1.120 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303150143k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient Stderr: ===================== Exiting AfterFailed ===================== 01:43:57 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 01:43:57 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 01:43:57 STEP: Deleting deployment demo_hostfw.yaml 01:43:57 STEP: Deleting namespace 202303150143k8sdatapathconfighostfirewallwithvxlan 01:44:13 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|b30d2ae3_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show.
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.9//1712/artifact/b30d2ae3_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.9//1712/artifact/test_results_Cilium-PR-K8s-1.20-kernel-4.9_1712_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.9/1712/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
maintainer-s-little-helper[bot] commented 1 year ago

PR #24387 hit this flake with 97.53% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-03-15T17:21:34.779814546Z level=error msg="Interrupt received" subsys=hive
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-03-15T17:21:34.779814546Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 2 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Interrupt received Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 0 ⚠️ Number of "level=warning" in logs: 6 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: Unable to restore endpoint, ignoring Key allocation attempt failed Cilium pods: [cilium-5sqrm cilium-nqrvj] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testclient-njkd6 false false testserver-7bj8c false false testserver-wc2m8 false false grafana-585bb89877-m62d4 false false prometheus-8885c5888-hhbj4 false false coredns-758664cbbf-q9g9g false false testclient-57xt6 false false Cilium agent 'cilium-5sqrm': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 38 Failed 0 Cilium agent 'cilium-nqrvj': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 33 Failed 0 ```
### Standard Error
Click to show. ```stack-error 17:18:41 STEP: Installing Cilium 17:18:44 STEP: Waiting for Cilium to become ready 17:20:35 STEP: Validating if Kubernetes DNS is deployed 17:20:35 STEP: Checking if deployment is ready 17:20:35 STEP: Checking if kube-dns service is plumbed correctly 17:20:35 STEP: Checking if DNS can resolve 17:20:35 STEP: Checking if pods have identity 17:20:38 STEP: Kubernetes DNS is up and operational 17:20:38 STEP: Validating Cilium Installation 17:20:38 STEP: Performing Cilium controllers preflight check 17:20:38 STEP: Performing Cilium health check 17:20:38 STEP: Performing Cilium status preflight check 17:20:38 STEP: Checking whether host EP regenerated 17:20:46 STEP: Performing Cilium service preflight check 17:20:46 STEP: Performing K8s service preflight check 17:20:52 STEP: Waiting for cilium-operator to be ready 17:20:52 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 17:20:52 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 17:20:52 STEP: Making sure all endpoints are in ready state 17:20:55 STEP: Creating namespace 202303151720k8sdatapathconfighostfirewallwithvxlan 17:20:55 STEP: Deploying demo_hostfw.yaml in namespace 202303151720k8sdatapathconfighostfirewallwithvxlan 17:20:55 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 17:20:55 STEP: WaitforNPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="") 17:21:07 STEP: WaitforNPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="") => 17:21:07 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 17:21:32 STEP: Checking host policies on ingress from local pod 17:21:32 STEP: Checking host policies on egress to remote node 17:21:32 STEP: Checking host policies on ingress from remote pod 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 17:21:32 STEP: Checking host policies on egress to remote pod 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 17:21:32 STEP: Checking host policies on egress to local pod 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 17:21:32 STEP: Checking host policies on ingress from remote node 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 17:21:32 STEP: 
WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 17:21:32 STEP: WaitforPods(namespace="202303151720k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => === Test Finished at 2023-03-15T17:21:38Z==== 17:21:38 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-15T17:21:34.779814546Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 17:21:38 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202303151720k8sdatapathconfighostfirewallwithvxlan testclient-57xt6 1/1 Running 0 48s 10.0.1.154 k8s2 202303151720k8sdatapathconfighostfirewallwithvxlan testclient-host-dqqpk 1/1 Running 0 48s 192.168.56.11 k8s1 202303151720k8sdatapathconfighostfirewallwithvxlan testclient-host-r9zv2 1/1 Running 0 48s 192.168.56.12 k8s2 202303151720k8sdatapathconfighostfirewallwithvxlan testclient-njkd6 1/1 Running 0 48s 10.0.0.113 k8s1 202303151720k8sdatapathconfighostfirewallwithvxlan testserver-7bj8c 2/2 Running 0 48s 10.0.1.189 k8s2 202303151720k8sdatapathconfighostfirewallwithvxlan testserver-host-d94dq 2/2 Running 0 48s 192.168.56.11 k8s1 202303151720k8sdatapathconfighostfirewallwithvxlan testserver-host-jqhll 2/2 Running 0 48s 192.168.56.12 k8s2 202303151720k8sdatapathconfighostfirewallwithvxlan testserver-wc2m8 2/2 Running 0 48s 10.0.0.3 k8s1 cilium-monitoring grafana-585bb89877-m62d4 1/1 Running 0 29m 10.0.0.245 k8s1 cilium-monitoring prometheus-8885c5888-hhbj4 1/1 Running 0 29m 10.0.0.184 k8s1 kube-system cilium-5sqrm 1/1 Running 0 2m59s 192.168.56.11 k8s1 kube-system cilium-nqrvj 1/1 Running 0 2m59s 192.168.56.12 k8s2 kube-system cilium-operator-676d5f7c67-6bczf 1/1 Running 0 2m59s 192.168.56.11 k8s1 kube-system cilium-operator-676d5f7c67-nxxwk 1/1 Running 0 2m59s 192.168.56.12 k8s2 kube-system coredns-758664cbbf-q9g9g 1/1 
Running 0 7m11s 10.0.1.45 k8s2 kube-system etcd-k8s1 1/1 Running 0 32m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 33m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 2 33m 192.168.56.11 k8s1 kube-system kube-proxy-b6nbg 1/1 Running 0 30m 192.168.56.12 k8s2 kube-system kube-proxy-qlfjk 1/1 Running 0 33m 192.168.56.11 k8s1 kube-system kube-scheduler-k8s1 1/1 Running 2 33m 192.168.56.11 k8s1 kube-system log-gatherer-bcckq 1/1 Running 0 29m 192.168.56.12 k8s2 kube-system log-gatherer-nf2gd 1/1 Running 0 29m 192.168.56.11 k8s1 kube-system registry-adder-5gxzw 1/1 Running 0 30m 192.168.56.11 k8s1 kube-system registry-adder-8rnwz 1/1 Running 0 30m 192.168.56.12 k8s2 Stderr: Fetching command output from pods [cilium-5sqrm cilium-nqrvj] cmd: kubectl exec -n kube-system cilium-5sqrm -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-3b3709a4) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 6/254 allocated from 10.0.0.0/24, IPv6: 6/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 38/38 healthy Proxy Status: OK, ip 10.0.0.181, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 3812/65535 (5.82%), Flows/s: 47.09 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-15T17:21:29Z) Stderr: cmd: kubectl exec -n kube-system cilium-5sqrm -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 192 Disabled Disabled 9868 k8s:app=grafana fd02::48 10.0.0.245 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring 847 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 1100 Disabled Disabled 13270 k8s:app=prometheus fd02::22 10.0.0.184 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring 1292 Disabled Disabled 4 reserved:health fd02::52 10.0.0.142 ready 1805 Disabled Disabled 6001 k8s:io.cilium.k8s.policy.cluster=default fd02::ec 10.0.0.3 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303151720k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1825 Disabled Disabled 30582 k8s:io.cilium.k8s.policy.cluster=default fd02::68 10.0.0.113 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303151720k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient Stderr: cmd: kubectl exec -n kube-system cilium-nqrvj -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled 
Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.0 (v1.13.0-3b3709a4) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 33/33 healthy Proxy Status: OK, ip 10.0.1.112, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 1953/65535 (2.98%), Flows/s: 14.44 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-15T17:20:52Z) Stderr: cmd: kubectl exec -n kube-system cilium-nqrvj -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 82 Disabled Disabled 6001 k8s:io.cilium.k8s.policy.cluster=default fd02::1ca 10.0.1.189 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303151720k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 2122 Disabled Disabled 6609 k8s:io.cilium.k8s.policy.cluster=default fd02::1f4 10.0.1.45 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 2971 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 3167 Disabled Disabled 30582 k8s:io.cilium.k8s.policy.cluster=default fd02::11d 10.0.1.154 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303151720k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 3788 Disabled Disabled 4 reserved:health fd02::198 10.0.1.16 ready Stderr: ===================== Exiting AfterFailed ===================== 17:21:51 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 17:21:51 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 17:21:51 STEP: Deleting deployment demo_hostfw.yaml 17:21:51 STEP: Deleting namespace 202303151720k8sdatapathconfighostfirewallwithvxlan 17:22:07 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|292ae870_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//4192/artifact/292ae870_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//4192/artifact/93f41bfc_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//4192/artifact/test_results_Cilium-PR-K8s-1.16-kernel-4.9_4192_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9/4192/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
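Each report collected in this issue reduces to the CI harness finding at least one `level=error` line in the cilium-operator logs after the host-firewall test finished. As a rough manual equivalent of that check (a sketch only: the `io.cilium/app=operator` pod label is taken from the FAIL message in these reports, and the test's actual allow-list of known errors is not reproduced here):

```shell
# Sketch: surface error/warning level entries from the cilium-operator logs,
# roughly what the Ginkgo suite scans for after each test run.
kubectl -n kube-system logs -l io.cilium/app=operator --timestamps \
  | grep -E 'level=(error|warning)'
```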
maintainer-s-little-helper[bot] commented 1 year ago

PR #24445 hit this flake with 96.99% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-03-29T22:28:10.100318115Z level=error msg="Failed to release lock: Put \"https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock?timeout=5s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" subsys=klog
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-03-29T22:28:10.100318115Z level=error msg=\"Failed to release lock: Put \\\"https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock?timeout=5s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\" subsys=klog" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 3 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Failed to release lock: Put \ Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 0 ⚠️ Number of "level=warning" in logs: 6 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: Unable to restore endpoint, ignoring UpdateIdentities: Skipping Delete of a non-existing identity Cilium pods: [cilium-rkhsd cilium-wwp7p] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testserver-877bn false false grafana-585bb89877-9q6nf false false prometheus-8885c5888-gv7cp false false coredns-758664cbbf-jpx5k false false testclient-hc7bj false false testclient-jpnm6 false false testserver-554dg false false Cilium agent 'cilium-rkhsd': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 43 Failed 0 Cilium agent 'cilium-wwp7p': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0 ```
### Standard Error
Click to show. ```stack-error 22:25:26 STEP: Installing Cilium 22:25:29 STEP: Waiting for Cilium to become ready 22:27:18 STEP: Validating if Kubernetes DNS is deployed 22:27:18 STEP: Checking if deployment is ready 22:27:18 STEP: Checking if kube-dns service is plumbed correctly 22:27:18 STEP: Checking if pods have identity 22:27:18 STEP: Checking if DNS can resolve 22:27:22 STEP: Kubernetes DNS is up and operational 22:27:22 STEP: Validating Cilium Installation 22:27:22 STEP: Performing Cilium controllers preflight check 22:27:22 STEP: Performing Cilium status preflight check 22:27:22 STEP: Checking whether host EP regenerated 22:27:22 STEP: Performing Cilium health check 22:27:29 STEP: Performing Cilium service preflight check 22:27:29 STEP: Performing K8s service preflight check 22:27:35 STEP: Waiting for cilium-operator to be ready 22:27:35 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 22:27:35 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 22:27:35 STEP: Making sure all endpoints are in ready state 22:27:38 STEP: Creating namespace 202303292227k8sdatapathconfighostfirewallwithvxlan 22:27:38 STEP: Deploying demo_hostfw.yaml in namespace 202303292227k8sdatapathconfighostfirewallwithvxlan 22:27:38 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 22:27:38 STEP: WaitforNPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="") 22:27:51 STEP: WaitforNPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="") => 22:27:51 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 22:28:14 STEP: Checking host policies on egress to remote node 22:28:14 STEP: Checking host policies on egress to local pod 22:28:14 STEP: Checking host policies on egress to remote pod 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 22:28:14 STEP: Checking host policies on ingress from remote pod 22:28:14 STEP: Checking host policies on ingress from local pod 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 22:28:14 STEP: Checking host policies on ingress from remote node 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 22:28:14 STEP: 
WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 22:28:14 STEP: WaitforPods(namespace="202303292227k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => === Test Finished at 2023-03-29T22:28:20Z==== 22:28:20 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-29T22:28:10.100318115Z level=error msg="Failed to release lock: Put \"https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock?timeout=5s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" subsys=klog ===================== TEST FAILED ===================== 22:28:20 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202303292227k8sdatapathconfighostfirewallwithvxlan testclient-hc7bj 1/1 Running 0 47s 10.0.1.115 k8s1 202303292227k8sdatapathconfighostfirewallwithvxlan testclient-host-hnwld 1/1 Running 0 47s 192.168.56.11 k8s1 202303292227k8sdatapathconfighostfirewallwithvxlan testclient-host-sbvq7 1/1 Running 0 47s 192.168.56.12 k8s2 202303292227k8sdatapathconfighostfirewallwithvxlan testclient-jpnm6 1/1 Running 0 47s 10.0.0.62 k8s2 202303292227k8sdatapathconfighostfirewallwithvxlan testserver-554dg 2/2 Running 0 47s 10.0.1.198 k8s1 202303292227k8sdatapathconfighostfirewallwithvxlan testserver-877bn 2/2 Running 0 47s 10.0.0.180 k8s2 202303292227k8sdatapathconfighostfirewallwithvxlan testserver-host-4sr97 2/2 Running 0 47s 192.168.56.11 k8s1 202303292227k8sdatapathconfighostfirewallwithvxlan testserver-host-vvvwd 2/2 Running 0 47s 192.168.56.12 k8s2 cilium-monitoring grafana-585bb89877-9q6nf 1/1 Running 0 37m 10.0.0.165 k8s2 cilium-monitoring prometheus-8885c5888-gv7cp 1/1 Running 0 37m 10.0.0.242 k8s2 kube-system cilium-operator-6f45b594dd-bhnq8 0/1 Running 1 2m56s 192.168.56.12 k8s2 kube-system cilium-operator-6f45b594dd-qsmq9 1/1 
Running 0 2m56s 192.168.56.11 k8s1 kube-system cilium-rkhsd 1/1 Running 0 2m56s 192.168.56.12 k8s2 kube-system cilium-wwp7p 1/1 Running 0 2m56s 192.168.56.11 k8s1 kube-system coredns-758664cbbf-jpx5k 1/1 Running 0 36m 10.0.0.110 k8s2 kube-system etcd-k8s1 1/1 Running 0 41m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 41m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 0 41m 192.168.56.11 k8s1 kube-system kube-proxy-77gth 1/1 Running 0 38m 192.168.56.12 k8s2 kube-system kube-proxy-mgmsj 1/1 Running 0 42m 192.168.56.11 k8s1 kube-system kube-scheduler-k8s1 1/1 Running 0 41m 192.168.56.11 k8s1 kube-system log-gatherer-275cx 1/1 Running 0 37m 192.168.56.11 k8s1 kube-system log-gatherer-9h2wz 1/1 Running 0 37m 192.168.56.12 k8s2 kube-system registry-adder-rbp5w 1/1 Running 0 38m 192.168.56.12 k8s2 kube-system registry-adder-vs774 1/1 Running 0 38m 192.168.56.11 k8s1 Stderr: Fetching command output from pods [cilium-rkhsd cilium-wwp7p] cmd: kubectl exec -n kube-system cilium-rkhsd -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.90 (v1.13.90-e777a0ac) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 7/254 allocated from 10.0.0.0/24, IPv6: 7/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 43/43 healthy Proxy Status: OK, ip 10.0.0.39, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2309/65535 (3.52%), Flows/s: 17.09 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-29T22:28:18Z) Stderr: cmd: kubectl exec -n kube-system cilium-rkhsd -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 3 Disabled Disabled 10171 k8s:io.cilium.k8s.policy.cluster=default fd02::29 10.0.0.110 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 454 Disabled Disabled 1402 k8s:io.cilium.k8s.policy.cluster=default fd02::5b 10.0.0.62 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303292227k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 501 Disabled Disabled 4 reserved:health fd02::1 10.0.0.121 ready 739 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 1470 Disabled Disabled 17991 k8s:app=grafana fd02::e6 10.0.0.165 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring 2286 Disabled Disabled 13468 k8s:io.cilium.k8s.policy.cluster=default fd02::a 10.0.0.180 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303292227k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw 
k8s:zgroup=testServer 3204 Disabled Disabled 55083 k8s:app=prometheus fd02::11 10.0.0.242 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring Stderr: cmd: kubectl exec -n kube-system cilium-wwp7p -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.90 (v1.13.90-e777a0ac) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.1.195, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 3551/65535 (5.42%), Flows/s: 41.31 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-29T22:28:09Z) Stderr: cmd: kubectl exec -n kube-system cilium-wwp7p -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 775 Disabled Disabled 1402 k8s:io.cilium.k8s.policy.cluster=default fd02::135 10.0.1.115 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303292227k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 1504 Disabled Disabled 4 reserved:health fd02::153 10.0.1.58 ready 2956 Disabled Disabled 13468 k8s:io.cilium.k8s.policy.cluster=default fd02::162 10.0.1.198 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303292227k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3244 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host Stderr: ===================== Exiting AfterFailed ===================== 22:28:32 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 22:28:32 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 22:28:32 STEP: Deleting deployment demo_hostfw.yaml 22:28:32 STEP: Deleting namespace 202303292227k8sdatapathconfighostfirewallwithvxlan 22:28:48 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|a42b1f75_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.19//691/artifact/a42b1f75_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.19//691/artifact/test_results_Cilium-PR-K8s-1.16-kernel-4.19_691_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.19/691/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
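In this instance the flagged line is the operator failing to release its leader-election Lease (`kube-system/cilium-operator-resource-lock`) because the apiserver request timed out. A minimal triage sketch, assuming access to the test cluster while it still exists, to see which replica holds the lease and when it last renewed:

```shell
# Sketch: inspect the leader-election lease named in the error message.
# holderIdentity and renewTime indicate which operator replica owns it and
# whether renewal stalled around the failure timestamp.
kubectl -n kube-system get lease cilium-operator-resource-lock -o yaml
```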
maintainer-s-little-helper[bot] commented 1 year ago

PR #24547 hit this flake with 97.53% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-03-30T12:49:54.217285777Z level=error msg="Interrupt received" subsys=hive
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-03-30T12:49:54.217285777Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 1 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Interrupt received Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 5 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Unable to restore endpoint, ignoring Cilium pods: [cilium-d8gdj cilium-q94ds] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress coredns-758664cbbf-dwqv2 false false testclient-f46ln false false testclient-zb2n7 false false testserver-cf92s false false testserver-kzjbr false false Cilium agent 'cilium-d8gdj': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0 Cilium agent 'cilium-q94ds': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 33 Failed 0 ```
### Standard Error
Click to show. ```stack-error 12:47:01 STEP: Installing Cilium 12:47:03 STEP: Waiting for Cilium to become ready 12:48:48 STEP: Validating if Kubernetes DNS is deployed 12:48:48 STEP: Checking if deployment is ready 12:48:49 STEP: Checking if kube-dns service is plumbed correctly 12:48:49 STEP: Checking if pods have identity 12:48:49 STEP: Checking if DNS can resolve 12:48:52 STEP: Kubernetes DNS is up and operational 12:48:52 STEP: Validating Cilium Installation 12:48:52 STEP: Performing Cilium controllers preflight check 12:48:52 STEP: Performing Cilium health check 12:48:52 STEP: Performing Cilium status preflight check 12:48:52 STEP: Checking whether host EP regenerated 12:49:00 STEP: Performing Cilium service preflight check 12:49:00 STEP: Performing K8s service preflight check 12:49:06 STEP: Waiting for cilium-operator to be ready 12:49:06 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 12:49:06 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 12:49:06 STEP: Making sure all endpoints are in ready state 12:49:09 STEP: Creating namespace 202303301249k8sdatapathconfighostfirewallwithvxlan 12:49:09 STEP: Deploying demo_hostfw.yaml in namespace 202303301249k8sdatapathconfighostfirewallwithvxlan 12:49:09 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 12:49:09 STEP: WaitforNPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="") 12:49:21 STEP: WaitforNPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="") => 12:49:21 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 12:50:00 STEP: Checking host policies on egress to remote node 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:50:00 STEP: Checking host policies on egress to local pod 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:50:00 STEP: Checking host policies on ingress from local pod 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 12:50:00 STEP: Checking host policies on egress to remote pod 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:50:00 STEP: Checking host policies on ingress from remote node 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:50:00 STEP: Checking host policies on ingress from remote pod 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 12:50:00 STEP: 
WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 12:50:00 STEP: WaitforPods(namespace="202303301249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 2023-03-30T12:50:05Z==== 12:50:05 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-30T12:49:54.217285777Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 12:50:05 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202303301249k8sdatapathconfighostfirewallwithvxlan testclient-f46ln 1/1 Running 0 61s 10.0.0.105 k8s2 202303301249k8sdatapathconfighostfirewallwithvxlan testclient-host-6s55s 1/1 Running 0 61s 192.168.56.12 k8s2 202303301249k8sdatapathconfighostfirewallwithvxlan testclient-host-nfszf 1/1 Running 0 61s 192.168.56.11 k8s1 202303301249k8sdatapathconfighostfirewallwithvxlan testclient-zb2n7 1/1 Running 0 61s 10.0.1.162 k8s1 202303301249k8sdatapathconfighostfirewallwithvxlan testserver-cf92s 2/2 Running 0 61s 10.0.1.17 k8s1 202303301249k8sdatapathconfighostfirewallwithvxlan testserver-host-fkmtn 2/2 Running 0 61s 192.168.56.11 k8s1 202303301249k8sdatapathconfighostfirewallwithvxlan testserver-host-w2nnt 2/2 Running 0 61s 192.168.56.12 k8s2 202303301249k8sdatapathconfighostfirewallwithvxlan testserver-kzjbr 2/2 Running 0 61s 10.0.0.201 k8s2 cilium-monitoring grafana-585bb89877-w4cbb 0/1 Running 0 40m 10.0.0.54 k8s2 cilium-monitoring prometheus-8885c5888-pq6gq 1/1 Running 0 40m 10.0.0.80 k8s2 kube-system cilium-d8gdj 1/1 Running 0 3m7s 192.168.56.11 k8s1 kube-system cilium-operator-5bdb4b9bfb-9td8s 1/1 Running 0 3m7s 192.168.56.11 k8s1 kube-system cilium-operator-5bdb4b9bfb-zm896 1/1 Running 0 3m7s 192.168.56.12 k8s2 kube-system cilium-q94ds 1/1 Running 0 3m7s 192.168.56.12 k8s2 kube-system coredns-758664cbbf-dwqv2 1/1 Running 0 
31m 10.0.0.75 k8s2 kube-system etcd-k8s1 1/1 Running 0 44m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 43m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 4 44m 192.168.56.11 k8s1 kube-system kube-proxy-fqjcv 1/1 Running 0 41m 192.168.56.12 k8s2 kube-system kube-proxy-rhlph 1/1 Running 0 42m 192.168.56.11 k8s1 kube-system kube-scheduler-k8s1 1/1 Running 4 44m 192.168.56.11 k8s1 kube-system log-gatherer-755rd 1/1 Running 0 41m 192.168.56.11 k8s1 kube-system log-gatherer-v4f2v 1/1 Running 0 41m 192.168.56.12 k8s2 kube-system registry-adder-8r2pk 1/1 Running 0 41m 192.168.56.12 k8s2 kube-system registry-adder-qlhnt 1/1 Running 0 41m 192.168.56.11 k8s1 Stderr: Fetching command output from pods [cilium-d8gdj cilium-q94ds] cmd: kubectl exec -n kube-system cilium-d8gdj -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-8d89c3f1) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.1.147, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 4401/65535 (6.72%), Flows/s: 45.97 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-30T12:48:59Z) Stderr: cmd: kubectl exec -n kube-system cilium-d8gdj -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 853 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 1205 Disabled Disabled 33320 k8s:io.cilium.k8s.policy.cluster=default fd02::198 10.0.1.17 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301249k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1797 Disabled Disabled 55546 k8s:io.cilium.k8s.policy.cluster=default fd02::106 10.0.1.162 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301249k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 2724 Disabled Disabled 4 reserved:health fd02::194 10.0.1.193 ready Stderr: cmd: kubectl exec -n kube-system cilium-q94ds -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none 
CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-8d89c3f1) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 33/33 healthy Proxy Status: OK, ip 10.0.0.217, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2345/65535 (3.58%), Flows/s: 16.84 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-30T12:50:02Z) Stderr: cmd: kubectl exec -n kube-system cilium-q94ds -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 366 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 436 Disabled Disabled 4 reserved:health fd02::6c 10.0.0.167 ready 516 Disabled Disabled 60842 k8s:io.cilium.k8s.policy.cluster=default fd02::89 10.0.0.75 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 770 Disabled Disabled 55546 k8s:io.cilium.k8s.policy.cluster=default fd02::d3 10.0.0.105 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301249k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 4041 Disabled Disabled 33320 k8s:io.cilium.k8s.policy.cluster=default fd02::76 10.0.0.201 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301249k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer Stderr: ===================== Exiting AfterFailed ===================== 12:50:17 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 12:50:17 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 12:50:17 STEP: Deleting deployment demo_hostfw.yaml 12:50:18 STEP: Deleting namespace 202303301249k8sdatapathconfighostfirewallwithvxlan 12:50:33 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|bcb4ff1e_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//4207/artifact/bcb4ff1e_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//4207/artifact/test_results_Cilium-PR-K8s-1.16-kernel-4.9_4207_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9/4207/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
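Here the flagged line is `level=error msg="Interrupt received" subsys=hive`, which the operator appears to emit when it receives a termination signal, so the error most likely reflects an operator replica being stopped or restarted mid-test rather than a datapath problem. A sketch for confirming that on a live cluster (`<operator-pod>` is a placeholder; the `name=cilium-operator` label matches the filter the test itself uses in its WaitforPods checks):

```shell
# Sketch: check operator restart counts, then pull the previous container's
# logs to see what preceded the "Interrupt received" message.
kubectl -n kube-system get pods -l name=cilium-operator -o wide
kubectl -n kube-system logs <operator-pod> --previous --timestamps
```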
maintainer-s-little-helper[bot] commented 1 year ago

PR #24550 hit this flake with 94.78% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-03-30T16:13:59.383282949Z level=error msg="Interrupt received" subsys=hive
/home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-03-30T16:13:59.383282949Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 2 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Interrupt received Cilium pods: [cilium-59569 cilium-9xl7b] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress grafana-98b4b9789-w4c5j false false prometheus-6f66c554f4-xrm7j false false coredns-567b6dd84-vnvp7 false false testclient-4wpvm false false testclient-gth6c false false testserver-kzv9w false false testserver-v278d false false Cilium agent 'cilium-59569': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 38 Failed 0 Cilium agent 'cilium-9xl7b': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 32 Failed 0 ```
### Standard Error
Click to show. ```stack-error 16:12:32 STEP: Installing Cilium 16:12:35 STEP: Waiting for Cilium to become ready 16:12:47 STEP: Validating if Kubernetes DNS is deployed 16:12:47 STEP: Checking if deployment is ready 16:12:47 STEP: Checking if kube-dns service is plumbed correctly 16:12:47 STEP: Checking if pods have identity 16:12:47 STEP: Checking if DNS can resolve 16:12:53 STEP: Kubernetes DNS is not ready: 5s timeout expired 16:12:53 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns) 16:12:53 STEP: Waiting for Kubernetes DNS to become operational 16:12:53 STEP: Checking if deployment is ready 16:12:53 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 16:12:54 STEP: Checking if deployment is ready 16:12:54 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 16:12:55 STEP: Checking if deployment is ready 16:12:55 STEP: Checking if kube-dns service is plumbed correctly 16:12:55 STEP: Checking if pods have identity 16:12:55 STEP: Checking if DNS can resolve 16:12:59 STEP: Validating Cilium Installation 16:12:59 STEP: Performing Cilium controllers preflight check 16:12:59 STEP: Performing Cilium health check 16:12:59 STEP: Checking whether host EP regenerated 16:12:59 STEP: Performing Cilium status preflight check 16:13:07 STEP: Performing Cilium service preflight check 16:13:07 STEP: Performing K8s service preflight check 16:13:12 STEP: Waiting for cilium-operator to be ready 16:13:12 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 16:13:12 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 16:13:12 STEP: Making sure all endpoints are in ready state 16:13:15 STEP: Creating namespace 202303301613k8sdatapathconfighostfirewallwithvxlan 16:13:15 STEP: Deploying demo_hostfw.yaml in namespace 202303301613k8sdatapathconfighostfirewallwithvxlan 16:13:15 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 16:13:15 STEP: WaitforNPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="") 16:13:18 STEP: WaitforNPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="") => 16:13:18 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 16:13:43 STEP: Checking host policies on egress to remote node 16:13:43 STEP: Checking host policies on egress to local pod 16:13:43 STEP: Checking host policies on ingress from local pod 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 16:13:43 STEP: Checking host policies on ingress from remote pod 16:13:43 STEP: Checking host policies on egress to remote pod 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 16:13:43 STEP: Checking host policies on ingress from remote node 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", 
filter="-l zgroup=testServerHost") => 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 16:13:43 STEP: WaitforPods(namespace="202303301613k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 2023-03-30T16:13:49Z==== 16:13:49 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-30T16:13:59.383282949Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 16:14:06 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202303301613k8sdatapathconfighostfirewallwithvxlan testclient-4wpvm 1/1 Running 0 56s 10.0.1.176 k8s2 202303301613k8sdatapathconfighostfirewallwithvxlan testclient-gth6c 1/1 Running 0 56s 10.0.0.64 k8s1 202303301613k8sdatapathconfighostfirewallwithvxlan testclient-host-6w6ds 1/1 Running 0 56s 192.168.56.11 k8s1 202303301613k8sdatapathconfighostfirewallwithvxlan testclient-host-jf7xz 1/1 Running 0 56s 192.168.56.12 k8s2 202303301613k8sdatapathconfighostfirewallwithvxlan testserver-host-2lbdx 2/2 Running 0 56s 192.168.56.12 k8s2 202303301613k8sdatapathconfighostfirewallwithvxlan testserver-host-m5b9h 2/2 Running 0 56s 192.168.56.11 k8s1 202303301613k8sdatapathconfighostfirewallwithvxlan testserver-kzv9w 2/2 Running 0 56s 10.0.1.6 k8s2 
202303301613k8sdatapathconfighostfirewallwithvxlan testserver-v278d 2/2 Running 0 56s 10.0.0.58 k8s1 cilium-monitoring grafana-98b4b9789-w4c5j 1/1 Running 0 24m 10.0.0.199 k8s1 cilium-monitoring prometheus-6f66c554f4-xrm7j 1/1 Running 0 24m 10.0.0.246 k8s1 kube-system cilium-59569 1/1 Running 0 96s 192.168.56.11 k8s1 kube-system cilium-9xl7b 1/1 Running 0 96s 192.168.56.12 k8s2 kube-system cilium-operator-7dd676fffd-7zfxk 1/1 Running 1 (5s ago) 96s 192.168.56.12 k8s2 kube-system cilium-operator-7dd676fffd-9fs9m 1/1 Running 0 96s 192.168.56.11 k8s1 kube-system coredns-567b6dd84-vnvp7 1/1 Running 0 78s 10.0.1.226 k8s2 kube-system etcd-k8s1 1/1 Running 0 29m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 29m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 0 29m 192.168.56.11 k8s1 kube-system kube-proxy-kxbbw 1/1 Running 0 25m 192.168.56.12 k8s2 kube-system kube-proxy-mxm9t 1/1 Running 0 29m 192.168.56.11 k8s1 kube-system kube-scheduler-k8s1 1/1 Running 0 29m 192.168.56.11 k8s1 kube-system log-gatherer-x5xfh 1/1 Running 0 24m 192.168.56.12 k8s2 kube-system log-gatherer-zbdlz 1/1 Running 0 24m 192.168.56.11 k8s1 kube-system registry-adder-ldt5m 1/1 Running 0 25m 192.168.56.11 k8s1 kube-system registry-adder-p6g7n 1/1 Running 0 25m 192.168.56.12 k8s2 Stderr: Fetching command output from pods [cilium-59569 cilium-9xl7b] cmd: kubectl exec -n kube-system cilium-59569 -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.25 (v1.25.0) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-6f148ea4) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 6/254 allocated from 10.0.0.0/24, IPv6: 6/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 38/38 healthy Proxy Status: OK, ip 10.0.0.29, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 5529/65535 (8.44%), Flows/s: 64.06 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-30T16:13:53Z) Stderr: cmd: kubectl exec -n kube-system cilium-59569 -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 166 Disabled Disabled 4 reserved:health fd02::f4 10.0.0.206 ready 965 Disabled Disabled 26784 k8s:app=grafana fd02::45 10.0.0.199 ready k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring 1608 Disabled Disabled 42018 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303301613k8sdatapathconfighostfirewallwithvxlan fd02::97 10.0.0.58 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default 
k8s:io.kubernetes.pod.namespace=202303301613k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1633 Disabled Disabled 57182 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303301613k8sdatapathconfighostfirewallwithvxlan fd02::a7 10.0.0.64 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301613k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 2981 Disabled Disabled 47896 k8s:app=prometheus fd02::6c 10.0.0.246 ready k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring 3904 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/control-plane k8s:node.kubernetes.io/exclude-from-external-load-balancers k8s:status=lockdown reserved:host Stderr: cmd: kubectl exec -n kube-system cilium-9xl7b -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.25 (v1.25.0) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-6f148ea4) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 32/32 healthy Proxy Status: OK, ip 10.0.1.60, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 1590/65535 (2.43%), Flows/s: 17.94 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-30T16:13:12Z) Stderr: cmd: kubectl exec -n kube-system cilium-9xl7b -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 221 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 539 Disabled Disabled 4 reserved:health fd02::14c 10.0.1.88 ready 725 Disabled Disabled 42018 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303301613k8sdatapathconfighostfirewallwithvxlan fd02::177 10.0.1.6 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301613k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 867 Disabled Disabled 8030 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system fd02::1a7 10.0.1.226 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 2304 Disabled Disabled 57182 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303301613k8sdatapathconfighostfirewallwithvxlan fd02::131 10.0.1.176 ready k8s:io.cilium.k8s.policy.cluster=default 
k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301613k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient Stderr: ===================== Exiting AfterFailed ===================== 16:14:20 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 16:14:20 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 16:14:20 STEP: Deleting deployment demo_hostfw.yaml 16:14:20 STEP: Deleting namespace 202303301613k8sdatapathconfighostfirewallwithvxlan 16:14:35 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|a2964aa4_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-4.19//1536/artifact/a2964aa4_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-4.19//1536/artifact/e44e1593_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-4.19//1536/artifact/test_results_Cilium-PR-K8s-1.25-kernel-4.19_1536_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-4.19/1536/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
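Once the Jenkins workers are recycled, the attached sysdump archives are the only remaining record of these runs. A sketch for pulling one of the artifacts linked above for offline inspection (the URL is copied from the ZIP links in this comment; the output directory name is arbitrary):

```shell
# Sketch: download and unpack the attached CI artifact for offline analysis
# of the full operator and agent logs.
curl -LO "https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-4.19//1536/artifact/a2964aa4_K8sDatapathConfig_Host_firewall_With_VXLAN.zip"
unzip a2964aa4_K8sDatapathConfig_Host_firewall_With_VXLAN.zip -d flake-artifacts
```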
maintainer-s-little-helper[bot] commented 1 year ago

PR #24607 hit this flake with 95.32% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-03-30T17:10:52.297198160Z level=error msg="Interrupt received" subsys=hive
/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-03-30T17:10:52.297198160Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 2 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Interrupt received Number of "context deadline exceeded" in logs: 4 Number of "level=error" in logs: 0 ⚠️ Number of "level=warning" in logs: 6 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: Unable to restore endpoint, ignoring Key allocation attempt failed Cilium pods: [cilium-69tzw cilium-h474b] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testclient-c65mc false false testclient-wv95x false false testserver-b6kh8 false false testserver-jfswh false false grafana-698dc95f6c-74fxm false false prometheus-669755c8c5-d7p4j false false coredns-85fbf8f7dd-8ffrh false false Cilium agent 'cilium-69tzw': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 32 Failed 0 Cilium agent 'cilium-h474b': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 38 Failed 0 ```
### Standard Error
Click to show. ```stack-error 17:08:16 STEP: Installing Cilium 17:08:18 STEP: Waiting for Cilium to become ready 17:09:09 STEP: Validating if Kubernetes DNS is deployed 17:09:09 STEP: Checking if deployment is ready 17:09:09 STEP: Checking if kube-dns service is plumbed correctly 17:09:09 STEP: Checking if pods have identity 17:09:09 STEP: Checking if DNS can resolve 17:09:15 STEP: Kubernetes DNS is not ready: 5s timeout expired 17:09:15 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns) 17:09:15 STEP: Waiting for Kubernetes DNS to become operational 17:09:15 STEP: Checking if deployment is ready 17:09:15 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 17:09:16 STEP: Checking if deployment is ready 17:09:16 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 17:09:17 STEP: Checking if deployment is ready 17:09:17 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 17:09:18 STEP: Checking if deployment is ready 17:09:18 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 17:09:19 STEP: Checking if deployment is ready 17:09:19 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 17:09:20 STEP: Checking if deployment is ready 17:09:20 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 17:09:21 STEP: Checking if deployment is ready 17:09:21 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 17:09:22 STEP: Checking if deployment is ready 17:09:22 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 17:09:23 STEP: Checking if deployment is ready 17:09:23 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 17:09:24 STEP: Checking if deployment is ready 17:09:24 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 17:09:25 STEP: Checking if deployment is ready 17:09:25 STEP: Checking if kube-dns service is plumbed correctly 17:09:25 STEP: Checking if pods have identity 17:09:25 STEP: Checking if DNS can resolve 17:09:29 STEP: Validating Cilium Installation 17:09:29 STEP: Performing Cilium controllers preflight check 17:09:29 STEP: Performing Cilium status preflight check 17:09:29 STEP: Performing Cilium health check 17:09:29 STEP: Checking whether host EP regenerated 17:09:36 STEP: Performing Cilium service preflight check 17:09:36 STEP: Performing K8s service preflight check 17:09:37 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-h474b': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init) Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 17:09:37 STEP: Performing Cilium controllers preflight check 17:09:37 STEP: Performing Cilium health check 17:09:37 STEP: Performing Cilium status preflight check 17:09:37 STEP: Checking whether host EP regenerated 17:09:45 STEP: Performing Cilium service preflight check 17:09:45 STEP: Performing K8s service preflight check 17:09:46 STEP: Performing Cilium controllers preflight check 17:09:46 STEP: Performing Cilium status preflight check 17:09:46 STEP: Performing Cilium health check 17:09:46 STEP: Checking whether host EP 
regenerated 17:09:54 STEP: Performing Cilium service preflight check 17:09:54 STEP: Performing K8s service preflight check 17:09:54 STEP: Performing Cilium controllers preflight check 17:09:54 STEP: Performing Cilium status preflight check 17:09:54 STEP: Performing Cilium health check 17:09:54 STEP: Checking whether host EP regenerated 17:10:02 STEP: Performing Cilium service preflight check 17:10:02 STEP: Performing K8s service preflight check 17:10:03 STEP: Performing Cilium controllers preflight check 17:10:03 STEP: Checking whether host EP regenerated 17:10:03 STEP: Performing Cilium status preflight check 17:10:03 STEP: Performing Cilium health check 17:10:10 STEP: Performing Cilium service preflight check 17:10:10 STEP: Performing K8s service preflight check 17:10:16 STEP: Waiting for cilium-operator to be ready 17:10:17 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 17:10:17 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 17:10:17 STEP: Making sure all endpoints are in ready state 17:10:19 STEP: Creating namespace 202303301710k8sdatapathconfighostfirewallwithvxlan 17:10:19 STEP: Deploying demo_hostfw.yaml in namespace 202303301710k8sdatapathconfighostfirewallwithvxlan 17:10:20 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 17:10:20 STEP: WaitforNPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="") 17:10:24 STEP: WaitforNPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="") => 17:10:24 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 17:10:48 STEP: Checking host policies on egress to remote node 17:10:48 STEP: Checking host policies on egress to local pod 17:10:48 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 17:10:48 STEP: Checking host policies on ingress from remote node 17:10:48 STEP: Checking host policies on ingress from local pod 17:10:48 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 17:10:48 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 17:10:48 STEP: Checking host policies on ingress from remote pod 17:10:48 STEP: Checking host policies on egress to remote pod 17:10:48 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 17:10:48 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 17:10:48 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 17:10:48 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 17:10:48 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 17:10:48 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 17:10:48 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 17:10:48 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 17:10:48 STEP: 
WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 17:10:49 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 17:10:49 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 17:10:49 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 17:10:49 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 17:10:49 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 17:10:49 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 17:10:49 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 17:10:49 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 17:10:49 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 17:10:49 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 17:10:49 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 17:10:49 STEP: WaitforPods(namespace="202303301710k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 2023-03-30T17:10:54Z==== 17:10:54 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-30T17:10:52.297198160Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 17:10:55 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202303301710k8sdatapathconfighostfirewallwithvxlan testclient-c65mc 1/1 Running 0 39s 10.0.1.164 k8s1 202303301710k8sdatapathconfighostfirewallwithvxlan testclient-host-7hw5d 1/1 Running 0 39s 192.168.56.11 k8s1 202303301710k8sdatapathconfighostfirewallwithvxlan testclient-host-8xvhn 1/1 Running 0 39s 192.168.56.12 k8s2 202303301710k8sdatapathconfighostfirewallwithvxlan testclient-wv95x 1/1 Running 0 39s 10.0.0.244 k8s2 202303301710k8sdatapathconfighostfirewallwithvxlan testserver-b6kh8 2/2 Running 0 39s 10.0.1.195 k8s1 202303301710k8sdatapathconfighostfirewallwithvxlan testserver-host-lrrrd 2/2 Running 0 39s 192.168.56.11 k8s1 202303301710k8sdatapathconfighostfirewallwithvxlan testserver-host-xgt6t 2/2 Running 0 39s 192.168.56.12 k8s2 202303301710k8sdatapathconfighostfirewallwithvxlan testserver-jfswh 2/2 Running 0 39s 10.0.0.29 k8s2 cilium-monitoring grafana-698dc95f6c-74fxm 1/1 Running 0 46m 10.0.0.172 k8s2 cilium-monitoring prometheus-669755c8c5-d7p4j 1/1 Running 0 46m 10.0.0.36 k8s2 kube-system cilium-69tzw 1/1 Running 0 2m41s 192.168.56.11 k8s1 kube-system cilium-h474b 1/1 Running 0 2m41s 192.168.56.12 k8s2 kube-system cilium-operator-64c84c77dc-bl9qz 1/1 Running 0 2m41s 192.168.56.11 k8s1 kube-system cilium-operator-64c84c77dc-cx8cl 1/1 Running 0 2m41s 192.168.56.12 k8s2 kube-system coredns-85fbf8f7dd-8ffrh 1/1 
Running 0 104s 10.0.1.10 k8s1 kube-system etcd-k8s1 1/1 Running 0 51m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 51m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 3 51m 192.168.56.11 k8s1 kube-system kube-proxy-cwzbw 1/1 Running 0 51m 192.168.56.11 k8s1 kube-system kube-proxy-mszsm 1/1 Running 0 47m 192.168.56.12 k8s2 kube-system kube-scheduler-k8s1 1/1 Running 3 51m 192.168.56.11 k8s1 kube-system log-gatherer-qcq5b 1/1 Running 0 46m 192.168.56.11 k8s1 kube-system log-gatherer-s7kcq 1/1 Running 0 46m 192.168.56.12 k8s2 kube-system registry-adder-5z8ds 1/1 Running 0 47m 192.168.56.11 k8s1 kube-system registry-adder-j6x8k 1/1 Running 0 47m 192.168.56.12 k8s2 Stderr: Fetching command output from pods [cilium-69tzw cilium-h474b] cmd: kubectl exec -n kube-system cilium-69tzw -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.21 (v1.21.14) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-7a2012f7) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 32/32 healthy Proxy Status: OK, ip 10.0.1.2, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 7114/65535 (10.86%), Flows/s: 64.14 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-30T17:10:18Z) Stderr: cmd: kubectl exec -n kube-system cilium-69tzw -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 204 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/control-plane k8s:node-role.kubernetes.io/master k8s:node.kubernetes.io/exclude-from-external-load-balancers k8s:status=lockdown reserved:host 454 Disabled Disabled 42159 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303301710k8sdatapathconfighostfirewallwithvxlan fd02::14c 10.0.1.164 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301710k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 826 Disabled Disabled 4 reserved:health fd02::1ca 10.0.1.76 ready 2910 Disabled Disabled 2838 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303301710k8sdatapathconfighostfirewallwithvxlan fd02::1df 10.0.1.195 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301710k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3686 Disabled Disabled 3752 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system fd02::103 10.0.1.10 ready k8s:io.cilium.k8s.policy.cluster=default 
k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns Stderr: cmd: kubectl exec -n kube-system cilium-h474b -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.21 (v1.21.14) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumEndpointSlice", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-7a2012f7) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 6/254 allocated from 10.0.0.0/24, IPv6: 6/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 38/38 healthy Proxy Status: OK, ip 10.0.0.94, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2644/65535 (4.03%), Flows/s: 22.29 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-30T17:10:16Z) Stderr: cmd: kubectl exec -n kube-system cilium-h474b -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 1605 Disabled Disabled 42159 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303301710k8sdatapathconfighostfirewallwithvxlan fd02::95 10.0.0.244 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301710k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 1647 Disabled Disabled 2838 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202303301710k8sdatapathconfighostfirewallwithvxlan fd02::65 10.0.0.29 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301710k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1697 Disabled Disabled 37948 k8s:app=prometheus fd02::e6 10.0.0.36 ready k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring 1804 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 1875 Disabled Disabled 34288 k8s:app=grafana fd02::7e 10.0.0.172 ready k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring 2078 Disabled Disabled 4 reserved:health fd02::63 10.0.0.208 ready Stderr: ===================== Exiting AfterFailed ===================== 17:11:08 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 17:11:08 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 17:11:08 STEP: Deleting deployment demo_hostfw.yaml 17:11:08 STEP: Deleting namespace 202303301710k8sdatapathconfighostfirewallwithvxlan 17:11:23 STEP: Running AfterEach for 
block EntireTestsuite [[ATTACHMENT|de53ea4d_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show. https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9//2516/artifact/de53ea4d_K8sDatapathConfig_Host_firewall_With_VXLAN.zip https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9//2516/artifact/test_results_Cilium-PR-K8s-1.21-kernel-4.9_2516_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9/2516/ If this is a duplicate of an existing flake, comment 'Duplicate of #\<issue-number\>' and close this issue.
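For triage, the failing condition in these reports is simply the CI harness finding a `level=error` line in the `io.cilium/app=operator` logs. A minimal sketch of running the same scan by hand, assuming `kubectl` is pointed at the affected test cluster and the operator pods carry the `io.cilium/app=operator` label quoted in the failure output:

```bash
# Rough, manual version of the log scan that trips this flake: dump the
# cilium-operator logs and surface any error/warning lines.
# Assumption: kubectl targets the affected cluster; the label selector is the
# one quoted in the failure output above.
kubectl -n kube-system logs -l io.cilium/app=operator --timestamps \
  | grep -E 'level=(error|warning)' \
  || echo "no error/warning lines in operator logs"
```

In this instance the matched line was `level=error msg="Interrupt received" subsys=hive`, which the scanner flags like any other `level=error` entry.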
maintainer-s-little-helper[bot] commented 1 year ago

PR #24607 hit this flake with 95.87% similarity:

Click to show.
### Test Name
```test-name K8sDatapathConfig Host firewall With VXLAN ```
### Failure Output
```failure-output FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: ```
### Stacktrace
Click to show. ```stack-trace /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415 Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-30T17:34:59.931244051Z level=error msg="Interrupt received" subsys=hive /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413 ```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-03-30T17:34:59.931244051Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 2 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Interrupt received Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 5 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Unable to restore endpoint, ignoring Cilium pods: [cilium-ld6kb cilium-m9997] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress coredns-bb76b858c-8t695 false false testclient-85wvr false false testclient-9r7kw false false testserver-hg78b false false testserver-jz7pd false false Cilium agent 'cilium-ld6kb': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0 Cilium agent 'cilium-m9997': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 33 Failed 0 ```
### Standard Error
Click to show. ```stack-error 17:31:53 STEP: Installing Cilium 17:31:56 STEP: Waiting for Cilium to become ready 17:34:01 STEP: Validating if Kubernetes DNS is deployed 17:34:01 STEP: Checking if deployment is ready 17:34:01 STEP: Checking if kube-dns service is plumbed correctly 17:34:01 STEP: Checking if pods have identity 17:34:01 STEP: Checking if DNS can resolve 17:34:05 STEP: Kubernetes DNS is up and operational 17:34:05 STEP: Validating Cilium Installation 17:34:05 STEP: Performing Cilium controllers preflight check 17:34:05 STEP: Performing Cilium status preflight check 17:34:05 STEP: Performing Cilium health check 17:34:05 STEP: Checking whether host EP regenerated 17:34:12 STEP: Performing Cilium service preflight check 17:34:12 STEP: Performing K8s service preflight check 17:34:18 STEP: Waiting for cilium-operator to be ready 17:34:19 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 17:34:19 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 17:34:19 STEP: Making sure all endpoints are in ready state 17:34:21 STEP: Creating namespace 202303301734k8sdatapathconfighostfirewallwithvxlan 17:34:21 STEP: Deploying demo_hostfw.yaml in namespace 202303301734k8sdatapathconfighostfirewallwithvxlan 17:34:22 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 17:34:22 STEP: WaitforNPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="") 17:34:31 STEP: WaitforNPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="") => 17:34:31 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 17:34:54 STEP: Checking host policies on egress to remote node 17:34:54 STEP: Checking host policies on ingress from remote node 17:34:54 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 17:34:54 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 17:34:54 STEP: Checking host policies on egress to local pod 17:34:54 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 17:34:54 STEP: Checking host policies on ingress from local pod 17:34:54 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 17:34:54 STEP: Checking host policies on ingress from remote pod 17:34:54 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 17:34:54 STEP: Checking host policies on egress to remote pod 17:34:54 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 17:34:54 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 17:34:54 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 17:34:54 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 17:34:54 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 17:34:54 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 17:34:54 STEP: 
WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 17:34:55 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 17:34:55 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 17:34:55 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 17:34:55 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 17:34:55 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 17:34:55 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 17:34:55 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 17:34:55 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 17:34:55 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 17:34:55 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 17:34:55 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 17:34:55 STEP: WaitforPods(namespace="202303301734k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 2023-03-30T17:35:00Z==== 17:35:00 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-30T17:34:59.931244051Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 17:35:01 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202303301734k8sdatapathconfighostfirewallwithvxlan testclient-85wvr 1/1 Running 0 43s 10.0.1.5 k8s1 202303301734k8sdatapathconfighostfirewallwithvxlan testclient-9r7kw 1/1 Running 0 43s 10.0.0.244 k8s2 202303301734k8sdatapathconfighostfirewallwithvxlan testclient-host-98tq5 1/1 Running 0 43s 192.168.56.11 k8s1 202303301734k8sdatapathconfighostfirewallwithvxlan testclient-host-qzhlq 1/1 Running 0 43s 192.168.56.12 k8s2 202303301734k8sdatapathconfighostfirewallwithvxlan testserver-hg78b 2/2 Running 0 43s 10.0.1.247 k8s1 202303301734k8sdatapathconfighostfirewallwithvxlan testserver-host-pcmmg 2/2 Running 0 43s 192.168.56.11 k8s1 202303301734k8sdatapathconfighostfirewallwithvxlan testserver-host-rg9p4 2/2 Running 0 43s 192.168.56.12 k8s2 202303301734k8sdatapathconfighostfirewallwithvxlan testserver-jz7pd 2/2 Running 0 43s 10.0.0.159 k8s2 cilium-monitoring grafana-7ddfc74b5b-qdbwq 0/1 Running 0 76m 10.0.0.246 k8s2 cilium-monitoring prometheus-669755c8c5-j4wsm 1/1 Running 0 76m 10.0.0.19 k8s2 kube-system cilium-ld6kb 1/1 Running 0 3m9s 192.168.56.12 k8s2 kube-system cilium-m9997 1/1 Running 0 3m9s 192.168.56.11 k8s1 kube-system cilium-operator-776c8f6f68-cxlfk 1/1 Running 0 3m9s 192.168.56.12 k8s2 kube-system cilium-operator-776c8f6f68-x4t4j 1/1 Running 0 3m9s 192.168.56.11 k8s1 kube-system coredns-bb76b858c-8t695 1/1 Running 0 
7m21s 10.0.1.145 k8s1 kube-system etcd-k8s1 1/1 Running 0 81m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 81m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 6 81m 192.168.56.11 k8s1 kube-system kube-proxy-6b5wg 1/1 Running 0 77m 192.168.56.11 k8s1 kube-system kube-proxy-lptbh 1/1 Running 0 77m 192.168.56.12 k8s2 kube-system kube-scheduler-k8s1 1/1 Running 7 81m 192.168.56.11 k8s1 kube-system log-gatherer-59bmp 1/1 Running 0 76m 192.168.56.12 k8s2 kube-system log-gatherer-c2rhd 1/1 Running 0 76m 192.168.56.11 k8s1 kube-system registry-adder-pmxzk 1/1 Running 0 77m 192.168.56.12 k8s2 kube-system registry-adder-xnjrs 1/1 Running 0 77m 192.168.56.11 k8s1 Stderr: Fetching command output from pods [cilium-ld6kb cilium-m9997] cmd: kubectl exec -n kube-system cilium-ld6kb -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.19 (v1.19.16) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-7a2012f7) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.0.211, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 1763/65535 (2.69%), Flows/s: 19.11 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-30T17:34:12Z) Stderr: cmd: kubectl exec -n kube-system cilium-ld6kb -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 565 Disabled Disabled 4 reserved:health fd02::be 10.0.0.80 ready 1204 Disabled Disabled 6836 k8s:io.cilium.k8s.policy.cluster=default fd02::ba 10.0.0.159 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301734k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1602 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 3873 Disabled Disabled 41308 k8s:io.cilium.k8s.policy.cluster=default fd02::34 10.0.0.244 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301734k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient Stderr: cmd: kubectl exec -n kube-system cilium-m9997 -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.19 (v1.19.16) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config 
file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-7a2012f7) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 33/33 healthy Proxy Status: OK, ip 10.0.1.174, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 3868/65535 (5.90%), Flows/s: 42.52 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-30T17:34:18Z) Stderr: cmd: kubectl exec -n kube-system cilium-m9997 -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 143 Disabled Disabled 41308 k8s:io.cilium.k8s.policy.cluster=default fd02::1cb 10.0.1.5 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301734k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 723 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 1191 Disabled Disabled 4 reserved:health fd02::10d 10.0.1.116 ready 1794 Disabled Disabled 2012 k8s:io.cilium.k8s.policy.cluster=default fd02::130 10.0.1.145 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 1986 Disabled Disabled 6836 k8s:io.cilium.k8s.policy.cluster=default fd02::119 10.0.1.247 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301734k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer Stderr: ===================== Exiting AfterFailed ===================== 17:35:13 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 17:35:13 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 17:35:13 STEP: Deleting deployment demo_hostfw.yaml 17:35:13 STEP: Deleting namespace 202303301734k8sdatapathconfighostfirewallwithvxlan 17:35:29 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|75acba25_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show. https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9//2747/artifact/75acba25_K8sDatapathConfig_Host_firewall_With_VXLAN.zip https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9//2747/artifact/test_results_Cilium-PR-K8s-1.19-kernel-4.9_2747_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9/2747/ If this is a duplicate of an existing flake, comment 'Duplicate of #\<issue-number\>' and close this issue.
maintainer-s-little-helper[bot] commented 1 year ago

PR #24547 hit this flake with 97.84% similarity:

Click to show.
### Test Name
```test-name K8sDatapathConfig Host firewall With VXLAN ```
### Failure Output
```failure-output FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: ```
### Stacktrace
Click to show. ```stack-trace /home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415 Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-30T18:33:04.449250483Z level=error msg="Failed to release lock: Put \"https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock?timeout=5s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" subsys=klog /home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413 ```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-03-30T18:33:04.449250483Z level=error msg=\"Failed to release lock: Put \\\"https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock?timeout=5s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\" subsys=klog" in logs 1 times Number of "context deadline exceeded" in logs: 2 Number of "level=error" in logs: 3 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \ Failed to release lock: Put \ Cilium pods: [cilium-fs9t8 cilium-n77rm] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testclient-nllc2 false false testclient-zdq6f false false testserver-p6gwv false false testserver-stj5x false false coredns-758664cbbf-dzqnr false false Cilium agent 'cilium-fs9t8': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0 Cilium agent 'cilium-n77rm': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 32 Failed 0 ```
### Standard Error
Click to show. ```stack-error 18:30:13 STEP: Installing Cilium 18:30:15 STEP: Waiting for Cilium to become ready 18:31:59 STEP: Validating if Kubernetes DNS is deployed 18:31:59 STEP: Checking if deployment is ready 18:31:59 STEP: Checking if kube-dns service is plumbed correctly 18:31:59 STEP: Checking if pods have identity 18:31:59 STEP: Checking if DNS can resolve 18:32:04 STEP: Kubernetes DNS is not ready: 5s timeout expired 18:32:04 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns) 18:32:04 STEP: Waiting for Kubernetes DNS to become operational 18:32:04 STEP: Checking if deployment is ready 18:32:04 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 18:32:05 STEP: Checking if deployment is ready 18:32:05 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 18:32:06 STEP: Checking if deployment is ready 18:32:06 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 18:32:06 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-fs9t8: unable to find service backend 10.0.1.98:53 in datapath of cilium pod cilium-fs9t8 18:32:07 STEP: Checking if deployment is ready 18:32:07 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 18:32:08 STEP: Checking if deployment is ready 18:32:08 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 18:32:09 STEP: Checking if deployment is ready 18:32:09 STEP: Checking if kube-dns service is plumbed correctly 18:32:09 STEP: Checking if DNS can resolve 18:32:09 STEP: Checking if pods have identity 18:32:13 STEP: Kubernetes DNS is not ready yet: CiliumEndpoint does not exist 18:32:13 STEP: Checking if deployment is ready 18:32:13 STEP: Checking if kube-dns service is plumbed correctly 18:32:13 STEP: Checking if pods have identity 18:32:13 STEP: Checking if DNS can resolve 18:32:17 STEP: Validating Cilium Installation 18:32:17 STEP: Performing Cilium controllers preflight check 18:32:17 STEP: Performing Cilium health check 18:32:17 STEP: Checking whether host EP regenerated 18:32:17 STEP: Performing Cilium status preflight check 18:32:25 STEP: Performing Cilium service preflight check 18:32:25 STEP: Performing K8s service preflight check 18:32:30 STEP: Waiting for cilium-operator to be ready 18:32:31 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 18:32:31 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 18:32:31 STEP: Making sure all endpoints are in ready state 18:32:33 STEP: Creating namespace 202303301832k8sdatapathconfighostfirewallwithvxlan 18:32:33 STEP: Deploying demo_hostfw.yaml in namespace 202303301832k8sdatapathconfighostfirewallwithvxlan 18:32:34 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 18:32:34 STEP: WaitforNPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="") 18:32:45 STEP: WaitforNPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="") => 18:32:45 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 18:33:00 STEP: Checking host policies on egress to remote node 18:33:00 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 18:33:00 STEP: Checking host policies on egress to local pod 18:33:00 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 18:33:00 
STEP: Checking host policies on ingress from local pod 18:33:00 STEP: Checking host policies on ingress from remote pod 18:33:00 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 18:33:00 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 18:33:00 STEP: Checking host policies on egress to remote pod 18:33:00 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 18:33:00 STEP: Checking host policies on ingress from remote node 18:33:00 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 18:33:00 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 18:33:00 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 18:33:00 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 18:33:00 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 18:33:00 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 18:33:00 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 18:33:01 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 18:33:01 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 18:33:01 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 18:33:01 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 18:33:01 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 18:33:01 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 18:33:01 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 18:33:01 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 18:33:01 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 18:33:01 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 18:33:01 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 18:33:01 STEP: WaitforPods(namespace="202303301832k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 2023-03-30T18:33:06Z==== 18:33:06 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-03-30T18:33:04.449250483Z level=error msg="Failed to release lock: Put \"https://10.96.0.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock?timeout=5s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" subsys=klog 
===================== TEST FAILED ===================== 18:33:09 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202303301832k8sdatapathconfighostfirewallwithvxlan testclient-host-4p4tp 1/1 Running 0 40s 192.168.56.12 k8s2 202303301832k8sdatapathconfighostfirewallwithvxlan testclient-host-ljxr6 1/1 Running 0 40s 192.168.56.11 k8s1 202303301832k8sdatapathconfighostfirewallwithvxlan testclient-nllc2 1/1 Running 0 40s 10.0.1.241 k8s2 202303301832k8sdatapathconfighostfirewallwithvxlan testclient-zdq6f 1/1 Running 0 40s 10.0.0.40 k8s1 202303301832k8sdatapathconfighostfirewallwithvxlan testserver-host-bsptn 2/2 Running 0 40s 192.168.56.12 k8s2 202303301832k8sdatapathconfighostfirewallwithvxlan testserver-host-cpfwx 2/2 Running 0 40s 192.168.56.11 k8s1 202303301832k8sdatapathconfighostfirewallwithvxlan testserver-p6gwv 2/2 Running 0 40s 10.0.0.138 k8s1 202303301832k8sdatapathconfighostfirewallwithvxlan testserver-stj5x 2/2 Running 0 40s 10.0.1.162 k8s2 cilium-monitoring grafana-585bb89877-wfqfw 0/1 Running 0 57m 10.0.1.97 k8s2 cilium-monitoring prometheus-8885c5888-cnk86 1/1 Running 0 57m 10.0.1.54 k8s2 kube-system cilium-fs9t8 1/1 Running 0 2m59s 192.168.56.11 k8s1 kube-system cilium-n77rm 1/1 Running 0 2m59s 192.168.56.12 k8s2 kube-system cilium-operator-5bdb4b9bfb-n6vxv 1/1 Running 0 2m59s 192.168.56.12 k8s2 kube-system cilium-operator-5bdb4b9bfb-sl2kk 1/1 Running 0 2m59s 192.168.56.11 k8s1 kube-system coredns-758664cbbf-dzqnr 1/1 Running 0 70s 10.0.1.179 k8s2 kube-system etcd-k8s1 1/1 Running 0 60m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 60m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 3 60m 192.168.56.11 k8s1 kube-system kube-proxy-hkjhj 1/1 Running 0 58m 192.168.56.12 k8s2 kube-system kube-proxy-qq77t 1/1 Running 0 61m 192.168.56.11 k8s1 kube-system kube-scheduler-k8s1 1/1 Running 3 60m 192.168.56.11 k8s1 kube-system log-gatherer-fdjnz 1/1 Running 0 57m 192.168.56.12 k8s2 kube-system log-gatherer-frx5c 1/1 Running 0 57m 192.168.56.11 k8s1 kube-system registry-adder-6rpmb 1/1 Running 0 58m 192.168.56.11 k8s1 kube-system registry-adder-bgqsn 1/1 Running 0 58m 192.168.56.12 k8s2 Stderr: Fetching command output from pods [cilium-fs9t8 cilium-n77rm] cmd: kubectl exec -n kube-system cilium-fs9t8 -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-8d89c3f1) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.0.175, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 3696/65535 
(5.64%), Flows/s: 45.67 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-30T18:32:59Z) Stderr: cmd: kubectl exec -n kube-system cilium-fs9t8 -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 41 Disabled Disabled 2867 k8s:io.cilium.k8s.policy.cluster=default fd02::5e 10.0.0.40 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301832k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 1970 Disabled Disabled 6087 k8s:io.cilium.k8s.policy.cluster=default fd02::c8 10.0.0.138 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301832k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3225 Disabled Disabled 4 reserved:health fd02::86 10.0.0.67 ready 3488 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host Stderr: cmd: kubectl exec -n kube-system cilium-n77rm -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-8d89c3f1) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 32/32 healthy Proxy Status: OK, ip 10.0.1.193, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2315/65535 (3.53%), Flows/s: 17.20 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-03-30T18:32:30Z) Stderr: cmd: kubectl exec -n kube-system cilium-n77rm -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 639 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 1047 Disabled Disabled 6087 k8s:io.cilium.k8s.policy.cluster=default fd02::1d9 10.0.1.162 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301832k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3257 Disabled Disabled 4 reserved:health fd02::1fb 10.0.1.199 ready 3559 Disabled Disabled 12501 k8s:io.cilium.k8s.policy.cluster=default fd02::18a 10.0.1.179 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 3815 Disabled Disabled 2867 k8s:io.cilium.k8s.policy.cluster=default fd02::1b1 10.0.1.241 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202303301832k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient Stderr: ===================== Exiting AfterFailed ===================== 18:33:22 
STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig Host firewall 18:33:22 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 18:33:22 STEP: Deleting deployment demo_hostfw.yaml 18:33:22 STEP: Deleting namespace 202303301832k8sdatapathconfighostfirewallwithvxlan 18:33:38 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|f848d67c_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show. https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//4210/artifact/f848d67c_K8sDatapathConfig_Host_firewall_With_VXLAN.zip https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//4210/artifact/test_results_Cilium-PR-K8s-1.16-kernel-4.9_4210_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9/4210/ If this is a duplicate of an existing flake, comment 'Duplicate of #\<issue-number\>' and close this issue.
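Unlike the "Interrupt received" instances above, the matched error in this report is the operator failing to release its leader-election lock (`cilium-operator-resource-lock`) because the apiserver request timed out. As a rough local follow-up (not part of the CI job, and assuming a live cluster), the Lease object and leader-election events can be inspected directly:

```bash
# Inspect the leader-election Lease named in the matched error; holderIdentity
# and renewTime show which operator replica held the lock and when it last renewed.
kubectl -n kube-system get lease cilium-operator-resource-lock -o yaml

# Leader-election transitions are also recorded as events in kube-system.
kubectl -n kube-system get events --field-selector reason=LeaderElection
```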
maintainer-s-little-helper[bot] commented 1 year ago

PR #24547 hit this flake with 95.87% similarity:

Click to show.
### Test Name
```test-name K8sDatapathConfig Host firewall With VXLAN ```
### Failure Output
```failure-output FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: ```
### Stacktrace
Click to show. ```stack-trace /home/jenkins/workspace/Cilium-PR-K8s-1.18-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415 Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-04-06T12:49:45.090950790Z level=error msg="Interrupt received" subsys=hive /home/jenkins/workspace/Cilium-PR-K8s-1.18-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413 ```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-04-06T12:49:45.090950790Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 1 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Interrupt received Number of "context deadline exceeded" in logs: 4 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 5 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Unable to restore endpoint, ignoring Cilium pods: [cilium-fx57f cilium-prhsx] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testclient-n8dwj false false testclient-sfz6v false false testserver-lp5gc false false testserver-mwcdp false false grafana-677f4bb779-xln8k false false prometheus-579ff57bbb-np8ck false false coredns-66585574f-vrx95 false false Cilium agent 'cilium-fx57f': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 27 Failed 0 Cilium agent 'cilium-prhsx': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 43 Failed 0 ```
### Standard Error
Click to show. ```stack-error 12:46:57 STEP: Installing Cilium 12:46:59 STEP: Waiting for Cilium to become ready 12:48:10 STEP: Validating if Kubernetes DNS is deployed 12:48:10 STEP: Checking if deployment is ready 12:48:10 STEP: Checking if kube-dns service is plumbed correctly 12:48:10 STEP: Checking if pods have identity 12:48:10 STEP: Checking if DNS can resolve 12:48:14 STEP: Kubernetes DNS is up and operational 12:48:14 STEP: Validating Cilium Installation 12:48:14 STEP: Performing Cilium controllers preflight check 12:48:14 STEP: Performing Cilium health check 12:48:14 STEP: Checking whether host EP regenerated 12:48:14 STEP: Performing Cilium status preflight check 12:48:21 STEP: Performing Cilium service preflight check 12:48:21 STEP: Performing K8s service preflight check 12:48:22 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-prhsx': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 12:48:22 STEP: Performing Cilium controllers preflight check 12:48:22 STEP: Performing Cilium status preflight check 12:48:22 STEP: Performing Cilium health check 12:48:22 STEP: Checking whether host EP regenerated 12:48:30 STEP: Performing Cilium service preflight check 12:48:30 STEP: Performing K8s service preflight check 12:48:31 STEP: Performing Cilium controllers preflight check 12:48:31 STEP: Performing Cilium health check 12:48:31 STEP: Performing Cilium status preflight check 12:48:31 STEP: Checking whether host EP regenerated 12:48:38 STEP: Performing Cilium service preflight check 12:48:38 STEP: Performing K8s service preflight check 12:48:39 STEP: Performing Cilium status preflight check 12:48:39 STEP: Performing Cilium health check 12:48:39 STEP: Performing Cilium controllers preflight check 12:48:39 STEP: Checking whether host EP regenerated 12:48:46 STEP: Performing Cilium service preflight check 12:48:46 STEP: Performing K8s service preflight check 12:48:47 STEP: Performing Cilium controllers preflight check 12:48:47 STEP: Performing Cilium health check 12:48:47 STEP: Checking whether host EP regenerated 12:48:47 STEP: Performing Cilium status preflight check 12:48:55 STEP: Performing Cilium service preflight check 12:48:55 STEP: Performing K8s service preflight check 12:49:01 STEP: Waiting for cilium-operator to be ready 12:49:01 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 12:49:01 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 12:49:01 STEP: Making sure all endpoints are in ready state 12:49:04 STEP: Creating namespace 202304061249k8sdatapathconfighostfirewallwithvxlan 12:49:04 STEP: Deploying demo_hostfw.yaml in namespace 202304061249k8sdatapathconfighostfirewallwithvxlan 12:49:04 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 12:49:04 STEP: WaitforNPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="") 12:49:14 STEP: WaitforNPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="") => 12:49:14 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.18-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 12:49:32 STEP: Checking host policies on egress to remote node 12:49:32 STEP: Checking host policies on ingress from remote pod 12:49:32 
STEP: Checking host policies on ingress from remote node 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 12:49:32 STEP: Checking host policies on ingress from local pod 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 12:49:32 STEP: Checking host policies on egress to local pod 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:49:32 STEP: Checking host policies on egress to remote pod 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:49:32 STEP: WaitforPods(namespace="202304061249k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => === Test Finished at 2023-04-06T12:49:53Z==== 12:49:53 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-04-06T12:49:45.090950790Z level=error msg="Interrupt 
received" subsys=hive ===================== TEST FAILED ===================== 12:49:54 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202304061249k8sdatapathconfighostfirewallwithvxlan testclient-host-cp8kq 1/1 Running 0 54s 192.168.56.11 k8s1 202304061249k8sdatapathconfighostfirewallwithvxlan testclient-host-z4jmm 1/1 Running 0 54s 192.168.56.12 k8s2 202304061249k8sdatapathconfighostfirewallwithvxlan testclient-n8dwj 1/1 Running 0 54s 10.0.1.55 k8s2 202304061249k8sdatapathconfighostfirewallwithvxlan testclient-sfz6v 1/1 Running 0 54s 10.0.0.231 k8s1 202304061249k8sdatapathconfighostfirewallwithvxlan testserver-host-2b5tw 2/2 Running 0 54s 192.168.56.12 k8s2 202304061249k8sdatapathconfighostfirewallwithvxlan testserver-host-cb5k6 2/2 Running 0 54s 192.168.56.11 k8s1 202304061249k8sdatapathconfighostfirewallwithvxlan testserver-lp5gc 2/2 Running 0 54s 10.0.1.233 k8s2 202304061249k8sdatapathconfighostfirewallwithvxlan testserver-mwcdp 2/2 Running 0 54s 10.0.0.189 k8s1 cilium-monitoring grafana-677f4bb779-xln8k 1/1 Running 0 48m 10.0.1.97 k8s2 cilium-monitoring prometheus-579ff57bbb-np8ck 1/1 Running 0 48m 10.0.1.235 k8s2 kube-system cilium-fx57f 1/1 Running 0 2m59s 192.168.56.11 k8s1 kube-system cilium-operator-67fc8cfc85-7f2lf 1/1 Running 0 2m59s 192.168.56.12 k8s2 kube-system cilium-operator-67fc8cfc85-zbwq8 1/1 Running 0 2m59s 192.168.56.11 k8s1 kube-system cilium-prhsx 1/1 Running 0 2m59s 192.168.56.12 k8s2 kube-system coredns-66585574f-vrx95 1/1 Running 0 47m 10.0.1.28 k8s2 kube-system etcd-k8s1 1/1 Running 0 52m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 52m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 2 52m 192.168.56.11 k8s1 kube-system kube-proxy-bnbbn 1/1 Running 0 49m 192.168.56.12 k8s2 kube-system kube-proxy-mjfvt 1/1 Running 0 52m 192.168.56.11 k8s1 kube-system kube-scheduler-k8s1 1/1 Running 3 52m 192.168.56.11 k8s1 kube-system log-gatherer-5zdtl 1/1 Running 0 48m 192.168.56.12 k8s2 kube-system log-gatherer-vkmqn 1/1 Running 0 48m 192.168.56.11 k8s1 kube-system registry-adder-2ppjn 1/1 Running 0 49m 192.168.56.12 k8s2 kube-system registry-adder-sf4dd 1/1 Running 0 49m 192.168.56.11 k8s1 Stderr: Fetching command output from pods [cilium-fx57f cilium-prhsx] cmd: kubectl exec -n kube-system cilium-fx57f -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.18 (v1.18.20) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-ae078fdf) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 27/27 healthy Proxy Status: OK, ip 10.0.0.73, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok 
Current/Max Flows: 6176/65535 (9.42%), Flows/s: 48.92 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-04-06T12:49:58Z) Stderr: cmd: kubectl exec -n kube-system cilium-fx57f -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 144 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 316 Disabled Disabled 4 reserved:health fd02::ae 10.0.0.65 ready 1943 Disabled Disabled 34393 k8s:io.cilium.k8s.policy.cluster=default fd02::4e 10.0.0.189 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304061249k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3410 Disabled Disabled 12579 k8s:io.cilium.k8s.policy.cluster=default fd02::13 10.0.0.231 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304061249k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient Stderr: cmd: kubectl exec -n kube-system cilium-prhsx -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.18 (v1.18.20) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-ae078fdf) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 7/254 allocated from 10.0.1.0/24, IPv6: 7/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 43/43 healthy Proxy Status: OK, ip 10.0.1.205, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 3189/65535 (4.87%), Flows/s: 23.05 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-04-06T12:49:48Z) Stderr: cmd: kubectl exec -n kube-system cilium-prhsx -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 1041 Disabled Disabled 20345 k8s:app=prometheus fd02::107 10.0.1.235 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring 1492 Disabled Disabled 4 reserved:health fd02::1a8 10.0.1.98 ready 1646 Disabled Disabled 34393 k8s:io.cilium.k8s.policy.cluster=default fd02::135 10.0.1.233 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304061249k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1651 Disabled Disabled 12579 k8s:io.cilium.k8s.policy.cluster=default fd02::179 10.0.1.55 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304061249k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 1792 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 2461 Disabled Disabled 
26033 k8s:io.cilium.k8s.policy.cluster=default fd02::128 10.0.1.28 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 3690 Disabled Disabled 65076 k8s:app=grafana fd02::10f 10.0.1.97 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring Stderr: ===================== Exiting AfterFailed ===================== 12:50:15 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 12:50:15 STEP: Deleting deployment demo_hostfw.yaml 12:50:15 STEP: Deleting namespace 202304061249k8sdatapathconfighostfirewallwithvxlan 12:50:31 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|867ae5f0_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9//2617/artifact/867ae5f0_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9//2617/artifact/test_results_Cilium-PR-K8s-1.18-kernel-4.9_2617_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9/2617/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
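Triage note: the host-firewall checks themselves pass; the failure comes from the post-test log scan, which flags any `level=error` line in the cilium-operator logs, here the `Interrupt received` message from `subsys=hive`. Below is a minimal, hypothetical sketch of that kind of scan, written only to illustrate the mechanism; the pattern list, messages, and exit codes are assumptions and this is not the actual Ginkgo harness code.

```go
// log-scan sketch: read log lines from stdin and flag any line that matches
// a list of "must be investigated" patterns, similar in spirit to what the
// CI harness reports above. Illustrative only.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Patterns treated here as "must be investigated". The real test suite
// maintains its own, longer list; these two are illustrative.
var badPatterns = []string{
	"level=error",
	"Cilium API handler panicked",
}

func main() {
	var matches []string
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		line := scanner.Text()
		for _, p := range badPatterns {
			if strings.Contains(line, p) {
				matches = append(matches, line)
				break
			}
		}
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "reading logs:", err)
		os.Exit(2)
	}
	if len(matches) == 0 {
		fmt.Println("No errors/warnings found in logs")
		return
	}
	fmt.Printf("Found %d logs matching list of errors that must be investigated:\n", len(matches))
	for _, m := range matches {
		fmt.Println(m)
	}
	os.Exit(1)
}
```

Piping the operator logs (e.g. `kubectl -n kube-system logs -l io.cilium/app=operator`) through a scanner like this reproduces the same kind of "Found N ... logs matching list of errors" verdict seen in the failure output above.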
maintainer-s-little-helper[bot] commented 1 year ago

PR #24547 hit this flake with 97.53% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-04-06T13:12:36.971013541Z level=error msg="Interrupt received" subsys=hive
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-04-06T13:12:36.971013541Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 1 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Interrupt received Number of "context deadline exceeded" in logs: 4 Number of "level=error" in logs: 0 ⚠️ Number of "level=warning" in logs: 7 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: Unable to restore endpoint, ignoring Key allocation attempt failed Cilium pods: [cilium-w54s2 cilium-wl6g2] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testserver-hld4s false false coredns-758664cbbf-pnzwr false false testclient-5wcfl false false testclient-7zskz false false testserver-gj9pq false false Cilium agent 'cilium-w54s2': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 32 Failed 0 Cilium agent 'cilium-wl6g2': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 27 Failed 0 ```
### Standard Error
Click to show. ```stack-error 13:08:54 STEP: Installing Cilium 13:08:56 STEP: Waiting for Cilium to become ready 13:11:06 STEP: Validating if Kubernetes DNS is deployed 13:11:06 STEP: Checking if deployment is ready 13:11:06 STEP: Checking if kube-dns service is plumbed correctly 13:11:06 STEP: Checking if pods have identity 13:11:06 STEP: Checking if DNS can resolve 13:11:10 STEP: Kubernetes DNS is up and operational 13:11:10 STEP: Validating Cilium Installation 13:11:10 STEP: Performing Cilium controllers preflight check 13:11:10 STEP: Performing Cilium status preflight check 13:11:10 STEP: Performing Cilium health check 13:11:10 STEP: Checking whether host EP regenerated 13:11:17 STEP: Performing Cilium service preflight check 13:11:17 STEP: Performing K8s service preflight check 13:11:17 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-w54s2': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 13:11:17 STEP: Performing Cilium controllers preflight check 13:11:17 STEP: Performing Cilium status preflight check 13:11:17 STEP: Performing Cilium health check 13:11:17 STEP: Checking whether host EP regenerated 13:11:25 STEP: Performing Cilium service preflight check 13:11:25 STEP: Performing K8s service preflight check 13:11:25 STEP: Performing Cilium controllers preflight check 13:11:25 STEP: Performing Cilium status preflight check 13:11:25 STEP: Performing Cilium health check 13:11:25 STEP: Checking whether host EP regenerated 13:11:33 STEP: Performing Cilium service preflight check 13:11:33 STEP: Performing K8s service preflight check 13:11:33 STEP: Performing Cilium controllers preflight check 13:11:33 STEP: Performing Cilium health check 13:11:33 STEP: Checking whether host EP regenerated 13:11:33 STEP: Performing Cilium status preflight check 13:11:40 STEP: Performing Cilium service preflight check 13:11:40 STEP: Performing K8s service preflight check 13:11:46 STEP: Waiting for cilium-operator to be ready 13:11:46 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 13:11:46 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 13:11:46 STEP: Making sure all endpoints are in ready state 13:11:49 STEP: Creating namespace 202304061311k8sdatapathconfighostfirewallwithvxlan 13:11:49 STEP: Deploying demo_hostfw.yaml in namespace 202304061311k8sdatapathconfighostfirewallwithvxlan 13:11:50 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 13:11:50 STEP: WaitforNPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="") 13:12:02 STEP: WaitforNPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="") => 13:12:02 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 13:12:18 STEP: Checking host policies on egress to remote node 13:12:18 STEP: Checking host policies on egress to local pod 13:12:18 STEP: Checking host policies on ingress from local pod 13:12:18 STEP: Checking host policies on ingress from remote node 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 13:12:18 STEP: 
WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 13:12:18 STEP: Checking host policies on ingress from remote pod 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 13:12:18 STEP: Checking host policies on egress to remote pod 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 13:12:18 STEP: WaitforPods(namespace="202304061311k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => === Test Finished at 2023-04-06T13:12:40Z==== 13:12:40 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-04-06T13:12:36.971013541Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 13:12:40 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS 
RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202304061311k8sdatapathconfighostfirewallwithvxlan testclient-5wcfl 1/1 Running 0 56s 10.0.1.5 k8s1 202304061311k8sdatapathconfighostfirewallwithvxlan testclient-7zskz 1/1 Running 0 56s 10.0.0.243 k8s2 202304061311k8sdatapathconfighostfirewallwithvxlan testclient-host-49k85 1/1 Running 0 55s 192.168.56.11 k8s1 202304061311k8sdatapathconfighostfirewallwithvxlan testclient-host-b9wmb 1/1 Running 0 55s 192.168.56.12 k8s2 202304061311k8sdatapathconfighostfirewallwithvxlan testserver-gj9pq 2/2 Running 0 56s 10.0.0.32 k8s2 202304061311k8sdatapathconfighostfirewallwithvxlan testserver-hld4s 2/2 Running 0 56s 10.0.1.75 k8s1 202304061311k8sdatapathconfighostfirewallwithvxlan testserver-host-dqsh2 2/2 Running 0 56s 192.168.56.12 k8s2 202304061311k8sdatapathconfighostfirewallwithvxlan testserver-host-vgqcr 2/2 Running 0 56s 192.168.56.11 k8s1 cilium-monitoring grafana-585bb89877-nfkr4 0/1 Running 0 60m 10.0.0.179 k8s2 cilium-monitoring prometheus-8885c5888-mp7sl 1/1 Running 0 60m 10.0.0.251 k8s2 kube-system cilium-operator-6fb68dc8f5-nkp7n 1/1 Running 1 3m49s 192.168.56.12 k8s2 kube-system cilium-operator-6fb68dc8f5-pmv86 1/1 Running 0 3m49s 192.168.56.11 k8s1 kube-system cilium-w54s2 1/1 Running 0 3m49s 192.168.56.12 k8s2 kube-system cilium-wl6g2 1/1 Running 0 3m49s 192.168.56.11 k8s1 kube-system coredns-758664cbbf-pnzwr 1/1 Running 0 18m 10.0.0.135 k8s2 kube-system etcd-k8s1 1/1 Running 0 63m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 63m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 3 63m 192.168.56.11 k8s1 kube-system kube-proxy-8rhn5 1/1 Running 0 64m 192.168.56.11 k8s1 kube-system kube-proxy-vdc2m 1/1 Running 0 61m 192.168.56.12 k8s2 kube-system kube-scheduler-k8s1 1/1 Running 4 63m 192.168.56.11 k8s1 kube-system log-gatherer-sftnp 1/1 Running 0 60m 192.168.56.11 k8s1 kube-system log-gatherer-z8625 1/1 Running 0 60m 192.168.56.12 k8s2 kube-system registry-adder-dzwnk 1/1 Running 0 61m 192.168.56.12 k8s2 kube-system registry-adder-m2fmq 1/1 Running 0 61m 192.168.56.11 k8s1 Stderr: Fetching command output from pods [cilium-w54s2 cilium-wl6g2] cmd: kubectl exec -n kube-system cilium-w54s2 -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-ae078fdf) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 32/32 healthy Proxy Status: OK, ip 10.0.0.38, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2646/65535 (4.04%), Flows/s: 19.83 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-04-06T13:11:40Z) Stderr: cmd: kubectl exec -n kube-system cilium-w54s2 -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT 
POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 54 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 75 Disabled Disabled 4 reserved:health fd02::41 10.0.0.15 ready 1287 Disabled Disabled 11600 k8s:io.cilium.k8s.policy.cluster=default fd02::d 10.0.0.32 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304061311k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3038 Disabled Disabled 38233 k8s:io.cilium.k8s.policy.cluster=default fd02::96 10.0.0.243 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304061311k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 4013 Disabled Disabled 17139 k8s:io.cilium.k8s.policy.cluster=default fd02::f7 10.0.0.135 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns Stderr: cmd: kubectl exec -n kube-system cilium-wl6g2 -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-ae078fdf) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 27/27 healthy Proxy Status: OK, ip 10.0.1.157, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 5233/65535 (7.99%), Flows/s: 41.45 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-04-06T13:11:46Z) Stderr: cmd: kubectl exec -n kube-system cilium-wl6g2 -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 863 Disabled Disabled 4 reserved:health fd02::1f5 10.0.1.11 ready 1219 Disabled Disabled 38233 k8s:io.cilium.k8s.policy.cluster=default fd02::17b 10.0.1.5 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304061311k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 1849 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 2497 Disabled Disabled 11600 k8s:io.cilium.k8s.policy.cluster=default fd02::1c7 10.0.1.75 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304061311k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer Stderr: ===================== Exiting AfterFailed ===================== 13:13:33 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 13:13:33 STEP: Deleting deployment demo_hostfw.yaml 13:13:33 STEP: Deleting namespace 202304061311k8sdatapathconfighostfirewallwithvxlan 13:13:49 STEP: Running AfterEach for 
block EntireTestsuite [[ATTACHMENT|ee3b89ea_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//4226/artifact/316bae65_K8sAgentPolicyTest_Multi-node_policy_test_with_L7_policy_using_connectivity-check_to_check_datapath.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//4226/artifact/ee3b89ea_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//4226/artifact/test_results_Cilium-PR-K8s-1.16-kernel-4.9_4226_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9/4226/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
maintainer-s-little-helper[bot] commented 1 year ago

PR #24789 hit this flake with 95.87% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.18-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-04-07T12:43:41.167920939Z level=error msg="Interrupt received" subsys=hive
/home/jenkins/workspace/Cilium-PR-K8s-1.18-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-04-07T12:43:41.167920939Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 1 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Interrupt received Number of "context deadline exceeded" in logs: 4 Number of "level=error" in logs: 0 ⚠️ Number of "level=warning" in logs: 6 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: Unable to restore endpoint, ignoring Key allocation attempt failed Cilium pods: [cilium-5l5dc cilium-b22lz] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress coredns-66585574f-5ndfh false false testclient-qvxr6 false false testclient-w75lq false false testserver-mqj5r false false testserver-sffm9 false false Cilium agent 'cilium-5l5dc': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 32 Failed 0 Cilium agent 'cilium-b22lz': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 27 Failed 0 ```
### Standard Error
Click to show. ```stack-error 12:41:12 STEP: Installing Cilium 12:41:14 STEP: Waiting for Cilium to become ready 12:41:38 STEP: Validating if Kubernetes DNS is deployed 12:41:38 STEP: Checking if deployment is ready 12:41:38 STEP: Checking if kube-dns service is plumbed correctly 12:41:38 STEP: Checking if pods have identity 12:41:38 STEP: Checking if DNS can resolve 12:41:44 STEP: Kubernetes DNS is not ready: 5s timeout expired 12:41:44 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns) 12:41:44 STEP: Waiting for Kubernetes DNS to become operational 12:41:44 STEP: Checking if deployment is ready 12:41:44 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:41:45 STEP: Checking if deployment is ready 12:41:45 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:41:46 STEP: Checking if deployment is ready 12:41:46 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:41:47 STEP: Checking if deployment is ready 12:41:47 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:41:48 STEP: Checking if deployment is ready 12:41:48 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-b22lz: unable to find service backend 10.0.1.250:53 in datapath of cilium pod cilium-b22lz 12:41:48 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:41:49 STEP: Checking if deployment is ready 12:41:49 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:41:50 STEP: Checking if deployment is ready 12:41:50 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:41:51 STEP: Checking if deployment is ready 12:41:51 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:41:52 STEP: Checking if deployment is ready 12:41:52 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:41:53 STEP: Checking if deployment is ready 12:41:53 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:41:54 STEP: Checking if deployment is ready 12:41:54 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 12:41:55 STEP: Checking if deployment is ready 12:41:55 STEP: Checking if kube-dns service is plumbed correctly 12:41:55 STEP: Checking if pods have identity 12:41:55 STEP: Checking if DNS can resolve 12:41:59 STEP: Validating Cilium Installation 12:41:59 STEP: Performing Cilium controllers preflight check 12:41:59 STEP: Performing Cilium status preflight check 12:41:59 STEP: Checking whether host EP regenerated 12:41:59 STEP: Performing Cilium health check 12:42:06 STEP: Performing Cilium service preflight check 12:42:06 STEP: Performing K8s service preflight check 12:42:06 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-5l5dc': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 12:42:06 STEP: Performing Cilium controllers preflight check 12:42:06 STEP: Checking whether host EP regenerated 12:42:06 STEP: Performing Cilium status preflight check 12:42:06 STEP: Performing Cilium health check 12:42:14 STEP: Performing Cilium service preflight check 12:42:14 STEP: Performing K8s service preflight check 12:42:14 STEP: Performing Cilium status preflight check 12:42:14 STEP: Performing Cilium health check 
12:42:14 STEP: Checking whether host EP regenerated 12:42:14 STEP: Performing Cilium controllers preflight check 12:42:21 STEP: Performing Cilium service preflight check 12:42:21 STEP: Performing K8s service preflight check 12:42:21 STEP: Performing Cilium controllers preflight check 12:42:21 STEP: Performing Cilium status preflight check 12:42:21 STEP: Performing Cilium health check 12:42:21 STEP: Checking whether host EP regenerated 12:42:29 STEP: Performing Cilium service preflight check 12:42:29 STEP: Performing K8s service preflight check 12:42:29 STEP: Performing Cilium controllers preflight check 12:42:29 STEP: Performing Cilium health check 12:42:29 STEP: Performing Cilium status preflight check 12:42:29 STEP: Checking whether host EP regenerated 12:42:36 STEP: Performing Cilium service preflight check 12:42:36 STEP: Performing K8s service preflight check 12:42:36 STEP: Performing Cilium controllers preflight check 12:42:36 STEP: Performing Cilium health check 12:42:36 STEP: Checking whether host EP regenerated 12:42:36 STEP: Performing Cilium status preflight check 12:42:44 STEP: Performing Cilium service preflight check 12:42:44 STEP: Performing K8s service preflight check 12:42:50 STEP: Waiting for cilium-operator to be ready 12:42:50 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 12:42:50 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 12:42:50 STEP: Making sure all endpoints are in ready state 12:42:53 STEP: Creating namespace 202304071242k8sdatapathconfighostfirewallwithvxlan 12:42:53 STEP: Deploying demo_hostfw.yaml in namespace 202304071242k8sdatapathconfighostfirewallwithvxlan 12:42:53 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 12:42:53 STEP: WaitforNPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="") 12:43:05 STEP: WaitforNPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="") => 12:43:05 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.18-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 12:43:23 STEP: Checking host policies on egress to remote node 12:43:23 STEP: Checking host policies on ingress from local pod 12:43:23 STEP: Checking host policies on ingress from remote node 12:43:23 STEP: Checking host policies on ingress from remote pod 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 12:43:23 STEP: Checking host policies on egress to local pod 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:43:23 STEP: Checking host policies on egress to remote pod 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 12:43:23 STEP: 
WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:43:23 STEP: WaitforPods(namespace="202304071242k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => === Test Finished at 2023-04-07T12:43:44Z==== 12:43:44 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-04-07T12:43:41.167920939Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 12:43:44 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202304071242k8sdatapathconfighostfirewallwithvxlan testclient-host-57528 1/1 Running 0 56s 192.168.56.11 k8s1 202304071242k8sdatapathconfighostfirewallwithvxlan testclient-host-tt5cz 1/1 Running 0 56s 192.168.56.12 k8s2 202304071242k8sdatapathconfighostfirewallwithvxlan testclient-qvxr6 1/1 Running 0 56s 10.0.1.230 k8s2 202304071242k8sdatapathconfighostfirewallwithvxlan testclient-w75lq 1/1 Running 0 56s 10.0.0.239 k8s1 202304071242k8sdatapathconfighostfirewallwithvxlan testserver-host-frw2t 2/2 Running 0 56s 192.168.56.12 k8s2 202304071242k8sdatapathconfighostfirewallwithvxlan testserver-host-pk274 2/2 Running 0 56s 192.168.56.11 k8s1 202304071242k8sdatapathconfighostfirewallwithvxlan testserver-mqj5r 2/2 Running 0 56s 10.0.1.157 k8s2 202304071242k8sdatapathconfighostfirewallwithvxlan testserver-sffm9 2/2 Running 0 56s 10.0.0.125 k8s1 cilium-monitoring grafana-677f4bb779-gspqk 0/1 Running 0 52m 10.0.1.195 k8s2 cilium-monitoring 
prometheus-579ff57bbb-nfsvb 1/1 Running 0 52m 10.0.1.218 k8s2 kube-system cilium-5l5dc 1/1 Running 0 2m35s 192.168.56.12 k8s2 kube-system cilium-b22lz 1/1 Running 0 2m35s 192.168.56.11 k8s1 kube-system cilium-operator-555f5586db-4qcgx 1/1 Running 1 2m35s 192.168.56.12 k8s2 kube-system cilium-operator-555f5586db-sx77p 1/1 Running 0 2m35s 192.168.56.11 k8s1 kube-system coredns-66585574f-5ndfh 1/1 Running 0 2m5s 10.0.1.22 k8s2 kube-system etcd-k8s1 1/1 Running 0 57m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 57m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 6 57m 192.168.56.11 k8s1 kube-system kube-proxy-gg6s2 1/1 Running 0 53m 192.168.56.12 k8s2 kube-system kube-proxy-l7w79 1/1 Running 0 57m 192.168.56.11 k8s1 kube-system kube-scheduler-k8s1 1/1 Running 5 57m 192.168.56.11 k8s1 kube-system log-gatherer-85jrp 1/1 Running 0 52m 192.168.56.11 k8s1 kube-system log-gatherer-bl2kf 1/1 Running 0 52m 192.168.56.12 k8s2 kube-system registry-adder-9rxgw 1/1 Running 0 53m 192.168.56.12 k8s2 kube-system registry-adder-bhg67 1/1 Running 0 53m 192.168.56.11 k8s1 Stderr: Fetching command output from pods [cilium-5l5dc cilium-b22lz] cmd: kubectl exec -n kube-system cilium-5l5dc -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.18 (v1.18.20) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-40003b43) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 32/32 healthy Proxy Status: OK, ip 10.0.1.9, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 3348/65535 (5.11%), Flows/s: 22.50 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-04-07T12:43:32Z) Stderr: cmd: kubectl exec -n kube-system cilium-5l5dc -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 42 Disabled Disabled 32182 k8s:io.cilium.k8s.policy.cluster=default fd02::13b 10.0.1.157 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304071242k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1113 Disabled Disabled 30174 k8s:io.cilium.k8s.policy.cluster=default fd02::14c 10.0.1.230 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304071242k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 1915 Disabled Disabled 62583 k8s:io.cilium.k8s.policy.cluster=default fd02::134 10.0.1.22 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 2437 Disabled Disabled 4 reserved:health fd02::14d 10.0.1.198 ready 3426 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready 
k8s:status=lockdown reserved:host Stderr: cmd: kubectl exec -n kube-system cilium-b22lz -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.18 (v1.18.20) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-40003b43) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 27/27 healthy Proxy Status: OK, ip 10.0.0.199, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 6496/65535 (9.91%), Flows/s: 47.68 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-04-07T12:43:41Z) Stderr: cmd: kubectl exec -n kube-system cilium-b22lz -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 24 Disabled Disabled 30174 k8s:io.cilium.k8s.policy.cluster=default fd02::99 10.0.0.239 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304071242k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 550 Disabled Disabled 4 reserved:health fd02::43 10.0.0.23 ready 3468 Disabled Disabled 32182 k8s:io.cilium.k8s.policy.cluster=default fd02::49 10.0.0.125 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304071242k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3960 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host Stderr: ===================== Exiting AfterFailed ===================== 12:43:57 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 12:43:57 STEP: Deleting deployment demo_hostfw.yaml 12:43:57 STEP: Deleting namespace 202304071242k8sdatapathconfighostfirewallwithvxlan 12:44:13 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|4fdc9848_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show.

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9//2621/artifact/0b9fa1af_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_TFTP_with_DNS_Proxy_port_collision_Tests_TFTP_from_DNS_Proxy_Port.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9//2621/artifact/1e5ebdfb_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_Checks_service_accessing_itself_(hairpin_flow).zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9//2621/artifact/3c2c9fa9_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_with_L4_policy_Tests_NodePort_with_L4_Policy.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9//2621/artifact/4007b32d_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_with_L7_policy_Tests_NodePort_with_L7_Policy.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9//2621/artifact/4fdc9848_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9//2621/artifact/53f41fde_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_Tests_NodePort_inside_cluster_(kube-proxy)_with_IPSec_and_externalTrafficPolicy=Local.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9//2621/artifact/8e3b2c05_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_Tests_NodePort_inside_cluster_(kube-proxy)_.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9//2621/artifact/aa6feb79_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_Tests_NodePort_inside_cluster_(kube-proxy)_with_externalTrafficPolicy=Local.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9//2621/artifact/bdc75e59_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_Tests_NodePort_inside_cluster_(kube-proxy)_with_the_host_firewall_and_externalTrafficPolicy=Local.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9//2621/artifact/c79f900a_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_Checks_in-cluster_KPR.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9//2621/artifact/test_results_Cilium-PR-K8s-1.18-kernel-4.9_2621_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.18-kernel-4.9/2621/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
maintainer-s-little-helper[bot] commented 1 year ago

PR #24789 hit this flake with 95.87% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.17-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-04-07T12:25:19.890471740Z level=error msg="Interrupt received" subsys=hive
/home/jenkins/workspace/Cilium-PR-K8s-1.17-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-04-07T12:25:19.890471740Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 1 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Interrupt received Number of "context deadline exceeded" in logs: 4 Number of "level=error" in logs: 0 ⚠️ Number of "level=warning" in logs: 17 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 5 errors/warnings: Unable to restore endpoint, ignoring github.com/cilium/cilium/pkg/k8s/watchers/pod.go:146: watch of *v1.Pod ended with: an error on the server (\ github.com/cilium/cilium/pkg/k8s/watchers/namespace.go:63: watch of *v1.Namespace ended with: an error on the server (\ Network status error received, restarting client connections github.com/cilium/cilium/pkg/k8s/watchers/cilium_clusterwide_network_policy.go:97: watch of *v2.CiliumClusterwideNetworkPolicy ended with: an error on the server (\ Cilium pods: [cilium-59pnr cilium-kwwsz] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testserver-gjfwq false false testserver-x9wsb false false grafana-585bb89877-jjrqm false false prometheus-8885c5888-nmfwr false false coredns-6b4fc58d47-txvdq false false testclient-5krxc false false testclient-sqzf6 false false Cilium agent 'cilium-59pnr': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 37 Failed 0 Cilium agent 'cilium-kwwsz': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 33 Failed 0 ```
### Standard Error
Click to show. ```stack-error 12:23:01 STEP: Installing Cilium 12:23:03 STEP: Waiting for Cilium to become ready 12:24:14 STEP: Validating if Kubernetes DNS is deployed 12:24:14 STEP: Checking if deployment is ready 12:24:14 STEP: Checking if kube-dns service is plumbed correctly 12:24:14 STEP: Checking if pods have identity 12:24:14 STEP: Checking if DNS can resolve 12:24:18 STEP: Kubernetes DNS is up and operational 12:24:18 STEP: Validating Cilium Installation 12:24:18 STEP: Performing Cilium controllers preflight check 12:24:18 STEP: Performing Cilium health check 12:24:18 STEP: Checking whether host EP regenerated 12:24:18 STEP: Performing Cilium status preflight check 12:24:26 STEP: Performing Cilium service preflight check 12:24:26 STEP: Performing K8s service preflight check 12:24:32 STEP: Waiting for cilium-operator to be ready 12:24:32 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 12:24:32 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 12:24:32 STEP: Making sure all endpoints are in ready state 12:24:35 STEP: Creating namespace 202304071224k8sdatapathconfighostfirewallwithvxlan 12:24:35 STEP: Deploying demo_hostfw.yaml in namespace 202304071224k8sdatapathconfighostfirewallwithvxlan 12:24:35 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 12:24:35 STEP: WaitforNPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="") 12:24:47 STEP: WaitforNPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="") => 12:24:47 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.17-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 12:25:05 STEP: Checking host policies on egress to remote node 12:25:05 STEP: Checking host policies on ingress from local pod 12:25:05 STEP: Checking host policies on egress to remote pod 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:25:05 STEP: Checking host policies on ingress from remote pod 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 12:25:05 STEP: Checking host policies on egress to local pod 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:25:05 STEP: Checking host policies on ingress from remote node 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:25:05 STEP: 
WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 12:25:05 STEP: WaitforPods(namespace="202304071224k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => === Test Finished at 2023-04-07T12:25:35Z==== 12:25:35 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-04-07T12:25:19.890471740Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 12:25:35 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202304071224k8sdatapathconfighostfirewallwithvxlan testclient-5krxc 1/1 Running 0 65s 10.0.0.157 k8s1 202304071224k8sdatapathconfighostfirewallwithvxlan testclient-host-8nsvn 1/1 Running 0 65s 192.168.56.12 k8s2 202304071224k8sdatapathconfighostfirewallwithvxlan testclient-host-xhs9m 1/1 Running 0 65s 192.168.56.11 k8s1 202304071224k8sdatapathconfighostfirewallwithvxlan testclient-sqzf6 1/1 Running 0 65s 10.0.1.186 k8s2 202304071224k8sdatapathconfighostfirewallwithvxlan testserver-gjfwq 2/2 Running 0 65s 10.0.1.92 k8s2 202304071224k8sdatapathconfighostfirewallwithvxlan testserver-host-ggrg6 2/2 Running 0 65s 192.168.56.12 k8s2 202304071224k8sdatapathconfighostfirewallwithvxlan testserver-host-vtl9z 2/2 Running 0 65s 192.168.56.11 k8s1 202304071224k8sdatapathconfighostfirewallwithvxlan testserver-x9wsb 2/2 Running 0 65s 10.0.0.145 k8s1 cilium-monitoring grafana-585bb89877-jjrqm 1/1 Running 0 15m 10.0.0.102 k8s1 cilium-monitoring prometheus-8885c5888-nmfwr 1/1 Running 0 15m 10.0.0.231 k8s1 kube-system cilium-59pnr 1/1 Running 0 2m37s 192.168.56.11 k8s1 kube-system cilium-kwwsz 1/1 Running 0 2m37s 192.168.56.12 k8s2 kube-system cilium-operator-bfd5f868-5gvkt 1/1 Running 0 2m37s 192.168.56.11 k8s1 kube-system cilium-operator-bfd5f868-wh96s 1/1 Running 1 2m37s 192.168.56.12 k8s2 kube-system coredns-6b4fc58d47-txvdq 1/1 Running 
0 13m 10.0.1.190 k8s2 kube-system etcd-k8s1 1/1 Running 0 20m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 20m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 4 20m 192.168.56.11 k8s1 kube-system kube-proxy-qrvk5 1/1 Running 0 17m 192.168.56.11 k8s1 kube-system kube-proxy-wc6ww 1/1 Running 0 16m 192.168.56.12 k8s2 kube-system kube-scheduler-k8s1 1/1 Running 3 20m 192.168.56.11 k8s1 kube-system log-gatherer-8sfxw 1/1 Running 0 15m 192.168.56.11 k8s1 kube-system log-gatherer-fpnqq 1/1 Running 0 15m 192.168.56.12 k8s2 kube-system registry-adder-jj8wl 1/1 Running 0 16m 192.168.56.11 k8s1 kube-system registry-adder-w2gtm 1/1 Running 0 16m 192.168.56.12 k8s2 Stderr: Fetching command output from pods [cilium-59pnr cilium-kwwsz] cmd: kubectl exec -n kube-system cilium-59pnr -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.17 (v1.17.17) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-40003b43) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 6/254 allocated from 10.0.0.0/24, IPv6: 6/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 37/37 healthy Proxy Status: OK, ip 10.0.0.20, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 3434/65535 (5.24%), Flows/s: 32.17 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-04-07T12:24:25Z) Stderr: cmd: kubectl exec -n kube-system cilium-59pnr -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 698 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 766 Disabled Disabled 3425 k8s:app=prometheus fd02::c7 10.0.0.231 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring 811 Disabled Disabled 59564 k8s:io.cilium.k8s.policy.cluster=default fd02::c4 10.0.0.145 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304071224k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 891 Disabled Disabled 64367 k8s:io.cilium.k8s.policy.cluster=default fd02::2 10.0.0.157 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304071224k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 1328 Disabled Disabled 48433 k8s:app=grafana fd02::77 10.0.0.102 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring 2050 Disabled Disabled 4 reserved:health fd02::67 10.0.0.191 ready Stderr: cmd: kubectl exec -n kube-system cilium-kwwsz -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled 
Kubernetes: Ok 1.17 (v1.17.17) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-40003b43) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 33/33 healthy Proxy Status: OK, ip 10.0.1.241, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2103/65535 (3.21%), Flows/s: 19.39 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-04-07T12:24:31Z) Stderr: cmd: kubectl exec -n kube-system cilium-kwwsz -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 1487 Disabled Disabled 4 reserved:health fd02::141 10.0.1.42 ready 1680 Disabled Disabled 13225 k8s:io.cilium.k8s.policy.cluster=default fd02::19d 10.0.1.190 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 1910 Disabled Disabled 59564 k8s:io.cilium.k8s.policy.cluster=default fd02::118 10.0.1.92 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304071224k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3348 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 regenerating k8s:status=lockdown reserved:host 3501 Disabled Disabled 64367 k8s:io.cilium.k8s.policy.cluster=default fd02::125 10.0.1.186 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304071224k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient Stderr: ===================== Exiting AfterFailed ===================== 12:25:56 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 12:25:56 STEP: Deleting deployment demo_hostfw.yaml 12:25:56 STEP: Deleting namespace 202304071224k8sdatapathconfighostfirewallwithvxlan 12:26:11 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|6160f14f_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show.
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.17-kernel-4.9//1271/artifact/6160f14f_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.17-kernel-4.9//1271/artifact/test_results_Cilium-PR-K8s-1.17-kernel-4.9_1271_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.17-kernel-4.9/1271/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
joestringer commented 1 year ago

@gandro suggested on Slack that this might be another case similar to https://github.com/cilium/cilium/pull/23334. If that's the case, we may consider ignoring the error in CI.
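For context, the flagged line (`level=error msg="Interrupt received" subsys=hive`) appears to come from a cilium-operator instance shutting down during the test (for example after a restart or leader-election handover), so it is likely benign shutdown noise rather than a datapath failure. If we do decide to ignore it, the change would amount to adding this exact message to the CI log checker's exception list. Below is a minimal, self-contained sketch of that whitelist approach; the names used here (`errorsToInvestigate`, `ignoredErrors`, `checkLogLines`) are hypothetical and are not the actual helpers under `test/`:

```go
// Hypothetical sketch of whitelisting the benign "Interrupt received" shutdown
// message in a CI log check. Names are illustrative only and do not refer to
// Cilium's real test helpers.
package main

import (
	"fmt"
	"strings"
)

// Substrings that normally cause a log line to be flagged for investigation.
var errorsToInvestigate = []string{
	"level=error",
	"Cilium API handler panicked",
}

// Known-benign messages that should not fail the test, e.g. the operator
// logging an interrupt while it is intentionally shut down during the test.
var ignoredErrors = []string{
	`level=error msg="Interrupt received" subsys=hive`,
}

func isIgnored(line string) bool {
	for _, allowed := range ignoredErrors {
		if strings.Contains(line, allowed) {
			return true
		}
	}
	return false
}

// checkLogLines returns the log lines that still need investigation after the
// exception list has been applied.
func checkLogLines(logs string) []string {
	var suspicious []string
	for _, line := range strings.Split(logs, "\n") {
		if isIgnored(line) {
			continue
		}
		for _, pattern := range errorsToInvestigate {
			if strings.Contains(line, pattern) {
				suspicious = append(suspicious, line)
				break
			}
		}
	}
	return suspicious
}

func main() {
	logs := `2023-04-07T12:25:19.890471740Z level=error msg="Interrupt received" subsys=hive
2023-04-07T12:25:20.000000000Z level=info msg="Operator shutting down" subsys=hive`
	// With the message whitelisted, nothing is reported and the check would pass.
	fmt.Println(checkLogLines(logs)) // []
}
```

The alternative, if the interrupt really is triggered by the standby operator being terminated mid-test, would be to make that shutdown path log below error level so the message never trips the check; the whitelist above is just the quicker CI-side mitigation.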

maintainer-s-little-helper[bot] commented 1 year ago

PR #24821 hit this flake with 95.87% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace

Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-04-11T23:32:49.356819719Z level=error msg="Interrupt received" subsys=hive
/home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-04-11T23:32:49.356819719Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 1 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Interrupt received Number of "context deadline exceeded" in logs: 4 Number of "level=error" in logs: 0 ⚠️ Number of "level=warning" in logs: 6 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 2 errors/warnings: Unable to restore endpoint, ignoring Key allocation attempt failed Cilium pods: [cilium-6qxjn cilium-hj42l] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testclient-vkldt false false testserver-fxhmx false false testserver-pbgvz false false coredns-bb76b858c-hzwhb false false testclient-tz64d false false Cilium agent 'cilium-6qxjn': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0 Cilium agent 'cilium-hj42l': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 32 Failed 0 ```
### Standard Error
Click to show. ```stack-error 23:30:31 STEP: Installing Cilium 23:30:33 STEP: Waiting for Cilium to become ready 23:31:09 STEP: Validating if Kubernetes DNS is deployed 23:31:09 STEP: Checking if deployment is ready 23:31:09 STEP: Checking if kube-dns service is plumbed correctly 23:31:09 STEP: Checking if pods have identity 23:31:09 STEP: Checking if DNS can resolve 23:31:13 STEP: Kubernetes DNS is up and operational 23:31:13 STEP: Validating Cilium Installation 23:31:13 STEP: Performing Cilium controllers preflight check 23:31:13 STEP: Performing Cilium health check 23:31:13 STEP: Checking whether host EP regenerated 23:31:13 STEP: Performing Cilium status preflight check 23:31:21 STEP: Performing Cilium service preflight check 23:31:21 STEP: Performing K8s service preflight check 23:31:21 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-hj42l': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 23:31:21 STEP: Performing Cilium controllers preflight check 23:31:21 STEP: Performing Cilium health check 23:31:21 STEP: Performing Cilium status preflight check 23:31:21 STEP: Checking whether host EP regenerated 23:31:29 STEP: Performing Cilium service preflight check 23:31:29 STEP: Performing K8s service preflight check 23:31:30 STEP: Performing Cilium status preflight check 23:31:30 STEP: Performing Cilium health check 23:31:30 STEP: Performing Cilium controllers preflight check 23:31:30 STEP: Checking whether host EP regenerated 23:31:37 STEP: Performing Cilium service preflight check 23:31:37 STEP: Performing K8s service preflight check 23:31:38 STEP: Performing Cilium controllers preflight check 23:31:38 STEP: Checking whether host EP regenerated 23:31:38 STEP: Performing Cilium health check 23:31:38 STEP: Performing Cilium status preflight check 23:31:46 STEP: Performing Cilium service preflight check 23:31:46 STEP: Performing K8s service preflight check 23:31:47 STEP: Performing Cilium controllers preflight check 23:31:47 STEP: Performing Cilium health check 23:31:47 STEP: Performing Cilium status preflight check 23:31:47 STEP: Checking whether host EP regenerated 23:31:54 STEP: Performing Cilium service preflight check 23:31:54 STEP: Performing K8s service preflight check 23:32:00 STEP: Waiting for cilium-operator to be ready 23:32:00 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 23:32:01 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 23:32:01 STEP: Making sure all endpoints are in ready state 23:32:03 STEP: Creating namespace 202304112332k8sdatapathconfighostfirewallwithvxlan 23:32:03 STEP: Deploying demo_hostfw.yaml in namespace 202304112332k8sdatapathconfighostfirewallwithvxlan 23:32:04 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 23:32:04 STEP: WaitforNPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="") 23:32:17 STEP: WaitforNPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="") => 23:32:17 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.20-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 23:32:33 STEP: Checking host policies on egress to remote node 23:32:33 STEP: Checking host policies on egress to remote pod 23:32:33 
STEP: Checking host policies on ingress from remote node 23:32:33 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 23:32:33 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 23:32:33 STEP: Checking host policies on ingress from local pod 23:32:33 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 23:32:33 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 23:32:33 STEP: Checking host policies on ingress from remote pod 23:32:33 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 23:32:33 STEP: Checking host policies on egress to local pod 23:32:33 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 23:32:33 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 23:32:33 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 23:32:33 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 23:32:33 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 23:32:33 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 23:32:33 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 23:32:34 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 23:32:34 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 23:32:34 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 23:32:34 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 23:32:34 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 23:32:34 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 23:32:34 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 23:32:34 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 23:32:34 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 23:32:34 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 23:32:34 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 23:32:34 STEP: WaitforPods(namespace="202304112332k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 2023-04-11T23:32:55Z==== 23:32:55 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-04-11T23:32:49.356819719Z level=error msg="Interrupt 
received" subsys=hive ===================== TEST FAILED ===================== 23:32:56 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202304112332k8sdatapathconfighostfirewallwithvxlan testclient-host-9jtv8 1/1 Running 0 57s 192.168.56.12 k8s2 202304112332k8sdatapathconfighostfirewallwithvxlan testclient-host-p5m8x 1/1 Running 0 57s 192.168.56.11 k8s1 202304112332k8sdatapathconfighostfirewallwithvxlan testclient-tz64d 1/1 Running 0 57s 10.0.0.231 k8s1 202304112332k8sdatapathconfighostfirewallwithvxlan testclient-vkldt 1/1 Running 0 57s 10.0.1.71 k8s2 202304112332k8sdatapathconfighostfirewallwithvxlan testserver-fxhmx 2/2 Running 0 57s 10.0.1.33 k8s2 202304112332k8sdatapathconfighostfirewallwithvxlan testserver-host-p2dqn 2/2 Running 0 57s 192.168.56.11 k8s1 202304112332k8sdatapathconfighostfirewallwithvxlan testserver-host-z2zl7 2/2 Running 0 57s 192.168.56.12 k8s2 202304112332k8sdatapathconfighostfirewallwithvxlan testserver-pbgvz 2/2 Running 0 57s 10.0.0.34 k8s1 cilium-monitoring grafana-7ddfc74b5b-9swpx 0/1 Running 0 61m 10.0.0.183 k8s2 cilium-monitoring prometheus-669755c8c5-sj8vt 1/1 Running 0 61m 10.0.0.226 k8s2 kube-system cilium-6qxjn 1/1 Running 0 2m28s 192.168.56.12 k8s2 kube-system cilium-hj42l 1/1 Running 0 2m28s 192.168.56.11 k8s1 kube-system cilium-operator-5d547654b6-mv226 1/1 Running 0 2m28s 192.168.56.11 k8s1 kube-system cilium-operator-5d547654b6-tjt6f 1/1 Running 0 2m28s 192.168.56.12 k8s2 kube-system coredns-bb76b858c-hzwhb 1/1 Running 0 31m 10.0.0.201 k8s1 kube-system etcd-k8s1 1/1 Running 0 65m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 65m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 3 65m 192.168.56.11 k8s1 kube-system kube-proxy-7c6cf 1/1 Running 0 62m 192.168.56.12 k8s2 kube-system kube-proxy-zsjjc 1/1 Running 0 65m 192.168.56.11 k8s1 kube-system kube-scheduler-k8s1 1/1 Running 3 65m 192.168.56.11 k8s1 kube-system log-gatherer-zf625 1/1 Running 0 61m 192.168.56.12 k8s2 kube-system log-gatherer-zsz56 1/1 Running 0 61m 192.168.56.11 k8s1 kube-system registry-adder-htkbj 1/1 Running 0 62m 192.168.56.11 k8s1 kube-system registry-adder-nq5fx 1/1 Running 0 62m 192.168.56.12 k8s2 Stderr: Fetching command output from pods [cilium-6qxjn cilium-hj42l] cmd: kubectl exec -n kube-system cilium-6qxjn -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.20 (v1.20.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-201d08b1) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.1.0/24, IPv6: 4/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.1.137, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: 
Ok Current/Max Flows: 2703/65535 (4.12%), Flows/s: 19.62 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-04-11T23:32:54Z) Stderr: cmd: kubectl exec -n kube-system cilium-6qxjn -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 396 Disabled Disabled 12028 k8s:io.cilium.k8s.policy.cluster=default fd02::150 10.0.1.71 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304112332k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 636 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 2583 Disabled Disabled 24492 k8s:io.cilium.k8s.policy.cluster=default fd02::145 10.0.1.33 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304112332k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 2743 Disabled Disabled 4 reserved:health fd02::1e3 10.0.1.155 ready Stderr: cmd: kubectl exec -n kube-system cilium-hj42l -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.20 (v1.20.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-201d08b1) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 32/32 healthy Proxy Status: OK, ip 10.0.0.66, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 6953/65535 (10.61%), Flows/s: 50.96 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-04-11T23:32:51Z) Stderr: cmd: kubectl exec -n kube-system cilium-hj42l -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 306 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/control-plane k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 645 Disabled Disabled 24492 k8s:io.cilium.k8s.policy.cluster=default fd02::33 10.0.0.34 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304112332k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1442 Disabled Disabled 17865 k8s:io.cilium.k8s.policy.cluster=default fd02::37 10.0.0.201 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 2231 Disabled Disabled 12028 k8s:io.cilium.k8s.policy.cluster=default fd02::4e 10.0.0.231 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304112332k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 3672 Disabled Disabled 4 reserved:health fd02::25 10.0.0.198 
ready Stderr: ===================== Exiting AfterFailed ===================== 23:33:09 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 23:33:09 STEP: Deleting deployment demo_hostfw.yaml 23:33:09 STEP: Deleting namespace 202304112332k8sdatapathconfighostfirewallwithvxlan 23:33:24 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|84aeba60_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show.
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.9//1783/artifact/84aeba60_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.9//1783/artifact/test_results_Cilium-PR-K8s-1.20-kernel-4.9_1783_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.20-kernel-4.9/1783/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
maintainer-s-little-helper[bot] commented 1 year ago

PR #24821 hit this flake with 95.87% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace

Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-04-11T23:00:26.852894256Z level=error msg="Interrupt received" subsys=hive
/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-04-11T23:00:26.852894256Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 1 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Interrupt received Number of "context deadline exceeded" in logs: 4 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 5 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Unable to restore endpoint, ignoring Cilium pods: [cilium-f97t7 cilium-xsntb] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress testclient-5f2r7 false false testclient-klzxr false false testserver-cjqmb false false testserver-mqb2n false false grafana-7ddfc74b5b-58xng false false prometheus-669755c8c5-6khpw false false coredns-bb76b858c-4gp6g false false Cilium agent 'cilium-f97t7': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 38 Failed 0 Cilium agent 'cilium-xsntb': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 31 Failed 0 ```
### Standard Error
Click to show. ```stack-error 22:57:54 STEP: Installing Cilium 22:57:57 STEP: Waiting for Cilium to become ready 22:58:20 STEP: Validating if Kubernetes DNS is deployed 22:58:20 STEP: Checking if deployment is ready 22:58:20 STEP: Checking if pods have identity 22:58:20 STEP: Checking if kube-dns service is plumbed correctly 22:58:20 STEP: Checking if DNS can resolve 22:58:25 STEP: Kubernetes DNS is not ready: 5s timeout expired 22:58:25 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns) 22:58:25 STEP: Waiting for Kubernetes DNS to become operational 22:58:25 STEP: Checking if deployment is ready 22:58:25 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 22:58:26 STEP: Checking if deployment is ready 22:58:26 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 22:58:27 STEP: Checking if deployment is ready 22:58:27 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 22:58:28 STEP: Checking if deployment is ready 22:58:28 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 22:58:29 STEP: Checking if deployment is ready 22:58:29 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 22:58:30 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-xsntb: unable to find service backend 10.0.1.190:53 in datapath of cilium pod cilium-xsntb 22:58:30 STEP: Checking if deployment is ready 22:58:30 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 22:58:31 STEP: Checking if deployment is ready 22:58:31 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 22:58:32 STEP: Checking if deployment is ready 22:58:32 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 22:58:33 STEP: Checking if deployment is ready 22:58:33 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 22:58:34 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-f97t7: unable to find service backend 10.0.1.190:53 in datapath of cilium pod cilium-f97t7 22:58:34 STEP: Checking if deployment is ready 22:58:34 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 22:58:35 STEP: Checking if deployment is ready 22:58:35 STEP: Checking if kube-dns service is plumbed correctly 22:58:35 STEP: Checking if pods have identity 22:58:35 STEP: Checking if DNS can resolve 22:58:39 STEP: Validating Cilium Installation 22:58:39 STEP: Performing Cilium controllers preflight check 22:58:39 STEP: Performing Cilium health check 22:58:39 STEP: Checking whether host EP regenerated 22:58:39 STEP: Performing Cilium status preflight check 22:58:47 STEP: Performing Cilium service preflight check 22:58:47 STEP: Performing K8s service preflight check 22:58:48 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-xsntb': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 22:58:48 STEP: Performing Cilium status preflight check 22:58:48 STEP: Performing Cilium health check 22:58:48 STEP: Performing Cilium controllers preflight check 22:58:48 STEP: Checking whether host EP regenerated 22:58:55 STEP: Performing Cilium service preflight check 22:58:55 STEP: Performing K8s service preflight check 22:58:56 STEP: Performing Cilium status preflight check 
22:58:56 STEP: Performing Cilium controllers preflight check 22:58:56 STEP: Checking whether host EP regenerated 22:58:56 STEP: Performing Cilium health check 22:59:04 STEP: Performing Cilium service preflight check 22:59:04 STEP: Performing K8s service preflight check 22:59:05 STEP: Performing Cilium controllers preflight check 22:59:05 STEP: Performing Cilium health check 22:59:05 STEP: Checking whether host EP regenerated 22:59:05 STEP: Performing Cilium status preflight check 22:59:12 STEP: Performing Cilium service preflight check 22:59:12 STEP: Performing K8s service preflight check 22:59:13 STEP: Performing Cilium status preflight check 22:59:13 STEP: Performing Cilium health check 22:59:13 STEP: Performing Cilium controllers preflight check 22:59:13 STEP: Checking whether host EP regenerated 22:59:21 STEP: Performing Cilium service preflight check 22:59:21 STEP: Performing K8s service preflight check 22:59:22 STEP: Performing Cilium controllers preflight check 22:59:22 STEP: Checking whether host EP regenerated 22:59:22 STEP: Performing Cilium status preflight check 22:59:22 STEP: Performing Cilium health check 22:59:29 STEP: Performing Cilium service preflight check 22:59:29 STEP: Performing K8s service preflight check 22:59:35 STEP: Waiting for cilium-operator to be ready 22:59:35 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 22:59:35 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 22:59:35 STEP: Making sure all endpoints are in ready state 22:59:38 STEP: Creating namespace 202304112259k8sdatapathconfighostfirewallwithvxlan 22:59:38 STEP: Deploying demo_hostfw.yaml in namespace 202304112259k8sdatapathconfighostfirewallwithvxlan 22:59:38 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 22:59:38 STEP: WaitforNPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="") 22:59:50 STEP: WaitforNPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="") => 22:59:50 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 23:00:07 STEP: Checking host policies on egress to remote node 23:00:07 STEP: Checking host policies on ingress from remote pod 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 23:00:07 STEP: Checking host policies on egress to local pod 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 23:00:07 STEP: Checking host policies on ingress from remote node 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 23:00:07 STEP: Checking host policies on ingress from local pod 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 23:00:07 STEP: Checking host policies on egress to remote pod 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l 
zgroup=testClientHost") => 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 23:00:07 STEP: WaitforPods(namespace="202304112259k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 2023-04-11T23:00:29Z==== 23:00:29 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-04-11T23:00:26.852894256Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 23:00:29 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202304112259k8sdatapathconfighostfirewallwithvxlan testclient-5f2r7 1/1 Running 0 56s 10.0.1.190 k8s2 202304112259k8sdatapathconfighostfirewallwithvxlan testclient-host-5kpkt 1/1 Running 0 56s 192.168.56.11 k8s1 202304112259k8sdatapathconfighostfirewallwithvxlan testclient-host-c2hjd 1/1 Running 0 56s 192.168.56.12 k8s2 202304112259k8sdatapathconfighostfirewallwithvxlan testclient-klzxr 1/1 Running 0 56s 10.0.0.7 k8s1 202304112259k8sdatapathconfighostfirewallwithvxlan testserver-cjqmb 2/2 Running 0 56s 10.0.1.201 k8s2 202304112259k8sdatapathconfighostfirewallwithvxlan testserver-host-lpctp 2/2 Running 0 56s 192.168.56.12 k8s2 202304112259k8sdatapathconfighostfirewallwithvxlan testserver-host-r87pr 2/2 Running 0 56s 192.168.56.11 k8s1 202304112259k8sdatapathconfighostfirewallwithvxlan testserver-mqb2n 2/2 Running 0 56s 10.0.0.73 k8s1 cilium-monitoring grafana-7ddfc74b5b-58xng 1/1 Running 0 28m 10.0.1.145 k8s2 
cilium-monitoring prometheus-669755c8c5-6khpw 1/1 Running 0 28m 10.0.1.49 k8s2 kube-system cilium-f97t7 1/1 Running 0 2m37s 192.168.56.12 k8s2 kube-system cilium-operator-5d547654b6-9qbsh 1/1 Running 0 2m37s 192.168.56.11 k8s1 kube-system cilium-operator-5d547654b6-hv7gp 1/1 Running 0 2m37s 192.168.56.12 k8s2 kube-system cilium-xsntb 1/1 Running 0 2m37s 192.168.56.11 k8s1 kube-system coredns-bb76b858c-4gp6g 1/1 Running 0 2m9s 10.0.0.20 k8s1 kube-system etcd-k8s1 1/1 Running 0 32m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 32m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 4 32m 192.168.56.11 k8s1 kube-system kube-proxy-fvlds 1/1 Running 0 29m 192.168.56.12 k8s2 kube-system kube-proxy-vvnx4 1/1 Running 0 29m 192.168.56.11 k8s1 kube-system kube-scheduler-k8s1 1/1 Running 4 32m 192.168.56.11 k8s1 kube-system log-gatherer-8qnkp 1/1 Running 0 28m 192.168.56.11 k8s1 kube-system log-gatherer-n9q9d 1/1 Running 0 28m 192.168.56.12 k8s2 kube-system registry-adder-6fdcd 1/1 Running 0 29m 192.168.56.12 k8s2 kube-system registry-adder-fnxzw 1/1 Running 0 29m 192.168.56.11 k8s1 Stderr: Fetching command output from pods [cilium-f97t7 cilium-xsntb] cmd: kubectl exec -n kube-system cilium-f97t7 -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.19 (v1.19.16) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-201d08b1) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 6/254 allocated from 10.0.1.0/24, IPv6: 6/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 38/38 healthy Proxy Status: OK, ip 10.0.1.94, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 3334/65535 (5.09%), Flows/s: 24.11 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-04-11T22:59:28Z) Stderr: cmd: kubectl exec -n kube-system cilium-f97t7 -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 119 Disabled Disabled 10407 k8s:app=prometheus fd02::114 10.0.1.49 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring 270 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 363 Disabled Disabled 32763 k8s:io.cilium.k8s.policy.cluster=default fd02::129 10.0.1.201 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304112259k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 365 Disabled Disabled 4 reserved:health fd02::1c5 10.0.1.128 ready 1453 Disabled Disabled 14688 k8s:io.cilium.k8s.policy.cluster=default fd02::1af 10.0.1.190 ready k8s:io.cilium.k8s.policy.serviceaccount=default 
k8s:io.kubernetes.pod.namespace=202304112259k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 2827 Disabled Disabled 23163 k8s:app=grafana fd02::1e5 10.0.1.145 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring Stderr: cmd: kubectl exec -n kube-system cilium-xsntb -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.19 (v1.19.16) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-201d08b1) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.0.0/24, IPv6: 5/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 31/31 healthy Proxy Status: OK, ip 10.0.0.240, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 7980/65535 (12.18%), Flows/s: 56.39 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-04-11T22:59:35Z) Stderr: cmd: kubectl exec -n kube-system cilium-xsntb -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 209 Disabled Disabled 12789 k8s:io.cilium.k8s.policy.cluster=default fd02::38 10.0.0.20 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 1076 Disabled Disabled 32763 k8s:io.cilium.k8s.policy.cluster=default fd02::d0 10.0.0.73 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304112259k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1985 Disabled Disabled 14688 k8s:io.cilium.k8s.policy.cluster=default fd02::30 10.0.0.7 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304112259k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 2344 Disabled Disabled 4 reserved:health fd02::67 10.0.0.230 ready 2599 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host Stderr: ===================== Exiting AfterFailed ===================== 23:00:43 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 23:00:43 STEP: Deleting deployment demo_hostfw.yaml 23:00:43 STEP: Deleting namespace 202304112259k8sdatapathconfighostfirewallwithvxlan 23:00:58 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|54194efd_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show.
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9//2788/artifact/54194efd_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9//2788/artifact/985bc4dd_K8sDatapathLRPTests_Checks_local_redirect_policy.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9//2788/artifact/test_results_Cilium-PR-K8s-1.19-kernel-4.9_2788_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9/2788/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
maintainer-s-little-helper[bot] commented 1 year ago

PR #24831 hit this flake with 95.87% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace

Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-04-13T10:40:30.809243292Z level=error msg="Interrupt received" subsys=hive
/home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-04-13T10:40:30.809243292Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 1 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Interrupt received Number of "context deadline exceeded" in logs: 4 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 5 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Unable to restore endpoint, ignoring Cilium pods: [cilium-njbbv cilium-wtw6q] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress prometheus-669755c8c5-mnfm9 false false coredns-bb76b858c-b4jjr false false testclient-b5dft false false testclient-x96pl false false testserver-gkw8d false false testserver-kx76c false false grafana-7ddfc74b5b-gdwdc false false Cilium agent 'cilium-njbbv': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 28 Failed 0 Cilium agent 'cilium-wtw6q': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 42 Failed 0 ```
### Standard Error
Click to show. ```stack-error 10:38:11 STEP: Installing Cilium 10:38:13 STEP: Waiting for Cilium to become ready 10:38:59 STEP: Validating if Kubernetes DNS is deployed 10:38:59 STEP: Checking if deployment is ready 10:38:59 STEP: Checking if kube-dns service is plumbed correctly 10:38:59 STEP: Checking if DNS can resolve 10:38:59 STEP: Checking if pods have identity 10:39:03 STEP: Kubernetes DNS is up and operational 10:39:03 STEP: Validating Cilium Installation 10:39:03 STEP: Performing Cilium controllers preflight check 10:39:03 STEP: Performing Cilium health check 10:39:03 STEP: Performing Cilium status preflight check 10:39:03 STEP: Checking whether host EP regenerated 10:39:11 STEP: Performing Cilium service preflight check 10:39:11 STEP: Performing K8s service preflight check 10:39:11 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-njbbv': Exitcode: 1 Err: exit status 1 Stdout: Stderr: Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory command terminated with exit code 1 10:39:11 STEP: Performing Cilium status preflight check 10:39:11 STEP: Performing Cilium health check 10:39:11 STEP: Checking whether host EP regenerated 10:39:11 STEP: Performing Cilium controllers preflight check 10:39:18 STEP: Performing Cilium service preflight check 10:39:18 STEP: Performing K8s service preflight check 10:39:18 STEP: Performing Cilium controllers preflight check 10:39:18 STEP: Performing Cilium health check 10:39:18 STEP: Checking whether host EP regenerated 10:39:18 STEP: Performing Cilium status preflight check 10:39:26 STEP: Performing Cilium service preflight check 10:39:26 STEP: Performing K8s service preflight check 10:39:26 STEP: Performing Cilium controllers preflight check 10:39:26 STEP: Performing Cilium status preflight check 10:39:26 STEP: Performing Cilium health check 10:39:26 STEP: Checking whether host EP regenerated 10:39:33 STEP: Performing Cilium service preflight check 10:39:33 STEP: Performing K8s service preflight check 10:39:33 STEP: Performing Cilium controllers preflight check 10:39:33 STEP: Performing Cilium health check 10:39:33 STEP: Checking whether host EP regenerated 10:39:33 STEP: Performing Cilium status preflight check 10:39:41 STEP: Performing Cilium service preflight check 10:39:41 STEP: Performing K8s service preflight check 10:39:47 STEP: Waiting for cilium-operator to be ready 10:39:47 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 10:39:47 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 10:39:47 STEP: Making sure all endpoints are in ready state 10:39:50 STEP: Creating namespace 202304131039k8sdatapathconfighostfirewallwithvxlan 10:39:50 STEP: Deploying demo_hostfw.yaml in namespace 202304131039k8sdatapathconfighostfirewallwithvxlan 10:39:50 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 10:39:50 STEP: WaitforNPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="") 10:40:02 STEP: WaitforNPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="") => 10:40:02 STEP: Applying policies /home/jenkins/workspace/Cilium-PR-K8s-1.19-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 10:40:20 STEP: Checking host policies on egress to remote pod 10:40:20 STEP: Checking host policies on ingress from remote pod 10:40:20 
STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 10:40:20 STEP: Checking host policies on egress to remote node 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 10:40:20 STEP: Checking host policies on ingress from remote node 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 10:40:20 STEP: Checking host policies on egress to local pod 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 10:40:20 STEP: Checking host policies on ingress from local pod 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 10:40:20 STEP: WaitforPods(namespace="202304131039k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 2023-04-13T10:40:42Z==== 10:40:42 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-04-13T10:40:30.809243292Z level=error msg="Interrupt 
received" subsys=hive ===================== TEST FAILED ===================== 10:40:42 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202304131039k8sdatapathconfighostfirewallwithvxlan testclient-b5dft 1/1 Running 0 57s 10.0.1.170 k8s1 202304131039k8sdatapathconfighostfirewallwithvxlan testclient-host-lkh26 1/1 Running 0 57s 192.168.56.12 k8s2 202304131039k8sdatapathconfighostfirewallwithvxlan testclient-host-z7rck 1/1 Running 0 57s 192.168.56.11 k8s1 202304131039k8sdatapathconfighostfirewallwithvxlan testclient-x96pl 1/1 Running 0 57s 10.0.0.235 k8s2 202304131039k8sdatapathconfighostfirewallwithvxlan testserver-gkw8d 2/2 Running 0 57s 10.0.1.125 k8s1 202304131039k8sdatapathconfighostfirewallwithvxlan testserver-host-8fv95 2/2 Running 0 57s 192.168.56.11 k8s1 202304131039k8sdatapathconfighostfirewallwithvxlan testserver-host-dz8x4 2/2 Running 0 57s 192.168.56.12 k8s2 202304131039k8sdatapathconfighostfirewallwithvxlan testserver-kx76c 2/2 Running 0 57s 10.0.0.25 k8s2 cilium-monitoring grafana-7ddfc74b5b-gdwdc 1/1 Running 0 21m 10.0.1.77 k8s1 cilium-monitoring prometheus-669755c8c5-mnfm9 1/1 Running 0 21m 10.0.1.86 k8s1 kube-system cilium-njbbv 1/1 Running 0 2m34s 192.168.56.12 k8s2 kube-system cilium-operator-6f5b6f5864-2pxq9 1/1 Running 0 2m34s 192.168.56.11 k8s1 kube-system cilium-operator-6f5b6f5864-djp2g 1/1 Running 0 2m34s 192.168.56.12 k8s2 kube-system cilium-wtw6q 1/1 Running 0 2m34s 192.168.56.11 k8s1 kube-system coredns-bb76b858c-b4jjr 1/1 Running 0 18m 10.0.1.41 k8s1 kube-system etcd-k8s1 1/1 Running 0 25m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 25m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 3 25m 192.168.56.11 k8s1 kube-system kube-proxy-6gdwk 1/1 Running 0 22m 192.168.56.12 k8s2 kube-system kube-proxy-gnb42 1/1 Running 0 23m 192.168.56.11 k8s1 kube-system kube-scheduler-k8s1 1/1 Running 3 25m 192.168.56.11 k8s1 kube-system log-gatherer-2mgjl 1/1 Running 0 22m 192.168.56.11 k8s1 kube-system log-gatherer-nfsq5 1/1 Running 0 22m 192.168.56.12 k8s2 kube-system registry-adder-2t7f5 1/1 Running 0 22m 192.168.56.12 k8s2 kube-system registry-adder-fkchg 1/1 Running 0 22m 192.168.56.11 k8s1 Stderr: Fetching command output from pods [cilium-njbbv cilium-wtw6q] cmd: kubectl exec -n kube-system cilium-njbbv -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.19 (v1.19.16) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-3fb3b1ee) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 28/28 healthy Proxy Status: OK, ip 10.0.0.80, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok 
Current/Max Flows: 2670/65535 (4.07%), Flows/s: 19.25 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-04-13T10:40:32Z) Stderr: cmd: kubectl exec -n kube-system cilium-njbbv -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 890 Disabled Disabled 4 reserved:health fd02::78 10.0.0.98 ready 1346 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 1402 Disabled Disabled 17065 k8s:io.cilium.k8s.policy.cluster=default fd02::95 10.0.0.25 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304131039k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3953 Disabled Disabled 48950 k8s:io.cilium.k8s.policy.cluster=default fd02::a3 10.0.0.235 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304131039k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient Stderr: cmd: kubectl exec -n kube-system cilium-wtw6q -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.19 (v1.19.16) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1beta1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-3fb3b1ee) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 7/254 allocated from 10.0.1.0/24, IPv6: 7/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 42/42 healthy Proxy Status: OK, ip 10.0.1.43, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 6277/65535 (9.58%), Flows/s: 47.73 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-04-13T10:40:47Z) Stderr: cmd: kubectl exec -n kube-system cilium-wtw6q -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 197 Disabled Disabled 4 reserved:health fd02::10c 10.0.1.117 ready 977 Disabled Disabled 48950 k8s:io.cilium.k8s.policy.cluster=default fd02::147 10.0.1.170 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304131039k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 989 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 1215 Disabled Disabled 17065 k8s:io.cilium.k8s.policy.cluster=default fd02::119 10.0.1.125 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304131039k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 1517 Disabled Disabled 24726 k8s:app=grafana fd02::1fc 10.0.1.77 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=cilium-monitoring 3690 Disabled Disabled 1271 
k8s:io.cilium.k8s.policy.cluster=default fd02::107 10.0.1.41 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 3983 Disabled Disabled 40044 k8s:app=prometheus fd02::132 10.0.1.86 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring Stderr: ===================== Exiting AfterFailed ===================== 10:40:56 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 10:40:56 STEP: Deleting deployment demo_hostfw.yaml 10:40:56 STEP: Deleting namespace 202304131039k8sdatapathconfighostfirewallwithvxlan 10:41:11 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|3c30b6d2_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show.
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9//2800/artifact/3c30b6d2_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9//2800/artifact/test_results_Cilium-PR-K8s-1.19-kernel-4.9_2800_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.19-kernel-4.9/2800/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
maintainer-s-little-helper[bot] commented 1 year ago

PR #24849 hit this flake with 97.53% similarity:

Click to show.

### Test Name

```test-name
K8sDatapathConfig Host firewall With VXLAN
```

### Failure Output

```failure-output
FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
```

### Stacktrace
Click to show.

```stack-trace
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:415
Found 1 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-04-13T18:10:18.333894624Z level=error msg="Interrupt received" subsys=hive
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:413
```
### Standard Output
Click to show. ```stack-output Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 No errors/warnings found in logs ⚠️ Found "2023-04-13T18:10:18.333894624Z level=error msg=\"Interrupt received\" subsys=hive" in logs 1 times Number of "context deadline exceeded" in logs: 0 Number of "level=error" in logs: 1 Number of "level=warning" in logs: 0 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Interrupt received Number of "context deadline exceeded" in logs: 4 Number of "level=error" in logs: 0 Number of "level=warning" in logs: 5 Number of "Cilium API handler panicked" in logs: 0 Number of "Goroutine took lock for more than" in logs: 0 Top 1 errors/warnings: Unable to restore endpoint, ignoring Cilium pods: [cilium-2rnh6 cilium-2xp9v] Netpols loaded: CiliumNetworkPolicies loaded: Endpoint Policy Enforcement: Pod Ingress Egress coredns-758664cbbf-fnjfp false false testclient-bt7p9 false false testclient-zdm7p false false testserver-vh2v6 false false testserver-z7vsj false false grafana-585bb89877-q6clg false false prometheus-8885c5888-k4ptn false false Cilium agent 'cilium-2rnh6': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 32 Failed 0 Cilium agent 'cilium-2xp9v': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 37 Failed 0 ```
### Standard Error
Click to show. ```stack-error 18:07:20 STEP: Installing Cilium 18:07:23 STEP: Waiting for Cilium to become ready 18:09:03 STEP: Validating if Kubernetes DNS is deployed 18:09:03 STEP: Checking if deployment is ready 18:09:03 STEP: Checking if kube-dns service is plumbed correctly 18:09:03 STEP: Checking if pods have identity 18:09:03 STEP: Checking if DNS can resolve 18:09:08 STEP: Kubernetes DNS is not ready: 5s timeout expired 18:09:08 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns) 18:09:08 STEP: Waiting for Kubernetes DNS to become operational 18:09:08 STEP: Checking if deployment is ready 18:09:09 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 18:09:09 STEP: Checking if deployment is ready 18:09:10 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 18:09:10 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-2xp9v: unable to find service backend 10.0.1.159:53 in datapath of cilium pod cilium-2xp9v 18:09:10 STEP: Checking if deployment is ready 18:09:11 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 18:09:11 STEP: Checking if deployment is ready 18:09:12 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 18:09:12 STEP: Checking if deployment is ready 18:09:13 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 18:09:13 STEP: Checking if deployment is ready 18:09:14 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 18:09:14 STEP: Checking if deployment is ready 18:09:15 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 18:09:15 STEP: Checking if deployment is ready 18:09:16 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 18:09:16 STEP: Checking if deployment is ready 18:09:17 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 18:09:17 STEP: Checking if deployment is ready 18:09:18 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 18:09:18 STEP: Checking if deployment is ready 18:09:19 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available 18:09:19 STEP: Checking if deployment is ready 18:09:20 STEP: Checking if kube-dns service is plumbed correctly 18:09:20 STEP: Checking if pods have identity 18:09:20 STEP: Checking if DNS can resolve 18:09:23 STEP: Validating Cilium Installation 18:09:23 STEP: Performing Cilium controllers preflight check 18:09:23 STEP: Checking whether host EP regenerated 18:09:23 STEP: Performing Cilium status preflight check 18:09:23 STEP: Performing Cilium health check 18:09:31 STEP: Performing Cilium service preflight check 18:09:31 STEP: Performing K8s service preflight check 18:09:37 STEP: Waiting for cilium-operator to be ready 18:09:37 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") 18:09:37 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => 18:09:37 STEP: Making sure all endpoints are in ready state 18:09:40 STEP: Creating namespace 202304131809k8sdatapathconfighostfirewallwithvxlan 18:09:40 STEP: Deploying demo_hostfw.yaml in namespace 202304131809k8sdatapathconfighostfirewallwithvxlan 18:09:40 STEP: Waiting for 4m0s for 8 pods of deployment demo_hostfw.yaml to become ready 18:09:40 STEP: WaitforNPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="") 18:09:48 STEP: WaitforNPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="") => 18:09:48 STEP: Applying policies 
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.9/src/github.com/cilium/cilium/test/k8s/manifests/host-policies.yaml 18:10:05 STEP: Checking host policies on egress to remote node 18:10:05 STEP: Checking host policies on ingress from remote pod 18:10:05 STEP: Checking host policies on egress to remote pod 18:10:05 STEP: Checking host policies on ingress from remote node 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 18:10:05 STEP: Checking host policies on egress to local pod 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 18:10:05 STEP: Checking host policies on ingress from local pod 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClient") => 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testClientHost") => 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServer") => 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => 18:10:05 STEP: WaitforPods(namespace="202304131809k8sdatapathconfighostfirewallwithvxlan", filter="-l zgroup=testServerHost") => === Test Finished at 
2023-04-13T18:10:26Z==== 18:10:26 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig FAIL: Found 1 io.cilium/app=operator logs matching list of errors that must be investigated: 2023-04-13T18:10:18.333894624Z level=error msg="Interrupt received" subsys=hive ===================== TEST FAILED ===================== 18:10:26 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig cmd: kubectl get pods -o wide --all-namespaces Exitcode: 0 Stdout: NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 202304131809k8sdatapathconfighostfirewallwithvxlan testclient-bt7p9 1/1 Running 0 51s 10.0.1.56 k8s2 202304131809k8sdatapathconfighostfirewallwithvxlan testclient-host-7g72d 1/1 Running 0 51s 192.168.56.12 k8s2 202304131809k8sdatapathconfighostfirewallwithvxlan testclient-host-b95jc 1/1 Running 0 51s 192.168.56.11 k8s1 202304131809k8sdatapathconfighostfirewallwithvxlan testclient-zdm7p 1/1 Running 0 51s 10.0.0.46 k8s1 202304131809k8sdatapathconfighostfirewallwithvxlan testserver-host-5tqtb 2/2 Running 0 51s 192.168.56.11 k8s1 202304131809k8sdatapathconfighostfirewallwithvxlan testserver-host-h7trb 2/2 Running 0 51s 192.168.56.12 k8s2 202304131809k8sdatapathconfighostfirewallwithvxlan testserver-vh2v6 2/2 Running 0 51s 10.0.0.137 k8s1 202304131809k8sdatapathconfighostfirewallwithvxlan testserver-z7vsj 2/2 Running 0 51s 10.0.1.189 k8s2 cilium-monitoring grafana-585bb89877-q6clg 1/1 Running 0 31m 10.0.0.176 k8s1 cilium-monitoring prometheus-8885c5888-k4ptn 1/1 Running 0 31m 10.0.0.158 k8s1 kube-system cilium-2rnh6 1/1 Running 0 3m8s 192.168.56.12 k8s2 kube-system cilium-2xp9v 1/1 Running 0 3m8s 192.168.56.11 k8s1 kube-system cilium-operator-6447d97956-54vlp 1/1 Running 0 3m8s 192.168.56.12 k8s2 kube-system cilium-operator-6447d97956-jw96s 1/1 Running 0 3m8s 192.168.56.11 k8s1 kube-system coredns-758664cbbf-fnjfp 1/1 Running 0 83s 10.0.1.50 k8s2 kube-system etcd-k8s1 1/1 Running 0 34m 192.168.56.11 k8s1 kube-system kube-apiserver-k8s1 1/1 Running 0 35m 192.168.56.11 k8s1 kube-system kube-controller-manager-k8s1 1/1 Running 4 35m 192.168.56.11 k8s1 kube-system kube-proxy-nkgnj 1/1 Running 0 33m 192.168.56.11 k8s1 kube-system kube-proxy-p7bdh 1/1 Running 0 32m 192.168.56.12 k8s2 kube-system kube-scheduler-k8s1 1/1 Running 4 35m 192.168.56.11 k8s1 kube-system log-gatherer-cdtx2 1/1 Running 0 31m 192.168.56.11 k8s1 kube-system log-gatherer-krptd 1/1 Running 0 31m 192.168.56.12 k8s2 kube-system registry-adder-9kpf6 1/1 Running 0 32m 192.168.56.12 k8s2 kube-system registry-adder-g6rtr 1/1 Running 0 32m 192.168.56.11 k8s1 Stderr: Fetching command output from pods [cilium-2rnh6 cilium-2xp9v] cmd: kubectl exec -n kube-system cilium-2rnh6 -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-85714f8d) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 5/254 allocated from 10.0.1.0/24, IPv6: 5/254 allocated from fd02::100/120 IPv6 BIG TCP: Disabled BandwidthManager: 
Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 32/32 healthy Proxy Status: OK, ip 10.0.1.160, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 2164/65535 (3.30%), Flows/s: 14.82 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-04-13T18:09:30Z) Stderr: cmd: kubectl exec -n kube-system cilium-2rnh6 -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 416 Disabled Disabled 4 reserved:health fd02::1c9 10.0.1.89 ready 550 Disabled Disabled 35691 k8s:io.cilium.k8s.policy.cluster=default fd02::1d4 10.0.1.50 ready k8s:io.cilium.k8s.policy.serviceaccount=coredns k8s:io.kubernetes.pod.namespace=kube-system k8s:k8s-app=kube-dns 1027 Disabled Disabled 19472 k8s:io.cilium.k8s.policy.cluster=default fd02::114 10.0.1.56 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304131809k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 2399 Enabled Enabled 1 k8s:cilium.io/ci-node=k8s2 ready k8s:status=lockdown reserved:host 2568 Disabled Disabled 15475 k8s:io.cilium.k8s.policy.cluster=default fd02::1ce 10.0.1.189 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304131809k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer Stderr: cmd: kubectl exec -n kube-system cilium-2xp9v -c cilium-agent -- cilium status Exitcode: 0 Stdout: KVStore: Ok Disabled Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64] Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"] KubeProxyReplacement: Disabled Host firewall: Enabled [enp0s16, enp0s3, enp0s8] CNI Chaining: none CNI Config file: CNI configuration file management disabled Cilium: Ok 1.13.1 (v1.13.1-85714f8d) NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory Cilium health daemon: Ok IPAM: IPv4: 6/254 allocated from 10.0.0.0/24, IPv6: 6/254 allocated from fd02::/120 IPv6 BIG TCP: Disabled BandwidthManager: Disabled Host Routing: Legacy Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled] Controller Status: 37/37 healthy Proxy Status: OK, ip 10.0.0.75, 0 redirects active on ports 10000-20000 Global Identity Range: min 256, max 65535 Hubble: Ok Current/Max Flows: 4012/65535 (6.12%), Flows/s: 41.65 Metrics: Disabled Encryption: Disabled Cluster health: 2/2 reachable (2023-04-13T18:09:37Z) Stderr: cmd: kubectl exec -n kube-system cilium-2xp9v -c cilium-agent -- cilium endpoint list Exitcode: 0 Stdout: ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS ENFORCEMENT ENFORCEMENT 26 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready k8s:node-role.kubernetes.io/master k8s:status=lockdown reserved:host 166 Disabled Disabled 13555 k8s:app=prometheus fd02::cf 10.0.0.158 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s k8s:io.kubernetes.pod.namespace=cilium-monitoring 1894 Disabled Disabled 38722 k8s:app=grafana fd02::2f 10.0.0.176 ready k8s:io.cilium.k8s.policy.cluster=default k8s:io.cilium.k8s.policy.serviceaccount=default 
k8s:io.kubernetes.pod.namespace=cilium-monitoring 2250 Disabled Disabled 15475 k8s:io.cilium.k8s.policy.cluster=default fd02::1 10.0.0.137 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304131809k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testServer 3808 Disabled Disabled 19472 k8s:io.cilium.k8s.policy.cluster=default fd02::97 10.0.0.46 ready k8s:io.cilium.k8s.policy.serviceaccount=default k8s:io.kubernetes.pod.namespace=202304131809k8sdatapathconfighostfirewallwithvxlan k8s:test=hostfw k8s:zgroup=testClient 3873 Disabled Disabled 4 reserved:health fd02::c1 10.0.0.205 ready Stderr: ===================== Exiting AfterFailed ===================== 18:10:39 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig 18:10:39 STEP: Deleting deployment demo_hostfw.yaml 18:10:39 STEP: Deleting namespace 202304131809k8sdatapathconfighostfirewallwithvxlan 18:10:55 STEP: Running AfterEach for block EntireTestsuite [[ATTACHMENT|b940db61_K8sDatapathConfig_Host_firewall_With_VXLAN.zip]] ```
ZIP Links:
Click to show.
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//4254/artifact/b940db61_K8sDatapathConfig_Host_firewall_With_VXLAN.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9//4254/artifact/test_results_Cilium-PR-K8s-1.16-kernel-4.9_4254_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.9/4254/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
giorio94 commented 1 year ago

Hit very similar errors in a different test (K8sPolicyTestExtended Validate toEntities KubeAPIServer Allows connection to KubeAPIServer) in https://github.com/cilium/cilium/pull/24785:

2023-04-19T10:30:55.913277228Z level=error msg="Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)" subsys=klog
2023-04-19T10:30:55.913395395Z error retrieving resource lock kube-system/cilium-operator-resource-lock: unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)
2023-04-19T10:30:55.913454734Z level=error msg="error retrieving resource lock kube-system/cilium-operator-resource-lock: unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)" subsys=klog

Link: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next/1895/testReport/junit/Suite-k8s-1/26/K8sPolicyTestExtended_Validate_toEntities_KubeAPIServer_Allows_connection_to_KubeAPIServer/

Sysdump: e4434197_K8sPolicyTestExtended_Validate_toEntities_KubeAPIServer_Allows_connection_to_KubeAPIServer.zip
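
For context on the log lines above: the `error retrieving resource lock kube-system/cilium-operator-resource-lock` messages are emitted by client-go's leader-election code (surfaced through the operator's klog bridge, hence `subsys=klog`) when the operator cannot read or renew its Lease before the client call times out. The sketch below is a minimal, self-contained leader-election loop using that same mechanism; it is illustrative only, not the cilium-operator implementation, and the lease name and timings are assumptions chosen to mirror the log above.

```go
// Minimal client-go leader-election sketch (illustrative only, not the
// cilium-operator code). When the Lease Get call fails, for example because
// of a client timeout, client-go logs
// "error retrieving resource lock <namespace>/<name>: ..." and retries.
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// In-cluster config; outside a cluster one would build it from a kubeconfig.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		// Hypothetical lease, named to mirror the one in the logs above.
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "example-operator-resource-lock"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // how long a lease stays valid without renewal
		RenewDeadline: 10 * time.Second, // the leader must renew within this window
		RetryPeriod:   2 * time.Second,  // how often acquire/renew is retried
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Controllers would start here once leadership is acquired.
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				// Leadership lost, e.g. renew calls timed out past RenewDeadline.
			},
		},
	})
}
```

Under this mechanism, transient apiserver slowness or a client timeout during test teardown is enough to produce `level=error` lines without any functional breakage, which is consistent with these reports tripping only on the operator log scan rather than on any connectivity check.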

pchaigno commented 1 year ago

Closing this as it's becoming a mix of several very different flakes. We'll reopen with proper issues. Please point to the actual log message failing the test if you hit this again.
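
As a practical aid for that last point, the following is a hypothetical stand-alone helper (not the Ginkgo framework's actual assertion code) that scans a cilium-operator log file, e.g. one extracted from the attached sysdump zip, for the message patterns the CI log check counts, so the exact offending line can be quoted in a new issue.

```go
// Hypothetical helper: print the operator log lines that would trip the
// "logs matching list of errors that must be investigated" check.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Patterns taken from the counters shown in the Standard Output sections above.
var badSubstrings = []string{
	"level=error",
	"context deadline exceeded",
	"Cilium API handler panicked",
	"Goroutine took lock for more than",
}

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: logscan <operator-log-file>")
		os.Exit(1)
	}
	f, err := os.Open(os.Args[1])
	if err != nil {
		panic(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long log lines
	for scanner.Scan() {
		line := scanner.Text()
		for _, bad := range badSubstrings {
			if strings.Contains(line, bad) {
				fmt.Println(line)
				break
			}
		}
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}
}
```

Run against the operator log from one of the sysdumps above, a scan like this should surface the `Interrupt received` and `error retrieving resource lock` lines quoted in this issue.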