cilium / cilium

eBPF-based Networking, Security, and Observability
https://cilium.io
Apache License 2.0

CFP: Enable IPv6 neighbor solicitation passthrough for generic veth chaining. #32003

Open zhangzujian opened 2 months ago

zhangzujian commented 2 months ago

Cilium Feature Proposal

Enable IPv6 neighbor solicitation (NDP) passthrough for generic veth chaining, just like the existing ARP passthrough mode.

Is your proposed feature related to a problem?

When the master CNI plugin uses a cluster-wide IPAM plugin, such as whereabouts, pods running on different nodes cannot communicate with each other over IPv6.

Describe the feature you'd like

(Optional) Describe your proposed solution

aditighag commented 2 months ago

When the master CNI plugin uses a cluster-wide IPAM plugin, such as whereabouts, pods running on different nodes cannot communicate with each other over IPv6.

Can you add more details to the proposal? Did you notice packet drops in the Cilium datapath?

zhangzujian commented 2 months ago

Scenario 1: bridge + whereabouts (without Cilium chaining)

CNI config:

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "br0",
      "isDefaultGateway": true,
      "forceAddress": false,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "whereabouts",
        "range": "2001::/112",
        "exclude": [
          "2001::/120"
        ]
      }
    }
  ]
}
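
On each node, the bridge plugin creates br0 and attaches the host side of every pod veth to it; a quick sanity check looks like this (run on the node, interface and MAC names are specific to my environment):

$ ip link show master br0
$ bridge fdb show br br0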

Pods:

$ kubectl -n kube-system get pod -o wide | grep dns
coredns-76f75df574-6clx8   1/1   Running   0  6m6s    2001::101   k8s-worker
coredns-76f75df574-rw7s4   1/1   Running   0  3m35s   2001::102   k8s-control-plane

In the network namespace of the second pod:

$ ip -c addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether d2:13:de:98:54:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 2001::102/112 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::d013:deff:fe98:5402/64 scope link
       valid_lft forever preferred_lft forever
$ ping6 -c1 -w1 2001::101
PING 2001::101(2001::101) 56 data bytes
64 bytes from 2001::101: icmp_seq=1 ttl=64 time=0.295 ms

--- 2001::101 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms
$ ip -c -6 neighbor
2001::101 dev eth0 lladdr 96:ff:ef:fb:e4:0a REACHABLE
fe80::94ff:efff:fefb:e40a dev eth0 lladdr 96:ff:ef:fb:e4:0a DELAY
2001::1 dev eth0 lladdr 6a:cf:4c:09:f8:cd router REACHABLE

tcpdump result in the same pod:

$ tcpdump -i eth0 -nnve icmp6
tcpdump: listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
03:00:00.055678 d2:13:de:98:54:02 > 33:33:ff:00:01:01, ethertype IPv6 (0x86dd), length 86: (hlim 255, next-header ICMPv6 (58) payload length: 32) 2001::102 > ff02::1:ff00:101: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001::101
          source link-address option (1), length 8 (1): d2:13:de:98:54:02
03:00:00.055703 d2:13:de:98:54:02 > 33:33:ff:00:01:01, ethertype IPv6 (0x86dd), length 86: (hlim 255, next-header ICMPv6 (58) payload length: 32) 2001::102 > ff02::1:ff00:101: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001::101
          source link-address option (1), length 8 (1): d2:13:de:98:54:02
03:00:00.055756 96:ff:ef:fb:e4:0a > d2:13:de:98:54:02, ethertype IPv6 (0x86dd), length 86: (hlim 255, next-header ICMPv6 (58) payload length: 32) 2001::101 > 2001::102: [icmp6 sum ok] ICMP6, neighbor advertisement, length 32, tgt is 2001::101, Flags [solicited, override]
          destination link-address option (2), length 8 (1): 96:ff:ef:fb:e4:0a
03:00:00.055759 d2:13:de:98:54:02 > 96:ff:ef:fb:e4:0a, ethertype IPv6 (0x86dd), length 118: (flowlabel 0x3cfc6, hlim 64, next-header ICMPv6 (58) payload length: 64) 2001::102 > 2001::101: [icmp6 sum ok] ICMP6, echo request, id 28703, seq 1
03:00:00.055952 96:ff:ef:fb:e4:0a > d2:13:de:98:54:02, ethertype IPv6 (0x86dd), length 118: (flowlabel 0x99b28, hlim 64, next-header ICMPv6 (58) payload length: 64) 2001::101 > 2001::102: [icmp6 sum ok] ICMP6, echo reply, id 28703, seq 1
03:00:04.844632 d2:13:de:98:54:02 > 33:33:00:00:00:02, ethertype IPv6 (0x86dd), length 70: (hlim 255, next-header ICMPv6 (58) payload length: 16) fe80::d013:deff:fe98:5402 > ff02::2: [icmp6 sum ok] ICMP6, router solicitation, length 16
          source link-address option (1), length 8 (1): d2:13:de:98:54:02
03:00:04.844683 d2:13:de:98:54:02 > 33:33:00:00:00:02, ethertype IPv6 (0x86dd), length 70: (hlim 255, next-header ICMPv6 (58) payload length: 16) fe80::d013:deff:fe98:5402 > ff02::2: [icmp6 sum ok] ICMP6, router solicitation, length 16
          source link-address option (1), length 8 (1): d2:13:de:98:54:02
03:00:05.104531 96:ff:ef:fb:e4:0a > d2:13:de:98:54:02, ethertype IPv6 (0x86dd), length 86: (hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::94ff:efff:fefb:e40a > 2001::102: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001::102
          source link-address option (1), length 8 (1): 96:ff:ef:fb:e4:0a
03:00:05.104547 d2:13:de:98:54:02 > 96:ff:ef:fb:e4:0a, ethertype IPv6 (0x86dd), length 78: (hlim 255, next-header ICMPv6 (58) payload length: 24) 2001::102 > fe80::94ff:efff:fefb:e40a: [icmp6 sum ok] ICMP6, neighbor advertisement, length 24, tgt is 2001::102, Flags [solicited]
03:00:10.220209 d2:13:de:98:54:02 > 96:ff:ef:fb:e4:0a, ethertype IPv6 (0x86dd), length 86: (hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::d013:deff:fe98:5402 > fe80::94ff:efff:fefb:e40a: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has fe80::94ff:efff:fefb:e40a
          source link-address option (1), length 8 (1): d2:13:de:98:54:02
03:00:10.220269 96:ff:ef:fb:e4:0a > d2:13:de:98:54:02, ethertype IPv6 (0x86dd), length 78: (hlim 255, next-header ICMPv6 (58) payload length: 24) fe80::94ff:efff:fefb:e40a > fe80::d013:deff:fe98:5402: [icmp6 sum ok] ICMP6, neighbor advertisement, length 24, tgt is fe80::94ff:efff:fefb:e40a, Flags [solicited]
03:00:15.340509 96:ff:ef:fb:e4:0a > d2:13:de:98:54:02, ethertype IPv6 (0x86dd), length 86: (hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::94ff:efff:fefb:e40a > fe80::d013:deff:fe98:5402: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has fe80::d013:deff:fe98:5402
          source link-address option (1), length 8 (1): 96:ff:ef:fb:e4:0a
03:00:15.340524 d2:13:de:98:54:02 > 96:ff:ef:fb:e4:0a, ethertype IPv6 (0x86dd), length 78: (hlim 255, next-header ICMPv6 (58) payload length: 24) fe80::d013:deff:fe98:5402 > fe80::94ff:efff:fefb:e40a: [icmp6 sum ok] ICMP6, neighbor advertisement, length 24, tgt is fe80::d013:deff:fe98:5402, Flags [solicited]
^C
13 packets captured
13 packets received by filter
0 packets dropped by kernel

Scenario 2: bridge + whereabouts + Cilium generic-veth chaining

Cilium config:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cni-configuration
  namespace: kube-system
data:
  cni-config: |-
    {
      "name": "generic-veth",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "br0",
          "isDefaultGateway": true,
          "forceAddress": false,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "whereabouts",
            "range": "2001::/112",
            "exclude": [
              "2001::/120"
            ]
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {
            "portMappings": true
          }
        },
        {
          "type": "cilium-cni",
          "chaining-mode": "generic-veth"
        }
      ]
    }
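
With cni.customConf=true and cni.configMap=cni-configuration (see the helm command in the comment below), the agent takes this chained conflist from the ConfigMap and writes it to the CNI configuration directory on each node. To double-check what actually landed on the node (the mount path inside the agent container is an assumption based on the default chart values):

$ kubectl -n kube-system exec ds/cilium -c cilium-agent -- ls /host/etc/cni/net.d/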

Pods:

$ kubectl -n kube-system get pod -o wide | grep dns
coredns-76f75df574-9js97   1/1   Running   0  6m6s    2001::101   k8s-control-plane
coredns-76f75df574-cfvjn   1/1   Running   0  3m35s   2001::102   k8s-worker

In the network namespace of the second pod:

$ ip -c addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 36:35:0d:d9:ec:7d brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 2001::102/112 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::3435:dff:fed9:ec7d/64 scope link
       valid_lft forever preferred_lft forever
$ ping6 -c1 -w1 2001::101
PING 2001::101(2001::101) 56 data bytes

--- 2001::101 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

$ ip -c -6 neighbor
2001::101 dev eth0 FAILED
2001::1 dev eth0 lladdr ea:e7:76:19:42:fb REACHABLE

tcpdump result in the same pod:

$ tcpdump -i eth0 -nnve icmp6
tcpdump: listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
04:17:56.893378 36:35:0d:d9:ec:7d > 33:33:ff:00:01:01, ethertype IPv6 (0x86dd), length 86: (hlim 255, next-header ICMPv6 (58) payload length: 32) 2001::102 > ff02::1:ff00:101: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001::101
          source link-address option (1), length 8 (1): 36:35:0d:d9:ec:7d
04:17:57.900520 36:35:0d:d9:ec:7d > 33:33:ff:00:01:01, ethertype IPv6 (0x86dd), length 86: (hlim 255, next-header ICMPv6 (58) payload length: 32) 2001::102 > ff02::1:ff00:101: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001::101
          source link-address option (1), length 8 (1): 36:35:0d:d9:ec:7d
04:17:58.924243 36:35:0d:d9:ec:7d > 33:33:ff:00:01:01, ethertype IPv6 (0x86dd), length 86: (hlim 255, next-header ICMPv6 (58) payload length: 32) 2001::102 > ff02::1:ff00:101: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2001::101
          source link-address option (1), length 8 (1): 36:35:0d:d9:ec:7d
^C
3 packets captured
3 packets received by filter
0 packets dropped by kernel
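
So the neighbor solicitations leave the pod but are never answered; with chaining enabled they appear to be consumed by the bpf_lxc program on the host-side veth instead of being forwarded to br0. Two checks that help confirm this (the host-side veth name is node-specific, and the monitor command assumes the cilium CLI shipped in the agent image):

$ tcpdump -i <host-veth> -nnve icmp6
$ kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium monitor --type drop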

I created a simple patch; after applying it, the IPv6 ping works again:

From dcda67f4692634559991202329ce0b380a01f2e5 Mon Sep 17 00:00:00 2001
From: zhangzujian <zhangzujian.7@gmail.com>
Date: Sat, 9 Mar 2024 12:07:00 +0000
Subject: [PATCH] passthrough ndp for generic veth chaining

Signed-off-by: zhangzujian <zhangzujian.7@gmail.com>
---
 api/v1/models/endpoint_datapath_configuration.go         | 3 +++
 api/v1/openapi.yaml                                      | 3 +++
 api/v1/server/embedded_spec.go                           | 8 ++++++++
 bpf/bpf_lxc.c                                            | 2 ++
 bpf/lib/icmp6.h                                          | 4 ++++
 pkg/datapath/linux/config/config.go                      | 4 ++++
 pkg/datapath/types/config.go                             | 4 ++++
 pkg/endpoint/bpf.go                                      | 6 ++++++
 pkg/endpoint/cache.go                                    | 8 ++++++++
 pkg/testutils/endpoint.go                                | 1 +
 plugins/cilium-cni/chaining/generic-veth/generic-veth.go | 4 ++++
 11 files changed, 47 insertions(+)

diff --git a/api/v1/models/endpoint_datapath_configuration.go b/api/v1/models/endpoint_datapath_configuration.go
index 0fa6bd50b4f6..646699aeff5e 100644
--- a/api/v1/models/endpoint_datapath_configuration.go
+++ b/api/v1/models/endpoint_datapath_configuration.go
@@ -35,6 +35,9 @@ type EndpointDatapathConfiguration struct {
    // Enable ARP passthrough mode
    RequireArpPassthrough bool `json:"require-arp-passthrough,omitempty"`

+   // Enable NDP passthrough mode
+   RequireNdpPassthrough bool `json:"require-ndp-passthrough,omitempty"`
+
    // Endpoint requires a host-facing egress program to be attached to implement ingress policy and reverse NAT.
    //
    RequireEgressProg bool `json:"require-egress-prog,omitempty"`
diff --git a/api/v1/openapi.yaml b/api/v1/openapi.yaml
index 3145db4280cf..5e596e3a22d4 100644
--- a/api/v1/openapi.yaml
+++ b/api/v1/openapi.yaml
@@ -1558,6 +1558,9 @@ definitions:
       require-arp-passthrough:
         description: Enable ARP passthrough mode
         type: boolean
+      require-ndp-passthrough:
+        description: Enable NDP passthrough mode
+        type: boolean
       require-egress-prog:
         description: >
           Endpoint requires a host-facing egress program to be attached to
diff --git a/api/v1/server/embedded_spec.go b/api/v1/server/embedded_spec.go
index d20e7d51294a..b6ef94925b12 100644
--- a/api/v1/server/embedded_spec.go
+++ b/api/v1/server/embedded_spec.go
@@ -3009,6 +3009,10 @@ func init() {
           "description": "Enable ARP passthrough mode",
           "type": "boolean"
         },
+        "require-ndp-passthrough": {
+          "description": "Enable NDP passthrough mode",
+          "type": "boolean"
+        },
         "require-egress-prog": {
           "description": "Endpoint requires a host-facing egress program to be attached to implement ingress policy and reverse NAT.\n",
           "type": "boolean"
@@ -8873,6 +8877,10 @@ func init() {
           "description": "Enable ARP passthrough mode",
           "type": "boolean"
         },
+        "require-ndp-passthrough": {
+          "description": "Enable NDP passthrough mode",
+          "type": "boolean"
+        },
         "require-egress-prog": {
           "description": "Endpoint requires a host-facing egress program to be attached to implement ingress policy and reverse NAT.\n",
           "type": "boolean"
diff --git a/bpf/bpf_lxc.c b/bpf/bpf_lxc.c
index 722cd4631ac8..a6478fb897f6 100644
--- a/bpf/bpf_lxc.c
+++ b/bpf/bpf_lxc.c
@@ -769,11 +769,13 @@ static __always_inline int __tail_handle_ipv6(struct __ctx_buff *ctx,
    if (!revalidate_data_pull(ctx, &data, &data_end, &ip6))
        return DROP_INVALID;

+#ifndef ENABLE_NDP_PASSTHROUGH
    /* Handle special ICMPv6 NDP messages, and all remaining packets
     * are subjected to forwarding into the container.
     */
    if (unlikely(is_icmp6_ndp(ctx, ip6, ETH_HLEN)))
        return icmp6_ndp_handle(ctx, ETH_HLEN, METRIC_EGRESS);
+#endif

    if (unlikely(!is_valid_lxc_src_ip(ip6)))
        return DROP_INVALID_SIP;
diff --git a/bpf/lib/icmp6.h b/bpf/lib/icmp6.h
index d1d8f401b208..ec0f503d8bea 100644
--- a/bpf/lib/icmp6.h
+++ b/bpf/lib/icmp6.h
@@ -416,7 +416,11 @@ icmp6_host_handle(struct __ctx_buff *ctx, int l4_off, bool handle_ns)
        return DROP_INVALID;

    if (type == ICMP6_NS_MSG_TYPE && handle_ns)
+#ifdef ENABLE_NDP_PASSTHROUGH
+       return CTX_ACT_OK;
+#else
        return icmp6_handle_ns(ctx, ETH_HLEN, METRIC_INGRESS);
+#endif

 #ifdef ENABLE_HOST_FIREWALL
    /* When the host firewall is enabled, we drop and allow ICMPv6 messages
diff --git a/pkg/datapath/linux/config/config.go b/pkg/datapath/linux/config/config.go
index a8048f998e72..8ed6ffd1e712 100644
--- a/pkg/datapath/linux/config/config.go
+++ b/pkg/datapath/linux/config/config.go
@@ -1075,6 +1075,10 @@ func (h *HeaderfileWriter) writeTemplateConfig(fw *bufio.Writer, e datapath.Endp
        fmt.Fprint(fw, "#define ENABLE_ARP_RESPONDER 1\n")
    }

+   if e.RequireNDPPassthrough() {
+       fmt.Fprint(fw, "#define ENABLE_NDP_PASSTHROUGH 1\n")
+   }
+
    if e.ConntrackLocalLocked() {
        ctmap.WriteBPFMacros(fw, e)
    } else {
diff --git a/pkg/datapath/types/config.go b/pkg/datapath/types/config.go
index 02da185572e9..a41ae5cf66ca 100644
--- a/pkg/datapath/types/config.go
+++ b/pkg/datapath/types/config.go
@@ -55,6 +55,10 @@ type CompileTimeConfiguration interface {
    // ARP passthrough for this endpoint
    RequireARPPassthrough() bool

+   // RequireNDPPassthrough returns true if the datapath must implement
+   // NDP passthrough for this endpoint
+   RequireNDPPassthrough() bool
+
    // RequireEgressProg returns true if the endpoint requires an egress
    // program attached to the InterfaceName() invoking the section
    // "to-container"
diff --git a/pkg/endpoint/bpf.go b/pkg/endpoint/bpf.go
index b5c41010e3c4..f384d89f2df4 100644
--- a/pkg/endpoint/bpf.go
+++ b/pkg/endpoint/bpf.go
@@ -1455,6 +1455,12 @@ func (e *Endpoint) RequireARPPassthrough() bool {
    return e.DatapathConfiguration.RequireArpPassthrough
 }

+// RequireNDPPassthrough returns true if the datapath must implement NDP
+// passthrough for this endpoint
+func (e *Endpoint) RequireNDPPassthrough() bool {
+   return e.DatapathConfiguration.RequireNdpPassthrough
+}
+
 // RequireEgressProg returns true if the endpoint requires bpf_lxc with section
 // "to-container" to be attached at egress on the host facing veth pair
 func (e *Endpoint) RequireEgressProg() bool {
diff --git a/pkg/endpoint/cache.go b/pkg/endpoint/cache.go
index fb0e602541f4..2260d8eb635e 100644
--- a/pkg/endpoint/cache.go
+++ b/pkg/endpoint/cache.go
@@ -35,6 +35,7 @@ type epInfoCache struct {
    ipv6                   netip.Addr
    conntrackLocal         bool
    requireARPPassthrough  bool
+   requireNDPPassthrough  bool
    requireEgressProg      bool
    requireRouting         bool
    requireEndpointRoute   bool
@@ -66,6 +67,7 @@ func (e *Endpoint) createEpInfoCache(epdir string) *epInfoCache {
        ipv6:                   e.IPv6Address(),
        conntrackLocal:         e.ConntrackLocalLocked(),
        requireARPPassthrough:  e.RequireARPPassthrough(),
+       requireNDPPassthrough:  e.RequireNDPPassthrough(),
        requireEgressProg:      e.RequireEgressProg(),
        requireRouting:         e.RequireRouting(),
        requireEndpointRoute:   e.RequireEndpointRoute(),
@@ -146,6 +148,12 @@ func (ep *epInfoCache) RequireARPPassthrough() bool {
    return ep.requireARPPassthrough
 }

+// RequireNDPPassthrough returns true if the datapath must implement NDP
+// passthrough for this endpoint
+func (ep *epInfoCache) RequireNDPPassthrough() bool {
+   return ep.requireNDPPassthrough
+}
+
 // RequireEgressProg returns true if the endpoint requires bpf_lxc with section
 // "to-container" to be attached at egress on the host facing veth pair
 func (ep *epInfoCache) RequireEgressProg() bool {
diff --git a/pkg/testutils/endpoint.go b/pkg/testutils/endpoint.go
index dea91e0a2e7f..afbbb0c069cc 100644
--- a/pkg/testutils/endpoint.go
+++ b/pkg/testutils/endpoint.go
@@ -53,6 +53,7 @@ func NewTestHostEndpoint() TestEndpoint {

 func (e *TestEndpoint) ConntrackLocalLocked() bool                  { return false }
 func (e *TestEndpoint) RequireARPPassthrough() bool                 { return false }
+func (e *TestEndpoint) RequireNDPPassthrough() bool                 { return false }
 func (e *TestEndpoint) RequireEgressProg() bool                     { return false }
 func (e *TestEndpoint) RequireRouting() bool                        { return false }
 func (e *TestEndpoint) RequireEndpointRoute() bool                  { return false }
diff --git a/plugins/cilium-cni/chaining/generic-veth/generic-veth.go b/plugins/cilium-cni/chaining/generic-veth/generic-veth.go
index c1f8ded83815..c0551c7a1c5b 100644
--- a/plugins/cilium-cni/chaining/generic-veth/generic-veth.go
+++ b/plugins/cilium-cni/chaining/generic-veth/generic-veth.go
@@ -197,6 +197,10 @@ func (f *GenericVethChainer) Add(ctx context.Context, pluginCtx chainingapi.Plug
            // the pod
            RequireArpPassthrough: true,

+           // The master CNI plugin requires NDP passthrough between Linux and
+           // the pod
+           RequireNdpPassthrough: true,
+
            // The route is pointing directly into the veth of the
            // pod, install a host-facing egress program to
            // implement ingress policy and to provide reverse NAT
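
After rebuilding the agent image with this patch and redeploying, the new define should show up in the per-endpoint headers generated by writeTemplateConfig, which is an easy way to confirm the flag actually propagated from the chainer to the datapath (state directory path assumed from a default installation):

$ kubectl -n kube-system exec ds/cilium -c cilium-agent -- \
    sh -c 'grep -r ENABLE_NDP_PASSTHROUGH /var/run/cilium/state/'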
zhangzujian commented 2 months ago

Cilium installation command:

helm install cilium cilium/cilium --wait \
        --version 1.15.4 \
        --namespace kube-system \
        --set k8sServiceHost=<API_HOST> \
        --set k8sServicePort=<API_PORT> \
        --set kubeProxyReplacement=partial \
        --set operator.replicas=1 \
        --set socketLB.enabled=true \
        --set nodePort.enabled=true \
        --set externalIPs.enabled=true \
        --set hostPort.enabled=false \
        --set routingMode=native \
        --set sessionAffinity=true \
        --set enableIPv4Masquerade=false \
        --set enableIPv6Masquerade=false \
        --set hubble.enabled=true \
        --set sctp.enabled=true \
        --set ipv4.enabled=false \
        --set ipv6.enabled=true \
        --set ipam.mode=cluster-pool \
        --set-json ipam.operator.clusterPoolIPv4PodCIDRList='["100.65.0.0/16"]' \
        --set-json ipam.operator.clusterPoolIPv6PodCIDRList='["2001::/112"]' \
        --set cni.chainingMode=generic-veth \
        --set cni.chainingTarget=bridge \
        --set cni.customConf=true \
        --set cni.configMap=cni-configuration
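
To verify that the chaining configuration took effect (the ConfigMap key and the status field are assumptions based on the chart defaults):

$ kubectl -n kube-system get configmap cilium-config -o yaml | grep -i chain
$ kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium status | grep -i chaining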
github-actions[bot] commented 2 weeks ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.