khuedoan / homelab

Fully automated homelab from empty disk to running services with a single command.
https://homelab.khuedoan.com
GNU General Public License v3.0

Adding a second dedicated network interface for longhorn replication #135

Open sushyad opened 5 months ago

sushyad commented 5 months ago

I am trying to add a second network interface dedicated to Longhorn replication, using the Multus CNI plugin together with ipvlan. Here is my PR from my fork to give you an idea of what I am trying to do: https://github.com/khuedoan/homelab/pull/134

I was able to tweak the argocd recipe to deploy Multus and an ipvlan network attachment definition.
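
For context, a minimal ipvlan NetworkAttachmentDefinition along the lines of what I'm trying looks roughly like this (the master interface eth1 and the 192.168.200.0/24 range are just placeholders for my setup):

cat <<EOF | kubectl apply -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: multus-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "ipvlan",
      "master": "eth1",
      "mode": "l2",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.200.0/24",
        "rangeStart": "192.168.200.100",
        "rangeEnd": "192.168.200.200"
      }
    }'
EOF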

When I create a test pod with two network interfaces, it doesn't work: the second interface doesn't show up as expected.

cat <<EOF | kubectl apply -f - 
apiVersion: v1
kind: Pod
metadata:
  name: app1
  annotations:
    k8s.v1.cni.cncf.io/networks: multus-conf
spec:
  containers:
  - name: app1
    command: ["/bin/sh", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine
EOF
kubectl describe pod app1

gives

bash-5.2# kubectl describe pod app1 
Name:             app1
Namespace:        default
Priority:         0
Service Account:  default
Node:             metal0/192.168.0.115
Start Time:       Fri, 26 Jan 2024 19:49:49 +0000
Labels:           <none>
Annotations:      k8s.v1.cni.cncf.io/networks: multus-conf
Status:           Running
IP:               10.0.0.176
IPs:
  IP:  10.0.0.176
Containers:
......

instead of something like this:

$ kubectl describe pod app1
Name:             app1
Namespace:        default
Priority:         0
Service Account:  default
Node:             node2/192.168.200.175
Start Time:       Fri, 11 Aug 2023 12:28:56 +0300
Labels:           <none>
Annotations:      k8s.v1.cni.cncf.io/network-status:
                    [{
                        "name": "mynet",
                        "interface": "eth0",
                        "ips": [
                            "10.244.2.8"
                        ],
                        "mac": "86:69:28:4f:54:b3",
                        "default": true,
                        "dns": {},
                        "gateway": [
                            "10.244.2.1"
                        ]
                    },{
                        "name": "default/multus-conf",
                        "interface": "net1",
                        "ips": [
                            "192.168.200.100"
                        ],
                        "mac": "2a:1b:4d:89:66:c0",
                        "dns": {}
                    }]
                  k8s.v1.cni.cncf.io/networks: multus-conf
Status:           Running
IP:               10.244.2.8
IPs:
  IP:  10.244.2.8
Containers:
.....
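
For reference, the interfaces can also be checked directly from inside the pod; with a working Multus attachment a net1 interface should show up in addition to eth0:

# List the pod's network interfaces to confirm whether net1 was attached
kubectl exec app1 -- ip addr show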

Has anyone tried to do this before?

khuedoan commented 5 months ago

I don't have multiple NICs to reproduce this, but it's probably related to https://github.com/cilium/cilium/issues/23483
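
I haven't tested it (no extra NICs here), but if it's the same root cause (Cilium taking exclusive ownership of the CNI configuration and removing other configs from /etc/cni/net.d), setting cni.exclusive to false in the Cilium Helm values might be worth a try:

# Cilium Helm values (untested sketch)
cni:
  # Don't let Cilium remove other CNI configurations (e.g. Multus) from the node
  exclusive: false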

khuedoan commented 5 months ago

If this feature is important to you, I think you can remove Cilium and use the default k3s CNI (Flannel), which seems to work with Multus.

You can reference commits before https://github.com/khuedoan/homelab/commit/9f0d389abcdabd692bd0fbb3b69e14e8f4c0b491 (install Cilium) and https://github.com/khuedoan/homelab/commit/65af4ff8e681f8750d79712edd2ac6d4c3a567aa (remove MetalLB)

The disadvantage is that you may miss out on some future features that rely on eBPF.
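
I haven't tried reverting myself, but roughly it would mean uninstalling the Cilium chart and dropping the k3s options that disable the built-in Flannel CNI (a sketch using the standard k3s flag names; the exact setup in this repo may differ):

# /etc/rancher/k3s/config.yaml (hypothetical sketch)
# These options are typically only needed when replacing Flannel with Cilium;
# removing them and restarting k3s brings back the default Flannel CNI:
#
#   flannel-backend: none
#   disable-network-policy: true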

pandabear41 commented 5 months ago

I have reproduced this as well. Cilium's features look better on paper, but in practice they fell short for me compared to Flannel or Calico. I reverted to the default k3s CNI with PureLB (for now), with plans to test Calico and its eBPF feature soon.

These are the three major issues I faced:

khuedoan commented 4 months ago

IPv6 has a separate tracking issue https://github.com/khuedoan/homelab/issues/114

For this issue, I'm not sure if there's anything I can do on my end since I don't have or use multiple NICs. As far as I understand, there are two options:

I'll leave this issue open for now in case someone has the same use case, but there's no action for it in this project.