kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

minikube and multus CNI #10664

Closed slowfranklin closed 3 years ago

slowfranklin commented 3 years ago

I can't get multus CNI to work with minikube. I'm new to K8s and minikube, so I'm probably missing something obvious.

I realize that this is not the appropriate place for this kind of support request, but it seems the Slack channel is not accepting new users and the user mailing list is dead. If there's any other place (IRC channel? Matrix?) to ask such questions let me know.

Steps to reproduce the issue:

  1. Start minikube:
    
    minikube start
    πŸ˜„  minikube v1.17.1 on Fedora 33
    ✨  Using the podman (experimental) driver based on existing profile
    πŸ‘  Starting control plane node minikube in cluster minikube
    πŸ”„  Restarting existing podman container for "minikube" ...
    🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...
    πŸ”Ž  Verifying Kubernetes components...
    🌟  Enabled addons: storage-provisioner, default-storageclass

    ❗  /usr/bin/kubectl is version 1.18.2, which may have incompatibilities with Kubernetes 1.20.2.
        β–ͺ Want kubectl v1.20.2? Try 'minikube kubectl -- get pods -A'
    πŸ„  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

  2. Install multus CNI:

    kubectl apply -f images/multus-daemonset.yml

  3. Deploy a NetworkAttachmentDefinition:

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: samba-ad-dc-static-ip-conf-1
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "name": "lan",
        "type": "bridge",
        "bridge": "br-lan",
        "ipam": {
          "type": "static",
          "addresses": [ {"address": "192.168.3.2/24"} ]
        }
      }'

  4. Deploy a stateful-set that references the network definition:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: dc
    spec:
      selector:
        matchLabels:
          app: samba-ad-dc
      serviceName: samba-ad-dc
      replicas: 1
      template:
        metadata:
          labels:
            app: samba-ad-dc
          annotations:
            k8s.v1.cni.cncf.io/networks: samba-ad-dc-static-ip-conf-1
      ...



The running container doesn't have an additional IP.

Is multus actually supposed to work with minikube? Thanks for any pointers!
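For reference, when the multus daemonset installs correctly it should generate a delegating config in `/etc/cni/net.d` on the node that wraps the cluster's existing default CNI, roughly like the sketch below. The exact filename (typically something like `00-multus.conf`) and the delegate contents vary by multus version and cluster; this is illustrative, not taken from my setup:

```json
{
  "cniVersion": "0.3.1",
  "name": "multus-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
  "delegates": [
    {
      "cniVersion": "0.3.1",
      "name": "default-cni",
      "type": "bridge"
    }
  ]
}
```

If no such file appears, multus never became part of the CNI chain and secondary interfaces will not be created.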
sharifelgamal commented 3 years ago

minikube start usually installs a CNI by default, so I suspect installing a new one on top is what's causing the issue.

Try running minikube start --cni=images/multus-daemonset.yml or whatever the path is to your CNI YAML config. Let us know if that helps.

spowelljr commented 3 years ago

Hi @slowfranklin, we haven't heard back from you. Do you still have this issue? There isn't enough information in this issue to make it actionable, and enough time has passed that it is likely difficult to replicate.

I will close this issue for now but feel free to reopen when you feel ready to provide more information.

takumade commented 2 years ago

> minikube start usually installs a CNI by default, so I suspect installing a new one on top is what's causing the issue.
>
> Try running minikube start --cni=images/multus-daemonset.yml or whatever the path is to your CNI YAML config. Let us know if that helps.

It worked!

Mine was: `minikube start --cni=deployments/multus-daemonset.yml`

That didn't work at first, but then I did this:

    $ minikube delete
    $ minikube start --cni=deployments/multus-daemonset.yml

Then I followed these instructions

And I ended up with something like this:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
    link/sit 0.0.0.0 brd 0.0.0.0
4: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether a2:ef:7f:7f:ea:1f brd ff:ff:ff:ff:ff:ff
    inet 10.88.0.3/16 brd 10.88.255.255 scope global eth0
       valid_lft forever preferred_lft forever
5: net1@sit0: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 82:55:34:9f:80:4f brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.200/24 brd 192.168.1.255 scope global net1
       valid_lft forever preferred_lft forever

I am grateful to @sharifelgamal and @slowfranklin.

extde commented 2 years ago

Please re-open. This does not work on Windows / Docker / minikube with multus v3.9.1.

Here are my steps:

  1. Delete minikube for a clean test.
    
    PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> minikube delete
    * Deleting "minikube" in docker ...
    * Deleting container "minikube" ...
    * Removing C:\Users\Dmitriy.Trifonov\.minikube\machines\minikube ...
    * Removed all traces of the "minikube" cluster.
2. Start minikube.

PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> minikube start

3. Check that all pods are running correctly (the default CNI is working).

    PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> kubectl get pods --all-namespaces
    NAMESPACE     NAME                               READY   STATUS    RESTARTS      AGE
    kube-system   coredns-6d4b75cb6d-969q7           1/1     Running   0             67s
    kube-system   etcd-minikube                      1/1     Running   0             80s
    kube-system   kube-apiserver-minikube            1/1     Running   0             80s
    kube-system   kube-controller-manager-minikube   1/1     Running   0             80s
    kube-system   kube-proxy-796zz                   1/1     Running   0             67s
    kube-system   kube-scheduler-minikube            1/1     Running   0             80s
    kube-system   storage-provisioner                1/1     Running   1 (46s ago)   78s

4. Checking the configuration on the minikube node.

ls -la /etc/cni/net.d

    total 16
    drwxr-xr-x 2 root root 4096 Jul 29 17:32 .
    drwxr-xr-x 1 root root 4096 Sep 12 10:56 ..
    -rw-r--r-- 1 root root  438 Nov 11  2021 100-crio-bridge.conf
    -rw-r--r-- 1 root root   54 Nov 11  2021 200-loopback.conf

cat /etc/cni/net.d/100-crio-bridge.conf

    {
      "cniVersion": "0.3.1",
      "name": "crio",
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "routes": [
          { "dst": "0.0.0.0/0" },
          { "dst": "1100:200::1/24" }
        ],
        "ranges": [
          [{ "subnet": "10.85.0.0/16" }],
          [{ "subnet": "1100:200::/24" }]
        ]
      }
    }

ip a

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
        link/ipip 0.0.0.0 brd 0.0.0.0
    3: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
        link/sit 0.0.0.0 brd 0.0.0.0
    4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
        link/ether 02:42:01:e5:14:a6 brd ff:ff:ff:ff:ff:ff
        inet 172.18.0.1/16 brd 172.18.255.255 scope global docker0
           valid_lft forever preferred_lft forever
    6: veth5e73df7@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
        link/ether fe:a6:26:0a:55:fb brd ff:ff:ff:ff:ff:ff link-netnsid 1
    24: eth0@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
        link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
           valid_lft forever preferred_lft forever

Fixed the IPv6 issue:

    sysctl -w net.ipv6.conf.all.disable_ipv6=0
    net.ipv6.conf.all.disable_ipv6 = 0

    sysctl -w net.ipv6.conf.default.disable_ipv6=0
    net.ipv6.conf.default.disable_ipv6 = 0

    sysctl -p
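Note that `sysctl -w` changes are runtime-only and are lost when the minikube node restarts. To persist them, one would typically drop a fragment like the sketch below into `/etc/sysctl.d/` on the node; the filename is an assumption, not part of my steps:

```ini
# /etc/sysctl.d/99-enable-ipv6.conf (assumed filename)
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
```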

5. Deploying multus

    PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> cd .\multus-cni\
    PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test\multus-cni> cat ./deployments/multus-daemonset.yml | kubectl apply -f -
    customresourcedefinition.apiextensions.k8s.io/network-attachment-definitions.k8s.cni.cncf.io created
    clusterrole.rbac.authorization.k8s.io/multus created
    clusterrolebinding.rbac.authorization.k8s.io/multus created
    serviceaccount/multus created
    configmap/multus-cni-config created
    daemonset.apps/kube-multus-ds created


6. Checking that the multus pod is up and running.

    PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test\multus-cni> kubectl get pods --all-namespaces
    NAMESPACE     NAME                               READY   STATUS    RESTARTS      AGE
    kube-system   coredns-6d4b75cb6d-x2722           1/1     Running   0             25m
    kube-system   etcd-minikube                      1/1     Running   0             25m
    kube-system   kube-apiserver-minikube            1/1     Running   0             25m
    kube-system   kube-controller-manager-minikube   1/1     Running   0             25m
    kube-system   kube-multus-ds-5bjgh               1/1     Running   0             110s
    kube-system   kube-proxy-zl7zm                   1/1     Running   0             25m
    kube-system   kube-scheduler-minikube            1/1     Running   0             25m
    kube-system   storage-provisioner                1/1     Running   1 (25m ago)   25m

7. Deploying network attachment.

    PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> cat .\macvlan.yml
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: macvlan-conf
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "type": "macvlan",
        "master": "eth0",
        "mode": "bridge",
        "ipam": {
          "type": "host-local",
          "subnet": "192.168.1.0/24",
          "rangeStart": "192.168.1.200",
          "rangeEnd": "192.168.1.216",
          "routes": [ { "dst": "0.0.0.0/0" } ],
          "gateway": "192.168.1.1"
        }
      }'

    PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> cat .\macvlan.yml | kubectl apply -f -
    networkattachmentdefinition.k8s.cni.cncf.io/macvlan-conf created

    PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> kubectl get network-attachment-definitions
    NAME           AGE
    macvlan-conf   8s

    PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> kubectl describe network-attachment-definitions macvlan-conf
    Name:         macvlan-conf
    Namespace:    default
    Labels:       <none>
    Annotations:  <none>
    API Version:  k8s.cni.cncf.io/v1
    Kind:         NetworkAttachmentDefinition
    Metadata:
      Creation Timestamp:  2022-09-12T13:00:43Z
      Generation:          1
      Managed Fields:
        API Version:  k8s.cni.cncf.io/v1
        Fields Type:  FieldsV1
        fieldsV1:
          f:metadata:
            f:annotations:
              .:
              f:kubectl.kubernetes.io/last-applied-configuration:
          f:spec:
            .:
            f:config:
        Manager:         kubectl-client-side-apply
        Operation:       Update
        Time:            2022-09-12T13:00:43Z
      Resource Version:  3625
      UID:               de34eae9-6ef1-4ca6-be0e-8f64f041a243
    Spec:
      Config:  { "cniVersion": "0.3.1", "type": "macvlan", "master": "eth0", "mode": "bridge", "ipam": { "type": "host-local", "subnet": "192.168.1.0/24", "rangeStart": "192.168.1.200", "rangeEnd": "192.168.1.216", "routes": [ { "dst": "0.0.0.0/0" } ], "gateway": "192.168.1.1" } }
    Events:  <none>


8. Deploying sample pod

    PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> cat .\samplepod.yml
    apiVersion: v1
    kind: Pod
    metadata:
      name: samplepod
      annotations:
        k8s.v1.cni.cncf.io/networks: macvlan-conf
    spec:
      containers:
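The samplepod.yml listing above is cut off after `containers:`. A complete minimal pod in the style of the multus quickstart would look roughly like this; the container name, image, and command below are placeholders, not recovered from the original file:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: samplepod
    # placeholder container; any long-running image works for testing
    image: alpine
    command: ["/bin/sh", "-c", "sleep infinity"]
```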

    PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> cat .\samplepod.yml | kubectl apply -f -
    pod/samplepod created


9. Checking the interface configuration on the sample pod.

    PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> kubectl exec -it samplepod -- ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
        link/ipip 0.0.0.0 brd 0.0.0.0
    3: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
        link/sit 0.0.0.0 brd 0.0.0.0
    7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
        link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
        inet 172.18.0.3/16 brd 172.18.255.255 scope global eth0
           valid_lft forever preferred_lft forever



Result: **No additional adapter!**     

Spent 3 days trying to fix this, working through configuration issues along the way:
1. Replaced flannel in your configs with bridge (see files in the attachment).
2. Fixed some wrong paths.
3. Still no luck.

[multus-test.zip](https://github.com/kubernetes/minikube/files/9554802/multus-test.zip)
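A possible next step for setups like this: with the docker driver, the macvlan `master` interface `eth0` is the minikube container's own NIC, so a macvlan attachment can fail for reasons unrelated to multus. A bridge-type attachment that does not depend on any host interface, sketched below, can help isolate whether multus is being invoked at all (the name, bridge, and subnet are assumptions for illustration):

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-conf   # assumed name
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "bridge",
    "bridge": "br-test",
    "ipam": {
      "type": "host-local",
      "subnet": "10.10.0.0/24"
    }
  }'
```

If a pod annotated with `k8s.v1.cni.cncf.io/networks: bridge-conf` still shows no `net1` interface, multus is not in the node's CNI chain at all.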