Closed: slowfranklin closed this issue 3 years ago.
`minikube start` usually installs a CNI by default, so I suspect installing a new one on top is what's causing the issue. Try running
`minikube start --cni=images/multus-daemonset.yml`
or whatever the path is to your CNI YAML config. Let us know if that helps.
Hi @slowfranklin, we haven't heard back from you; do you still have this issue? There isn't enough information in this issue to make it actionable, and a long enough duration has passed, so this issue is likely difficult to replicate.
I will close this issue for now but feel free to reopen when you feel ready to provide more information.
> `minikube start` usually installs a CNI by default, so I suspect installing a new one on top is what's causing the issue. Try running `minikube start --cni=images/multus-daemonset.yml` or whatever the path is to your CNI YAML config. Let us know if that helps.
Mine was:
minikube start --cni=deployments/multus-daemonset.yml
That didn't work, but then I did this:
$ minikube delete
$ minikube start --cni=deployments/multus-daemonset.yml
Then I followed these instructions
And I ended up with something like this:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
4: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether a2:ef:7f:7f:ea:1f brd ff:ff:ff:ff:ff:ff
inet 10.88.0.3/16 brd 10.88.255.255 scope global eth0
valid_lft forever preferred_lft forever
5: net1@sit0: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 82:55:34:9f:80:4f brd ff:ff:ff:ff:ff:ff
inet 192.168.1.200/24 brd 192.168.1.255 scope global net1
valid_lft forever preferred_lft forever
I am grateful to @sharifelgamal and @slowfranklin.
Please re-open. This does not work on Windows/Docker/minikube with multus v3.9.1.
Here are my steps:
1. Delete the previous cluster.
PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> minikube delete
* Deleting "minikube" in docker ...
* Deleting container "minikube" ...
* Removing C:\Users\Dmitriy.Trifonov\.minikube\machines\minikube ...
* Removed all traces of the "minikube" cluster.
2. Start minikube.
PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> minikube start
3. Checking that all pods are running correctly (the default CNI is working).
PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY   STATUS    RESTARTS      AGE
kube-system   coredns-6d4b75cb6d-969q7           1/1     Running   0             67s
kube-system   etcd-minikube                      1/1     Running   0             80s
kube-system   kube-apiserver-minikube            1/1     Running   0             80s
kube-system   kube-controller-manager-minikube   1/1     Running   0             80s
kube-system   kube-proxy-796zz                   1/1     Running   0             67s
kube-system   kube-scheduler-minikube            1/1     Running   0             80s
kube-system   storage-provisioner                1/1     Running   1 (46s ago)   78s
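As an aside, "all pods Running and fully ready" is exactly the kind of check that can be scripted against pasted `kubectl get pods` output. A minimal Python sketch (the helper name and sample rows are illustrative, not part of the original session):

```python
# Minimal sketch: scan a `kubectl get pods --all-namespaces` table and report
# whether every pod is Running with all containers ready (READY == n/n).
# The helper name is hypothetical; the sample rows mirror the output above.

def all_pods_healthy(table: str) -> bool:
    for line in table.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        ready, status = fields[2], fields[3]
        ready_now, ready_total = ready.split("/")
        if status != "Running" or ready_now != ready_total:
            return False
    return True

table = """\
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d4b75cb6d-969q7   1/1     Running   0          67s
kube-system   etcd-minikube              1/1     Running   0          80s
"""

print(all_pods_healthy(table))  # True: every pod is Running and ready
```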
4. Checking the CNI configuration on the minikube node.
total 16
drwxr-xr-x 2 root root 4096 Jul 29 17:32 .
drwxr-xr-x 1 root root 4096 Sep 12 10:56 ..
-rw-r--r-- 1 root root  438 Nov 11  2021 100-crio-bridge.conf
-rw-r--r-- 1 root root   54 Nov 11  2021 200-loopback.conf
{
  "cniVersion": "0.3.1",
  "name": "crio",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "hairpinMode": true,
  "ipam": {
    "type": "host-local",
    "routes": [
      { "dst": "0.0.0.0/0" },
      { "dst": "1100:200::1/24" }
    ],
    "ranges": [
      [{ "subnet": "10.85.0.0/16" }],
      [{ "subnet": "1100:200::/24" }]
    ]
  }
}
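For reference, the bridge config above is plain CNI JSON, so it can be parsed and inspected directly. A quick sketch that pulls out the IPAM allocation pools (it only restates fields already shown above):

```python
import json

# The default crio bridge CNI config from the node, verbatim.
crio_conf = json.loads("""
{ "cniVersion": "0.3.1", "name": "crio", "type": "bridge", "bridge": "cni0",
  "isGateway": true, "ipMasq": true, "hairpinMode": true,
  "ipam": { "type": "host-local",
            "routes": [ { "dst": "0.0.0.0/0" }, { "dst": "1100:200::1/24" } ],
            "ranges": [ [{ "subnet": "10.85.0.0/16" }],
                        [{ "subnet": "1100:200::/24" }] ] } }
""")

# Each entry in "ranges" is itself a list of range objects (one allocation pool),
# hence the r[0] indexing below.
subnets = [r[0]["subnet"] for r in crio_conf["ipam"]["ranges"]]
print(subnets)  # ['10.85.0.0/16', '1100:200::/24']
```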
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: tunl0@NONE:
Fixed the IPv6 issue:
sysctl -w net.ipv6.conf.all.disable_ipv6=0
net.ipv6.conf.all.disable_ipv6 = 0
sysctl -w net.ipv6.conf.default.disable_ipv6=0
net.ipv6.conf.default.disable_ipv6 = 0
sysctl -p
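The `sysctl` output above follows a simple `key = value` line format. A small hedged helper (hypothetical, just for illustration) that parses such lines and confirms IPv6 ended up enabled (`disable_ipv6 == 0`):

```python
# Hypothetical helper: parse `sysctl` "key = value" output into a dict.

def parse_sysctl(output: str) -> dict:
    settings = {}
    for line in output.strip().splitlines():
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

# The two lines printed by the sysctl -w commands above.
output = """\
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
"""
settings = parse_sysctl(output)
print(settings["net.ipv6.conf.all.disable_ipv6"])  # 0
```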
5. Deploying multus
PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> cd .\multus-cni\
PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test\multus-cni> cat ./deployments/multus-daemonset.yml | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/network-attachment-definitions.k8s.cni.cncf.io created
clusterrole.rbac.authorization.k8s.io/multus created
clusterrolebinding.rbac.authorization.k8s.io/multus created
serviceaccount/multus created
configmap/multus-cni-config created
daemonset.apps/kube-multus-ds created
6. Checking that the multus pod is up and running.
PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test\multus-cni> kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY   STATUS    RESTARTS      AGE
kube-system   coredns-6d4b75cb6d-x2722           1/1     Running   0             25m
kube-system   etcd-minikube                      1/1     Running   0             25m
kube-system   kube-apiserver-minikube            1/1     Running   0             25m
kube-system   kube-controller-manager-minikube   1/1     Running   0             25m
kube-system   kube-multus-ds-5bjgh               1/1     Running   0             110s
kube-system   kube-proxy-zl7zm                   1/1     Running   0             25m
kube-system   kube-scheduler-minikube            1/1     Running   0             25m
kube-system   storage-provisioner                1/1     Running   1 (25m ago)   25m
7. Deploying network attachment.
PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> cat .\macvlan.yml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",
        "rangeStart": "192.168.1.200",
        "rangeEnd": "192.168.1.216",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "gateway": "192.168.1.1"
      }
    }'
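The host-local IPAM block in that attachment hands out addresses between `rangeStart` and `rangeEnd`. A small sketch with Python's `ipaddress` module showing the size of that pool (assuming both bounds are inclusive, which is how host-local documents them):

```python
import ipaddress

# The host-local IPAM pool from macvlan-conf above:
# 192.168.1.200 - 192.168.1.216 inside 192.168.1.0/24.
subnet = ipaddress.ip_network("192.168.1.0/24")
start = ipaddress.ip_address("192.168.1.200")
end = ipaddress.ip_address("192.168.1.216")

# Both bounds must fall inside the declared subnet.
assert start in subnet and end in subnet

pool_size = int(end) - int(start) + 1
print(pool_size)  # 17 assignable addresses
```

This also explains why the working run earlier got `192.168.1.200` on `net1`: it is the first address in the pool.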
PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> cat .\macvlan.yml | kubectl apply -f - networkattachmentdefinition.k8s.cni.cncf.io/macvlan-conf created
PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> kubectl get network-attachment-definitions NAME AGE macvlan-conf 8s
PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> kubectl describe network-attachment-definitions macvlan-conf
Name: macvlan-conf
Namespace: default
Labels:
8. Deploying sample pod
PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> cat .\samplepod.yml
apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> cat .\samplepod.yml | kubectl apply -f - pod/samplepod created
9. Checking interface configuration on sample pod.
PS C:\Users\Dmitriy.Trifonov\Desktop\private\corporate\architecture\k8s\multus-test> kubectl exec -it samplepod -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: tunl0@NONE:
Result: **No additional adapter!**
I spent 3 days trying to fix this, fixing configuration issues along the way:
1. Replaced flannel in your configs with bridge (see the files in the attachment).
2. Fixed some wrong paths.
3. Still no luck.
[multus-test.zip](https://github.com/kubernetes/minikube/files/9554802/multus-test.zip)
I can't get the multus CNI to work with minikube. I'm new to Kubernetes and minikube, so I'm probably missing something obvious.
I realize that this is not the appropriate place for this kind of support request, but it seems the Slack channel is not accepting new users and the user mailing list is dead. If there's any other place (IRC channel? Matrix?) to ask such questions let me know.
Steps to reproduce the issue:
❗ /usr/bin/kubectl is version 1.18.2, which may have incompatibilities with Kubernetes 1.20.2.
    ▪ Want kubectl v1.20.2? Try 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: samba-ad-dc-static-ip-conf-1
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "lan",
      "type": "bridge",
      "bridge": "br-lan",
      "ipam": {
        "type": "static",
        "addresses": [
          {"address": "192.168.3.2/24"}
        ]
      }
    }'
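As a sanity check on that static IPAM config, a short Python sketch verifying that `192.168.3.2/24` is a usable host address in the `/24` the bridge network implies:

```python
import ipaddress

# The static address from samba-ad-dc-static-ip-conf-1 above.
iface = ipaddress.ip_interface("192.168.3.2/24")

print(iface.network)  # 192.168.3.0/24: the subnet implied by the /24 prefix
print(iface.ip)       # 192.168.3.2

# Not the network or broadcast address, so valid for a host.
assert iface.ip not in (iface.network.network_address,
                        iface.network.broadcast_address)
```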
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dc
spec:
  selector:
    matchLabels:
      app: samba-ad-dc
  serviceName: samba-ad-dc
  replicas: 1
  template:
    metadata:
      labels:
        app: samba-ad-dc
      annotations:
        k8s.v1.cni.cncf.io/networks: samba-ad-dc-static-ip-conf-1
...