canonical / microk8s

MicroK8s is a small, fast, single-package Kubernetes for datacenters and the edge.
https://microk8s.io
Apache License 2.0

How to change an IP range #276

Open niwashing opened 5 years ago

niwashing commented 5 years ago

The IP ranges 10.1.1.0/24 and 10.152.183.0/24 are used for the cluster and pods by default. How can I change the cluster IP and the IP range assigned to nodes?

ktsakalozos commented 5 years ago

Hi @niwashing

These two IP ranges are configured in a couple places:

10.152.183.0/24:

10.1.1.0/24:

I hope I am not missing anything. Remember to stop/start MicroK8s after you update any of those arguments.
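One quick way to locate every file that still carries these defaults (a hedged sketch; the args directory is the one referenced later in this thread):

    grep -rn "10.152.183" /var/snap/microk8s/current/args/
    grep -rn "10.1.1" /var/snap/microk8s/current/args/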

Thank you for using MicroK8s

niwashing commented 5 years ago

@ktsakalozos

Thank you! I was able to change the IP range, but microk8s.enable dns still throws the following error:

$ microk8s.enable dns
Enabling DNS
Applying manifest
serviceaccount/kube-dns unchanged
configmap/kube-dns unchanged
deployment.extensions/kube-dns configured
The Service "kube-dns" is invalid: spec.clusterIP: Invalid value: "10.152.183.10": provided IP is not in the valid range. The range of valid IPs is 172.22.183.0/24
Failed to enable dns

I found that /snap/microk8s/354/actions/dns.yaml and /snap/microk8s/354/actions/enable.dns.sh still include the default IP range, but they cannot be changed because the snap is mounted read-only.

Do I have to build microk8s from source by using snapcraft?

ktsakalozos commented 5 years ago

You have the option to recompile MicroK8s and produce your own .snap file.

I suspect that after microk8s.enable dns you can also microk8s.kubectl edit the part of the dns manifest that is failing.

niwashing commented 5 years ago

I suspect that after microk8s.enable dns you can also microk8s.kubectl edit the part of the dns manifest that is failing.

Sorry, I'm not very familiar with Kubernetes, but isn't kubectl edit unavailable because kube-dns has not been deployed yet due to the IP range error?

I would appreciate it if you could show me the complete commands.

ktsakalozos commented 5 years ago

Sure, here is what I have:

> microk8s.kubectl get all --all-namespaces
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   pod/kube-dns-6ccd496668-qx5m4   3/3     Running   0          41s

NAMESPACE     NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       service/kubernetes   ClusterIP   10.152.183.1    <none>        443/TCP         72s
kube-system   service/kube-dns     ClusterIP   10.152.183.10   <none>        53/UDP,53/TCP   41s

NAMESPACE     NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/kube-dns   1/1     1            1           41s

NAMESPACE     NAME                                  DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/kube-dns-6ccd496668   1         1         1       42s

At this point I have kube-dns running; in your case it should be failing. I suspect you can go and edit the kube-dns service clusterIP with:

microk8s.kubectl edit -n kube-system service/kube-dns

If this does not work, you will need to download and edit this file https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/actions/dns.yaml and then run microk8s.kubectl apply -f ./dns.yaml
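A sketch of that fallback, assuming the raw URL follows GitHub's usual raw.githubusercontent.com pattern and reusing the 172.22.183.0/24 range from the error above (172.22.183.10 is an assumed free IP in that range):

    wget https://raw.githubusercontent.com/ubuntu/microk8s/master/microk8s-resources/actions/dns.yaml
    # move the kube-dns clusterIP into the new service range
    sed -i 's|10.152.183.10|172.22.183.10|' dns.yaml
    microk8s.kubectl apply -f ./dns.yaml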

fquinner commented 5 years ago

I can confirm I just hit this too, because I didn't enable dns before adding pods, so there was an IP conflict. I can also confirm that applying the dns.yaml definition was required, since dns never started.

AdamIsrael commented 5 years ago

What happens when the microk8s snap refreshes? Since the files to be edited are in /var/snap/microk8s/current/, I suspect the changes will revert to the defaults.

What about storing the subnet to use in /var/snap/microk8s/common, and modifying the configuration to get the value from that file?

ktsakalozos commented 5 years ago

@AdamIsrael I do not think that is what happens during a refresh. ${SNAP_DATA} is backed up and could be reverted (unlike ${SNAP_COMMON}), but the contents are preserved. If you configure your daemons in a specific way we respect your configuration; we do not overwrite it with the defaults. Have a look at https://forum.snapcraft.io/t/proper-way-to-simulate-a-snap-refresh-release/5565 for how you can simulate a refresh and check this yourself.

AdamIsrael commented 5 years ago

@ktsakalozos That's good to know, thanks!

I still think having the subnet defined in a single location would be better. Perhaps also a command for changing it to a specific or random subnet?

The specific use-case I have is that I may install microk8s into lxd, and thus have multiple microk8s instances on a host. I can create new networks in lxd for each, but I then need each microk8s to use the appropriate network.

evilhamsterman commented 4 years ago

I agree with @AdamIsrael. I started playing with K8s and decided that, since we are an Ubuntu shop, microk8s would be a great way to get started. However, we also happen to use the subnet 10.1.1.0/24, so the cbr0 interface caused issues for me accessing portions of our network.

camille-rodriguez commented 4 years ago

Hello everyone, @ktsakalozos, I followed these steps to deploy microk8s on a different range (simply 10.152.182.0/24). However, I ran into a similar issue when trying to enable istio. It tries to re-deploy kube-dns (even though it is already deployed) and gets stuck on the clusterIP spec again.

$ microk8s.kubectl get all --all-namespaces
NAMESPACE            NAME                                        READY   STATUS             RESTARTS   AGE
container-registry   pod/registry-d7d7c8bc9-g86qw                0/1     Pending            0          15h
kube-system          pod/coredns-9b8997588-v2hs6                 0/1     Running            3          16h
kube-system          pod/hostpath-provisioner-7b9cb5cdb4-c2z2l   0/1     CrashLoopBackOff   22         15h
kube-system          pod/kube-dns-579bd8fb8d-gh2m6               0/3     InvalidImageName   0          15h

NAMESPACE            NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
container-registry   service/registry     NodePort    10.152.182.65   <none>        5000:32000/TCP   15h
default              service/kubernetes   ClusterIP   10.152.182.1    <none>        443/TCP          17h
kube-system          service/kube-dns     ClusterIP   10.152.182.10   <none>        53/UDP,53/TCP    15h

NAMESPACE            NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
container-registry   deployment.apps/registry               0/1     1            0           15h
kube-system          deployment.apps/coredns                0/1     1            0           16h
kube-system          deployment.apps/hostpath-provisioner   0/1     1            0           15h
kube-system          deployment.apps/kube-dns               0/1     1            0           15h

NAMESPACE            NAME                                              DESIRED   CURRENT   READY   AGE
container-registry   replicaset.apps/registry-d7d7c8bc9                1         1         0       15h
kube-system          replicaset.apps/coredns-9b8997588                 1         1         0       16h
kube-system          replicaset.apps/hostpath-provisioner-7b9cb5cdb4   1         1         0       15h
kube-system          replicaset.apps/kube-dns-579bd8fb8d               1         1         0       15h

$ microk8s.enable istio
Enabling Istio
Enabling DNS
Applying manifest
serviceaccount/coredns unchanged
configmap/coredns unchanged
deployment.apps/coredns unchanged
clusterrole.rbac.authorization.k8s.io/coredns unchanged
clusterrolebinding.rbac.authorization.k8s.io/coredns unchanged
The Service "kube-dns" is invalid: spec.clusterIP: Invalid value: "10.152.183.10": field is immutable
Failed to enable dns
Failed to enable istio

Could you recommend what I should do with this, or which files I should download and edit? Thank you!

sadoMasupilami commented 4 years ago

Hello, this problem is still relevant, now with flannel. We do a lot of work from home (using a VPN) and many of our internal services run in 10.1.0.0/16.

Is there any way to change the addresses with flannel? I tried to change /var/snap/microk8s/current/args/flannel-network-mgr-config, and after a reset and reboot the new range is used. But sometimes it just stops working, as the flannel daemon cannot start with:

Jul 02 09:50:48 xxx microk8s.daemon-flanneld[1747]: error #0: dial tcp: lookup none on 127.0.0.53:53: server misbehaving
Jul 02 09:50:53 xxx microk8s.daemon-flanneld[2174]: Error: dial tcp: lookup none on 127.0.0.53:53: server misbehaving

We want to replace minikube with microk8s for local development on Linux, so this would be really important to us.

Thanks for any good tips

strigona-worksight commented 4 years ago

Microk8s 1.16+

Modify:

/var/snap/microk8s/current/args/flannel-network-mgr-config

Change: "10.1.0.0/16" to: "10.8.0.0/16" (or any other range)

Then restart microk8s:

microk8s.stop

microk8s.start
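The same change as a one-liner, a hedged sketch that backs the file up first:

    sudo cp /var/snap/microk8s/current/args/flannel-network-mgr-config{,.bak}
    sudo sed -i 's|10.1.0.0/16|10.8.0.0/16|' /var/snap/microk8s/current/args/flannel-network-mgr-config
    microk8s.stop && microk8s.start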

uGiFarukh commented 4 years ago

Right now, what is the best option to achieve this local IP range change on the latest 1.19 microk8s without breaking any addons or functionality? And is it possible to use custom IP ranges outside of the standard private ranges, for example 1.1.1.1? Or would that conflict with the internet?

uGiFarukh commented 4 years ago

@ktsakalozos any idea on how to achieve this properly without breaking anything?

ktsakalozos commented 4 years ago

@uGiFarukh for 1.19 we have the following:

There are two main IP ranges you may want to change.

  1. The range where cluster IPs are from. By default this is set to 10.152.183.0/24. To change the cluster IP range you need to:

    • Stop all services with microk8s.stop
    • Clean the current datastore and CNI with:
      (cd /var/snap/microk8s/current/var/kubernetes/backend/; rm -v !(cluster.key|cluster.crt) )
      echo "Address: 127.0.0.1:19001" > /var/snap/microk8s/current/var/kubernetes/backend/init.yaml
      rm /var/snap/microk8s/current/args/cni-network/calico-kubeconfig
    • Edit /var/snap/microk8s/current/args/kube-apiserver and update the --service-cluster-ip-range=10.152.183.0/24 argument of the API server.
    • Edit /var/snap/microk8s/current/certs/csr.conf.template and replace IP.2 = 10.152.183.1 with the new IP the kubernetes service will have in the new IP range.
    • If you are also setting up a proxy, update /var/snap/microk8s/current/args/containerd-env with the respective IP ranges.
    • Start all services with microk8s.start
    • Reload the CNI with microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml
    • To enable dns you should not use the packaged addon. Instead you should:
      • make a copy of the dns manifest with cp /snap/microk8s/current/actions/coredns.yaml /tmp/.
      • In this manifest copy, update the clusterIP: 10.152.183.10 to an IP in the new range and replace the $ALLOWESCALATION string with false.
      • Apply the manifest with microk8s kubectl apply -f /tmp/coredns.yaml
      • Add the following two arguments to the kubelet arguments at /var/snap/microk8s/current/args/kubelet:
        --cluster-domain cluster.local
        --cluster-dns <the cluster ip of the dns service you put in the coredns.yaml>
      • Restart MicroK8s with microk8s stop; microk8s start.
  2. The IP range pods get their IPs from. By default this is set to 10.1.0.0/16. To change this IP range you need to:

    • Edit /var/snap/microk8s/current/args/kube-proxy and update the --cluster-cidr=10.1.0.0/16 argument.
    • If you are also setting up a proxy, update /var/snap/microk8s/current/args/containerd-env with the respective IP ranges.
    • Restart MicroK8s with microk8s stop; microk8s start.
    • Edit /var/snap/microk8s/current/args/cni-network/cni.yaml and put the new IP range in:
          - name: CALICO_IPV4POOL_CIDR
            value: "10.1.0.0/16"
    • Apply the above yaml with microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml.
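A quick way to confirm the new ranges took effect after the restart (a hedged sketch, not part of the instructions above):

    microk8s kubectl get svc --all-namespaces            # cluster IPs should fall in the new service range
    microk8s kubectl get pods --all-namespaces -o wide   # pod IPs should fall in the new pod range
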
baritono commented 4 years ago

@ktsakalozos I tried your solution for 1.19 mentioned above, but there are still some issues:

  1. There is a line in /var/snap/microk8s/current/args/cni-network/calico-kubeconfig that says "server: https://[10.152.183.1]:443", and even if I edit it with sudo, it is automatically restored to its original content after I restart microk8s. Does that matter?
  2. I found reference to "10.1.0.0/16" and "10.152.183.0/24" in /var/snap/microk8s/current/args/containerd-env. Should I also update that?
  3. I found reference to "10.1.0.0/16" in /var/snap/microk8s/current/args/flannel-network-mgr-config. Should I also update that?

Thanks a lot!

ktsakalozos commented 4 years ago

@baritono I revised and tested the instructions in the above comments for the service range. Please have another look. To your questions:

  • There is a line in /var/snap/microk8s/current/args/cni-network/calico-kubeconfig that says "server: https://[10.152.183.1]:443", and even if I edit it with sudo, it is automatically restored to its original content after I restart microk8s. Does that matter?

This is now covered by the revised version of the instructions above.

  • I found reference to "10.1.0.0/16" and "10.152.183.0/24" in /var/snap/microk8s/current/args/containerd-env. Should I also update that?

If you are using a proxy you should update this file accordingly.

  • I found reference to "10.1.0.0/16" in /var/snap/microk8s/current/args/flannel-network-mgr-config. Should I also update that?

Flannel is not used in 1.19 anymore. It is here only for backwards compatibility with the non-HA setup.

baritono commented 4 years ago

@ktsakalozos thank you so much! Now microk8s is up and running, and can happily co-exist with my Cisco VPN.

Some follow-up questions:

  1. Now when I microk8s enable dashboard and then microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443, I cannot access the dashboard at http://localhost:10443. I got the following error:
    $ microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443
    Forwarding from 127.0.0.1:10443 -> 8443
    Forwarding from [::1]:10443 -> 8443
    Handling connection for 10443
    Handling connection for 10443
    E0910 12:32:18.674538   29339 portforward.go:400] an error occurred forwarding 10443 -> 8443: error forwarding port 8443 to pod c20523e09aa81f81d8448079efeb79369bc89d9d47b1ecaf29db9126d5544f67, uid : failed to execute portforward in network namespace "/var/run/netns/cni-371b6a26-e533-5db1-0c43-a6bdcaa84643": socat command returns error: exit status 1, stderr: "2020/09/10 12:32:18 socat[29527] E connect(5, AF=2 127.0.0.1:8443, 16): Connection refused\n"
    E0910 12:32:18.675504   29339 portforward.go:400] an error occurred forwarding 10443 -> 8443: error forwarding port 8443 to pod c20523e09aa81f81d8448079efeb79369bc89d9d47b1ecaf29db9126d5544f67, uid : failed to execute portforward in network namespace "/var/run/netns/cni-371b6a26-e533-5db1-0c43-a6bdcaa84643": socat command returns error: exit status 1, stderr: "2020/09/10 12:32:18 socat[29528] E connect(5, AF=2 127.0.0.1:8443, 16): Connection refused\n"
  2. My main goal is to run kubeflow on microk8s. If I microk8s enable kubeflow, it seems to depend on the dns addon. Since I do not want to enable the packaged dns addon, what's the recommended way of enabling kubeflow in this setting? Should I follow the generic "deploying kubeflow on an existing kubernetes cluster" instructions here?

Thank you again!

baritono commented 4 years ago

@ktsakalozos since the dashboard was not working, I disabled it with microk8s disable dashboard and restarted microk8s with microk8s stop; microk8s start.

Now the pods are not healthy. For example, the log from pod calico-kube-controllers-847c8c99d-dg4rj in deployment calico-kube-controllers (namespace kube-system):

2020-09-10 20:43:02.341 [INFO][1] main.go 88: Loaded configuration from environment config=&config.Config{LogLevel:"info", ReconcilerPeriod:"5m", CompactionPeriod:"10m", EnabledControllers:"node", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", HealthEnabled:true, SyncNodeLabels:true, DatastoreType:"kubernetes"}
W0910 20:43:02.343059       1 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2020-09-10 20:43:02.343 [INFO][1] main.go 109: Ensuring Calico datastore is initialized
2020-09-10 20:43:05.399 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation="default" error=Get https://192.168.64.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 192.168.64.1:443: connect: no route to host
2020-09-10 20:43:05.399 [FATAL][1] main.go 114: Failed to initialize Calico datastore error=Get https://192.168.64.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 192.168.64.1:443: connect: no route to host

ktsakalozos commented 4 years ago

@baritono could you attach the microk8s inspect tarball?

BTW who is @ keshavdv ?

baritono commented 4 years ago

Sorry, @ktsakalozos I misspelled your ID! Auto-completion somehow gave me @ keshavdv .

$ microk8s inspect
[sudo] password for haosong: 
Inspecting Certificates
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-apiserver is running
  Service snap.microk8s.daemon-apiserver-kicker is running
  Service snap.microk8s.daemon-proxy is running
  Service snap.microk8s.daemon-kubelet is running
  Service snap.microk8s.daemon-scheduler is running
  Service snap.microk8s.daemon-controller-manager is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy openSSL information to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster

WARNING:  Docker is installed. 
File "/etc/docker/daemon.json" does not exist. 
You should create it and add the following lines: 
{
    "insecure-registries" : ["localhost:32000"] 
}
and then restart docker with: sudo systemctl restart docker
Building the report tarball
  Report tarball is at /var/snap/microk8s/1668/inspection-report-20200911_124340.tar.gz

inspection-report-20200911_124340.tar.gz

Bessonov commented 3 years ago

Wow, that's a lot of steps! Is there any movement to make this configurable (and working)? If it's not on the roadmap, then maybe setting better defaults could resolve most use cases? It seems like 10.1.1.0/24 often leads to a conflict. And I wonder whether /24 is enough, especially for "production-grade Kubernetes". Why not just use 10.152.0.0/16 and 10.153.0.0/16 as defaults? Or a /12. Or look at other projects' defaults, like Rancher's.

Don't forget to upvote the first post to reflect the need for easier configuration.

debu99 commented 3 years ago

I followed the steps and tried to change the pod IP range only, but it is still not working; when I create a new pod, the IP is still 10.1.x.x.

ldaroczi commented 3 years ago

These are the steps that work for me, using addresses from the 172.16.0.0/12 private address range.

My setup

sudo snap install microk8s --classic --channel=1.19
microk8s enable dns helm3 rbac

cat ~/.bash_aliases

alias kubectl='microk8s kubectl'
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"

Configuration:

  1. The range where cluster IPs are from. By default this is set to 10.152.183.0/24. To change the cluster IP range you need to:

    • Stop all services with microk8s stop

    • Clean the current datastore and CNI with:

      (cd /var/snap/microk8s/current/var/kubernetes/backend/; rm -v !(cluster.key|cluster.crt) )
      echo "Address: 127.0.0.1:19001" > /var/snap/microk8s/current/var/kubernetes/backend/init.yaml
      rm /var/snap/microk8s/current/args/cni-network/calico-kubeconfig
    • Edit /var/snap/microk8s/current/args/kube-apiserver with nano and update the API server argument --service-cluster-ip-range=10.152.183.0/24 to --service-cluster-ip-range=172.30.183.0/24.

    • Edit /var/snap/microk8s/current/certs/csr.conf.template with nano and replace IP.2 = 10.152.183.1 with the new IP the kubernetes service will have in the new range: IP.2 = 172.30.183.1.

    • If you are also setting up a proxy, update /var/snap/microk8s/current/args/containerd-env with the respective IP ranges, from:

      # NO_PROXY=10.1.0.0/16,10.152.183.0/24
    • to:

      # NO_PROXY=172.17.0.0/16,172.30.183.0/24
    • Start all services with microk8s start

    • Reload the CNI with kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml

    • To enable dns you should make a copy of the dns manifest with cp /snap/microk8s/current/actions/coredns.yaml /tmp/

    • In this manifest copy (nano /tmp/coredns.yaml), update clusterIP: 10.152.183.10 to an IP in the new range, clusterIP: 172.30.183.10, and replace the $ALLOWESCALATION string with false.

    • Apply the manifest with kubectl apply -f /tmp/coredns.yaml

    • Add/modify the following two arguments in the kubelet arguments at /var/snap/microk8s/current/args/kubelet:

      --cluster-domain cluster.local
      --cluster-dns 172.30.183.10
    • Restart MicroK8s with microk8s stop; microk8s start.

  2. The IP range pods get their IPs from. By default this is set to 10.1.0.0/16. To change this IP range you need to:

    • Edit /var/snap/microk8s/current/args/kube-proxy with nano and update the --cluster-cidr=10.1.0.0/16 argument to --cluster-cidr=172.17.0.0/16.

    • If you are also setting up a proxy, update /var/snap/microk8s/current/args/containerd-env with the respective IP ranges, from:

      # NO_PROXY=10.1.0.0/16,10.152.183.0/24
    • to:

      # NO_PROXY=172.17.0.0/16,172.30.183.0/24
    • Restart MicroK8s with microk8s stop; microk8s start.

    • Edit /var/snap/microk8s/current/args/cni-network/cni.yaml with nano and change the IP range from:

      - name: CALICO_IPV4POOL_CIDR
        value: "10.1.0.0/16"
    • to:

      - name: CALICO_IPV4POOL_CIDR
        value: "172.17.0.0/16"
    • Apply the above yaml with kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml.

    • Restart MicroK8s with microk8s stop; microk8s start.

Calico CTL install

kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml

Configure Calico

calicoctl get ippool -o wide
calicoctl delete pool default-ipv4-ippool
sudo reboot

The default-ipv4-ippool is recreated on reboot with the settings from /var/snap/microk8s/current/args/cni-network/cni.yaml. Verify the pod IPs; they should use the new IP range:

kubectl get pod -o wide --all-namespaces

metabsd commented 3 years ago

It would be really helpful to be able to specify all of this during installation. If I have a setup with several nodes, do I have to repeat all these changes on all the nodes?

shauryagarg2006 commented 3 years ago

I used the above comments to come up with this script:

alias kubectl="microk8s kubectl"
microk8s disable dns
microk8s.stop

# Clean the current datastore and CNI state, keeping the cluster certificates
(cd /var/snap/microk8s/current/var/kubernetes/backend/; rm -v !(cluster.key|cluster.crt) )
echo "Address: 127.0.0.1:19001" > /var/snap/microk8s/current/var/kubernetes/backend/init.yaml
sudo rm /var/snap/microk8s/current/args/cni-network/calico-kubeconfig

# Rewrite the service range (10.152.183.0/24 -> 172.30.183.0/24) everywhere it appears
sed -ie 's|10.152.183.0/24|172.30.183.0/24|g' /var/snap/microk8s/current/args/kube-apiserver
sed -ie 's|10.152.183.1|172.30.183.1|g' /var/snap/microk8s/current/certs/csr.conf.template
sed -ie 's|10.1.0.0/16,10.152.183.0/24|172.17.0.0/16,172.30.183.0/24|g' /var/snap/microk8s/current/args/containerd-env

# Point kubelet at the coredns service IP in the new service range
sed -i "/--cluster-domain .*/d" /var/snap/microk8s/current/args/kubelet
sed -i "/--cluster-dns .*/d" /var/snap/microk8s/current/args/kubelet
echo "--cluster-domain cluster.local" >> /var/snap/microk8s/current/args/kubelet
echo "--cluster-dns 172.30.183.10" >> /var/snap/microk8s/current/args/kubelet

# Rewrite the pod range (10.1.0.0/16 -> 172.17.0.0/16) for kube-proxy and the CNI manifest
sed -ie 's|10.1.0.0/16|172.17.0.0/16|g' /var/snap/microk8s/current/args/kube-proxy
sed -ie 's|10.1.0.0/16|172.17.0.0/16|g' /var/snap/microk8s/current/args/cni-network/cni.yaml

reboot

After Reboot

alias kubectl="microk8s kubectl"
microk8s start

# Re-apply the CNI with the updated pod CIDR and give it time to come up
kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml
sleep 20

# Deploy coredns manually; the packaged dns addon would reapply the default clusterIP
cp /snap/microk8s/current/actions/coredns.yaml /tmp/
sed -ie 's|$ALLOWESCALATION|false|g' /tmp/coredns.yaml
sed -ie 's|10.152.183.10|172.30.183.10|g' /tmp/coredns.yaml
kubectl apply -f /tmp/coredns.yaml
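A quick check afterwards (a hedged sketch; k8s-app=kube-dns is the label the stock MicroK8s coredns manifest is assumed to use):

    kubectl get svc -n kube-system kube-dns                      # CLUSTER-IP should show 172.30.183.10
    kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide
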
nlnjnj commented 3 years ago

[quoting @ktsakalozos's full 1.19 instructions from above]

In microk8s v1.21.0, the coredns ConfigMap must also be updated manually, or the coredns pod keeps restarting.

Replace the forward . $NAMESERVERS string with forward . 8.8.8.8 8.8.4.4.
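As a pair of commands against the copied manifest (a hedged sketch; the single quotes keep $NAMESERVERS literal for sed):

    sed -i 's|forward . $NAMESERVERS|forward . 8.8.8.8 8.8.4.4|' /tmp/coredns.yaml
    microk8s kubectl apply -f /tmp/coredns.yaml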

uchiha-pain commented 1 year ago

Hi, for some reason my coredns pod is not coming up while following the guide:

NAMESPACE     NAME                                           READY   STATUS             RESTARTS        AGE
kube-system   pod/calicoctl                                  1/1     Running            1 (2m10s ago)   14m
kube-system   pod/calico-kube-controllers-54c85446d4-4m97b   1/1     Running            4 (2m10s ago)   39m
kube-system   pod/calico-node-gv5mc                          1/1     Running            2 (2m10s ago)   16m
kube-system   pod/coredns-d489fb88-qwc8f                     0/1     CrashLoopBackOff   6 (26s ago)     2m54s

NAMESPACE     NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   192.168.81.97   <none>        443/TCP                  39m
kube-system   service/kube-dns     ClusterIP   192.168.81.99   <none>        53/UDP,53/TCP,9153/TCP   2m54s

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   1         1         1       1            1           kubernetes.io/os=linux   39m

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           39m
kube-system   deployment.apps/coredns                   0/1     1            0           2m54s

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-54c85446d4   1         1         1       39m
kube-system   replicaset.apps/coredns-d489fb88                     1         1         0       2m54s

The events are

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  17s                default-scheduler  Successfully assigned kube-system/coredns-d489fb88-t4md9 to ip-192-168-81-126
  Normal   Pulled     16s (x2 over 17s)  kubelet            Container image "coredns/coredns:1.9.3" already present on machine
  Normal   Created    16s (x2 over 17s)  kubelet            Created container coredns
  Normal   Started    15s (x2 over 17s)  kubelet            Started container coredns
  Warning  BackOff    7s (x4 over 15s)   kubelet            Back-off restarting failed container

What could be the reason?

uchiha-pain commented 1 year ago

Now I am trying to change the IP CIDR for pods only, and all my pods crashed after following the guide below.

The IP range pods get their IPs from. By default this is set to 10.1.0.0/16. To change this IP range you need to:

  • Edit /var/snap/microk8s/current/args/kube-proxy and update the --cluster-cidr=10.1.0.0/16 argument.
  • If you are also setting up a proxy, update /var/snap/microk8s/current/args/containerd-env with the respective IP ranges.
  • Restart MicroK8s with microk8s stop; microk8s start.
  • Edit /var/snap/microk8s/current/args/cni-network/cni.yaml and replace the new IP range in:
        - name: CALICO_IPV4POOL_CIDR
          value: "10.1.0.0/16"
  • Apply the above yaml with microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml

LAST SEEN   TYPE      REASON                   OBJECT                                         MESSAGE
5m9s        Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "39bd045ac4a60e09c5f9feec2c17d687197b1ca423af383a5e0e223f63b1d1a5": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
4m56s       Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "21fd79ec15b8075f38db1558c47e2fa56eb45230c55c5cb7392f6656d00ec830": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
3m32s       Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "4777b33fa4e4c0b4a99f4d15d217981844e55a14e4fa485614fc5f8ea8c2c13a": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
2m56s       Normal    SandboxChanged           pod/calico-node-qmnmf                          Pod sandbox changed, it will be killed and re-created.
2m55s       Normal    Pulled                   pod/calico-node-qmnmf                          Container image "docker.io/calico/cni:v3.23.3" already present on machine
2m55s       Normal    Created                  pod/calico-node-qmnmf                          Created container upgrade-ipam
2m55s       Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "b541e80d9567e646d2da8b157da907b8580891145f73209e749ea88f26a59940": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
2m55s       Normal    Started                  pod/calico-node-qmnmf                          Started container upgrade-ipam
2m55s       Normal    Pulled                   pod/calico-node-qmnmf                          Container image "docker.io/calico/cni:v3.23.3" already present on machine
2m55s       Normal    Created                  pod/calico-node-qmnmf                          Created container install-cni
2m54s       Normal    Started                  pod/calico-node-qmnmf                          Started container install-cni
2m43s       Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c0c2b090d72ea457967629a6f949564de57007febe0f08889d95946f4b76faf9": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
2m32s       Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "1e45b4188db7e4bd433d5389f3c7220ec0600983c8c4226db593b6b29c1c6046": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
2m17s       Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "88029f9160b78507efc4e51b421e5459fbedc829234c1c8f239244e84b903944": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
2m3s        Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "17de4a98557f07a239e27ac8a27d332af81b1aaee6e50f5a525dff74b1fdfc6b": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
2m          Normal    Pulled                   pod/calico-node-qmnmf                          Container image "docker.io/calico/node:v3.23.3" already present on machine
2m          Normal    Created                  pod/calico-node-qmnmf                          Created container calico-node
2m          Normal    Started                  pod/calico-node-qmnmf                          Started container calico-node
118s        Warning   BackOff                  pod/calico-node-qmnmf                          Back-off restarting failed container
114s        Warning   BackOff                  pod/calico-node-6l8bp                          Back-off restarting failed container
111s        Warning   FailedCreatePodSandBox   pod/calico-kube-controllers-54c85446d4-7xss2   (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "ad0a77ebf57264879f7cb68860ab1c5b1f0c06df570a59c0228e52fa67d72e98": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
110s        Warning   BackOff                  pod/calico-node-577p5                          Back-off restarting failed container
109s        Warning   BackOff                  pod/calico-node-gxnxv                          Back-off restarting failed container
108s        Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "df2efd5a6f203de330ca2e9b6db4fc59687b1d4b7454d2b70788d133680af1a4": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
93s         Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "13b4202e8c0c7af66b2c8561a6ede40274b83b255fc567bf7a3d974944367edd": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
80s         Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "50b1b4bd74251e5d3b06b31d6ef1895350bbd6db4d92cabd4a129c0701e56b1e": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
66s         Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a27b18f04f20ff1fd70b31be73bcf2aa2de211e902750bf58fd7abc63f47a164": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found
13s         Warning   FailedCreatePodSandBox   pod/coredns-d489fb88-k8cq4                     (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "49fe3a56e2107fe2933eac802302fa6628f86355ea469006668c316cebecdbf1": plugin type="calico" failed (add): error getting ClusterInformation: resource does not exist: ClusterInformation(default) with error: clusterinformations.crd.projectcalico.org "default" not found

neoaggelos commented 1 year ago

Hi @uchiha-pain

The error you are seeing means that calico-node fails to start properly, so there is no CNI to configure networking for the coredns pod. Can you check the logs of the calico-node pod? In your initial comment, calico seemed happy enough; perhaps starting with a fresh install and picking up the logs from coredns can help shed some light on your problem.
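Commands that usually surface the failure (a hedged sketch; the label selectors assume the stock MicroK8s and Calico manifests):

    microk8s kubectl logs -n kube-system -l k8s-app=calico-node
    microk8s kubectl describe pod -n kube-system -l k8s-app=kube-dns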

lehuyv commented 1 year ago

Hi, I hit a similar issue with MicroK8s for Windows. The internal enterprise network uses the same range of IPs, preventing container applications from accessing machines on the internal enterprise network. It would be nice to add a configuration step in the installer to configure the range of IPs used by MicroK8s.

asgarli commented 1 year ago

Another use-case for this is having internal services which need static IP addresses. My specific use-case is I have bind9 and pihole services, where bind9 defines internal DNS addresses and uses pihole as a forwarder. Therefore, the pihole service needs a static address I can use inside forwarders in named.conf. Right now I'm using a 10.152.183.X IP address, but I'd like to specify it explicitly, rather than depend on an implementation detail of microk8s.

neoaggelos commented 1 year ago

Hi @asgarli

Apologies if I am missing something: is your issue that you want a specific IP address to be assigned to a Kubernetes service? This issue is about configuring the CIDR from which pods are assigned IPs in the cluster.

For your case, you should be able to pick a ClusterIP manually (note: it will obviously fail if another service is already using it):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    $key: $value
  clusterIP: 10.152.183.100
  ports: [...]

This will create my-service with a hardcoded cluster IP 10.152.183.100.
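Applying and checking it, assuming the manifest above is saved as my-service.yaml:

    microk8s kubectl apply -f my-service.yaml
    microk8s kubectl get svc my-service    # CLUSTER-IP should show 10.152.183.100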

asgarli commented 1 year ago

@neoaggelos Thank you for the response! That's exactly what I did. But I don't think it's the best solution, because it depends on the MicroK8s implementation detail that the service IP range is 10.152.183.0/24; correct me if I'm wrong!

neoaggelos commented 1 year ago

@asgarli The service CIDR is a configuration setting in the kube-apiserver. For MicroK8s, this is always set to 10.152.183.0/24 and there are no plans to change it going forward. Effectively, you can safely assume that this will always be the service CIDR.

nat45928 commented 1 year ago

I needed to modify this configuration as well for my application. The current process is a major pain to automate and could use some kind of configuration option.

amruthomkari commented 1 year ago

[quoting @ktsakalozos's 1.19 instructions and @nlnjnj's v1.21.0 coredns note from above]

Was this fixed in some release of microk8s? Or is there any script to handle this when we see CIDR conflicts with outside IPs?

amruthomkari commented 1 year ago

[quoting @ktsakalozos's 1.19 instructions and @nlnjnj's v1.21.0 coredns note from above]

Following these steps to update the 10.152.183.0/24 range causes microk8s to go unstable, and it doesn't come up. Any recommendation on when exactly to run these steps?

sachinkumarsingh092 commented 1 year ago

Starting from 1.28, you can do this with launch configurations. See this comment: https://github.com/canonical/microk8s/issues/4128#issuecomment-1671173747
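For a new node, a minimal launch configuration sketch (field names as in the linked comment; the path /var/snap/microk8s/common/.microk8s.yaml and the requirement that it exist before the first start are assumptions worth checking against the docs):

    # /var/snap/microk8s/common/.microk8s.yaml -- hedged sketch, example values
    version: 0.1.0
    extraCNIEnv:
      IPv4_SUPPORT: true
      IPv4_CLUSTER_CIDR: 10.2.0.0/16     # pod range
      IPv4_SERVICE_CIDR: 10.94.0.0/24    # service range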

melkamar commented 10 months ago

@sachinkumarsingh092 is it possible to use that configuration when I already have a long-running cluster of many nodes, rather than setting one up from scratch?

I have hit the problem of not having enough service IPs, but I cannot find any information about how to update the IP range on a multi-node cluster.

neoaggelos commented 10 months ago

Hi @melkamar

Unfortunately, changing the service CIDR for a running multi-node cluster is not straightforward, and requires manual actions depending on your setup and the CNI being used. This is not a supported action in MicroK8s.

melkamar commented 10 months ago

Welp, thanks for the reply @neoaggelos .

How about this approach? Should it work?

  1. Have all nodes but one leave the cluster
  2. Change the service CIDR in the now-singlenode cluster
  3. Re-add the nodes to the new "main" node

If I did this (or in general, when adding a node to an existing cluster with a custom service CIDR), will the to-be-added nodes pick up the service CIDR configuration, or do I need to manually ensure that the configuration on the new node matches the cluster before joining?

neoaggelos commented 10 months ago

This would still be problematic, and removing/re-adding nodes would not necessarily help with the issues around changing the service CIDR.

NOTE: The steps described below might be catastrophic for your cluster; follow at your own risk.

In short, the following steps are needed (assuming you want to change the service CIDR to 10.200.0.0/16):

Repeat the steps below for all control plane nodes in the cluster.

Finally, at this point new services should be getting a cluster IP in the new range (10.200.0.0/16). Existing services will not be updated; one way to update them is to delete and re-create them.
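The per-node edits follow the same pattern as the 1.19 instructions earlier in this thread; a hedged sketch for a 10.200.0.0/16 target (again, at your own risk):

    # on each control plane node
    sudo sed -i 's|--service-cluster-ip-range=.*|--service-cluster-ip-range=10.200.0.0/16|' /var/snap/microk8s/current/args/kube-apiserver
    sudo sed -i 's|^IP.2 = .*|IP.2 = 10.200.0.1|' /var/snap/microk8s/current/certs/csr.conf.template
    microk8s stop; microk8s start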

melkamar commented 10 months ago

Existing services will not be updated, and one way is to delete and re-create them.

🙁 That's what I was hoping to avoid. We have been hitting the default /24 service size limit on our 7-node cluster with over 100 namespaces. So far I've been able to juggle the services needed at a given time, manually deleting the ones we can do without. Recreating all the services sounds like a major pain.

I understand this is the way things are now, but this is a ticking time bomb for users that gradually adopt microk8s and grow with it. A warning in the setup documentation is warranted, I think.

Thanks for the information though

neoaggelos commented 10 months ago

That's fair, and a good point for us to take, thanks.

Note that for MicroK8s 1.28+, the process itself is much easier for new clusters, using launch configurations: https://microk8s.io/docs/how-to-dual-stack#customise-the-network-in-your-deployment-1

dgdevx commented 3 months ago

Hi all, we are moving from minikube to microk8s, but we are running into the same issues mentioned here and nothing seems to make it work with a VPN (Cisco). These are the steps I've tried. I used the microk8s launch configuration to change the IP ranges so they don't collide with the VPN, using the following launch configuration file:

---
version: 0.1.0
extraCNIEnv:
  IPv4_SUPPORT: true
  IPv4_CLUSTER_CIDR: 192.168.59.0/16
  IPv4_SERVICE_CIDR: 10.96.0.0/24
  IPv6_SUPPORT: false
  IPv6_CLUSTER_CIDR: fd02::/64
  IPv6_SERVICE_CIDR: fd99::/108
extraSANs:
  - 10.96.0.1

These are the IP ranges minikube uses, which work fine: https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/

While this at least gets me past this error:

[ERROR] plugin/errors: 2 4477270393948209493.7561272857333556513. HINFO: read udp 10.1.150.23:58353->192.168.1.1:53: i/o timeout
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.152.183.1:443/version": dial tcp 10.152.183.1:443: connect: no route to host

nothing works: DNS, GPU, and the other addons do not work at all.

MissingClusterDNS 113s (x41 over 11m) kubelet pod: "calico-kube-controllers-77bd7c5b-pmx7n_kube-system(f5fd44e3-da1b-4430-9843-e16a6af077d1)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy

I have spent the last 4 days trying every possible solution I can find online and I've gotten nowhere. I would really appreciate it, and will send you a beer wherever you are located, if you could help me out with this. I would also suggest changing the microk8s default ranges, as proposed in this thread, to ease installation and usage in a remote-work world. I appreciate any insights and I am happy to provide any information needed. Thanks