aelsnz opened this issue 1 year ago
Tried updating colima to the latest version as well as QEMU to the latest, but no difference; also tried the latest k3s ("v1.26.3+k3s1"), still the same:
colima version HEAD-65ee3d2
git commit: 65ee3d284b81bfebc09783f5e35a186584aa4bc1
But still getting:
> * Starting k3s ... [ ok ]
TRAC[0076] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0079] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0081] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0083] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0085] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0088] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0090] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0092] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0094] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0096] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0098] cmd ["lima" "kubectl" "cluster-info"]
FATA[0099] error starting kubernetes: error running [lima kubectl cluster-info], output: "E0403 02:43:47.400197 4109 memcache.go:265] couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused", err: "exit status 1"
colima-testx:/Users/aelsnz$ ifconfig
col0 Link encap:Ethernet HWaddr 52:55:55:63:9A:1E
inet6 addr: fe80::5055:55ff:fe63:9a1e/64 Scope:Link
inet6 addr: fde2:fcb5:a448:d160:5055:55ff:fe63:9a1e/64 Scope:Global
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:33 errors:0 dropped:0 overruns:0 frame:0
TX packets:69 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:5868 (5.7 KiB) TX bytes:20014 (19.5 KiB)
...
k3s.log:
time="2023-04-03T02:48:42Z" level=info msg="Database tables and indexes are up to date"
time="2023-04-03T02:48:42Z" level=info msg="Kine available at unix://kine.sock"
time="2023-04-03T02:48:42Z" level=fatal msg="starting kubernetes: preparing server: init cluster datastore and https: listen tcp: lookup --advertise-address: no such host"
time="2023-04-03T02:48:47Z" level=info msg="Found ip fde2:fcb5:a448:d160:5055:55ff:fe63:9a1e from iface col0"
time="2023-04-03T02:48:47Z" level=info msg="Starting k3s v1.26.3+k3s1 (01ea3ff2)"
time="2023-04-03T02:48:47Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2023-04-03T02:48:47Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2023-04-03T02:48:47Z" level=info msg="Database tables and indexes are up to date"
time="2023-04-03T02:48:47Z" level=info msg="Kine available at unix://kine.sock"
time="2023-04-03T02:48:47Z" level=fatal msg="starting kubernetes: preparing server: init cluster datastore and https: listen tcp: lookup --advertise-address: no such host"
I think the recommendation for anyone interested in a network address would be to use --vm-type vz.
QEMU is actually more stable than VZ, but allocating a network address has been an issue since Ventura.
Do you get the same behaviour when you test with a separate profile?
# start with a profile named 'another'
colima start another
I have tried using "--vm-type vz" but see the same issue. The "col0" interface inside the VM does not seem to get an address assigned.
Example using default profile:
$ colima start -c4 -d20 -m8 -k --network-address --vm-type vz --very-verbose
TRAC[0000] cmd ["limactl" "info"]
TRAC[0000] cmd ["limactl" "list" "colima" "--json"]
TRAC[0000] error retrieving running instance: instance 'colima' does not exist
INFO[0000] starting colima
INFO[0000] runtime: docker+k3s
INFO[0000] creating and starting ... context=vm
TRAC[0000] cmd ["limactl" "start" "--tty=false" "/var/folders/_p/2854n4xs4nd7cdhv4llpj5dr0000gn/T/colima.yaml"]
> Terminal is not available, proceeding without opening an editor
> `vmType: vz` is experimental
> "Attempting to download the image from \"https://github.com/abiosoft/alpine-lima/releases/download/colima-v0.5.0-2/alpine-lima-clm-3.16.2-aarch64.iso\"" digest="sha512:06abfa8c9fd954f8bfe4ce226bf282dd06e9dfbcd09f57566bf6c20809beb5a3367415b515e0a65d6a1638ecfd3a3bb3fb6d654dee3d72164bd0279370448507"
> Using cache "/Users/aelsnz/Library/Caches/lima/download/by-url-sha256/c37acb6308026b2fe12f6c0ef3371f690b3e33ee6b5d37d5dc68684f8fd5ee52/data"
> [hostagent] Starting VZ (hint: to watch the boot progress, see "/Users/aelsnz/.lima/colima/serial.log")
> SSH Local Port: 54703
> [hostagent] new connection from to
> [hostagent] [VZ] - vm state change: running
> [hostagent] Waiting for the essential requirement 1 of 3: "ssh"
> [hostagent] 2023/04/03 16:11:13 tcpproxy: for incoming conn 127.0.0.1:54705, error dialing "192.168.5.15:22": connect tcp 192.168.5.15:22: no route to host
> [hostagent] Waiting for the essential requirement 1 of 3: "ssh"
> [hostagent] 2023/04/03 16:11:23 tcpproxy: for incoming conn 127.0.0.1:54707, error dialing "192.168.5.15:22": connect tcp 192.168.5.15:22: connection was refused
> [hostagent] Waiting for the essential requirement 1 of 3: "ssh"
> [hostagent] 2023/04/03 16:11:33 tcpproxy: for incoming conn 127.0.0.1:54709, error dialing "192.168.5.15:22": connect tcp 192.168.5.15:22: connection was refused
> [hostagent] Waiting for the essential requirement 1 of 3: "ssh"
> [hostagent] The essential requirement 1 of 3 is satisfied
> [hostagent] Waiting for the essential requirement 2 of 3: "user session is ready for ssh"
> [hostagent] The essential requirement 2 of 3 is satisfied
> [hostagent] Waiting for the essential requirement 3 of 3: "the guest agent to be running"
> [hostagent] The essential requirement 3 of 3 is satisfied
> [hostagent] Waiting for the final requirement 1 of 1: "boot scripts must have finished"
> [hostagent] Forwarding "/var/run/docker.sock" (guest) to "/Users/aelsnz/.colima/default/docker.sock" (host)
> [hostagent] Forwarding "/var/run/docker.sock" (guest) to "/Users/aelsnz/.colima/docker.sock" (host)
> [hostagent] The final requirement 1 of 1 is satisfied
> READY. Run `limactl shell colima` to open the shell.
TRAC[0034] cmd ["lima" "sudo" "cat" "/etc/hosts"]
TRAC[0034] cmd ["lima" "sudo" "sh" "-c" "echo -e \"192.168.5.2\\thost.docker.internal\" >> /etc/hosts"]
TRAC[0034] cmd ["lima" "sudo" "cat" "/etc/hosts"]
TRAC[0034] cmd ["lima" "sudo" "sh" "-c" "echo -e \"127.0.0.1\\tcolima\" >> /etc/hosts"]
INFO[0034] provisioning ... context=docker
TRAC[0034] cmd ["lima" "sudo" "mkdir" "-p" "/etc/docker"]
TRAC[0034] cmd int ["lima" "sudo" "sh" "-c" "cat > /etc/docker/daemon.json"]
TRAC[0034] cmd ["docker" "context" "inspect" "colima"]
TRAC[0034] cmd ["docker" "context" "create" "colima" "--description" "colima" "--docker" "host=unix:///Users/aelsnz/.colima/default/docker.sock"]
> colima
> Successfully created context "colima"
TRAC[0034] cmd ["docker" "context" "use" "colima"]
> colima
> Current context is now "colima"
INFO[0034] starting ... context=docker
TRAC[0034] cmd ["lima" "sudo" "service" "docker" "start"]
> * /var/log/docker.log: creating file
> * /var/log/docker.log: correcting owner
> * Starting Docker Daemon ... [ ok ]
TRAC[0034] cmd ["lima" "sudo" "docker" "info"]
TRAC[0039] cmd ["lima" "sudo" "docker" "info"]
INFO[0039] provisioning ... context=kubernetes
TRAC[0039] cmd ["lima" "sudo" "service" "k3s" "status"]
TRAC[0039] cmd ["lima" "k3s" "--version"]
TRAC[0039] cmd ["lima" "command" "-v" "k3s-uninstall.sh"]
TRAC[0039] cmd ["lima" "uname" "-m"]
TRAC[0039] cmd ["lima" "uname" "-m"]
TRAC[0039] cmd ["limactl" "list" "colima" "--json"]
TRAC[0040] cmd ["limactl" "shell" "colima" "sh" "-c" "ifconfig col0 | grep \"inet addr:\" | awk -F' ' '{print $2}' | awk -F':' '{print $2}'"]
INFO[0040] downloading and installing ... context=kubernetes
TRAC[0040] cmd ["lima" "cp" "/Users/aelsnz/Library/Caches/colima/caches/3bab8a47be76e1fa1cffa532435c7c150815ff9f35fda430e1e79bbc65c0eee4" "/tmp/k3s"]
TRAC[0040] cmd ["lima" "sudo" "install" "/tmp/k3s" "/usr/local/bin/k3s"]
TRAC[0040] cmd ["lima" "cp" "/Users/aelsnz/Library/Caches/colima/caches/86667b7d52bf2959ff04c8bb6b03f0c37a791b6fe7c310a8b9c625e7787a6510" "/tmp/k3s-airgap-images-arm64.tar.gz"]
TRAC[0040] cmd ["lima" "gzip" "-f" "-d" "/tmp/k3s-airgap-images-arm64.tar.gz"]
TRAC[0044] cmd ["lima" "sudo" "mkdir" "-p" "/var/lib/rancher/k3s/agent/images/"]
TRAC[0044] cmd ["lima" "sudo" "cp" "/tmp/k3s-airgap-images-arm64.tar" "/var/lib/rancher/k3s/agent/images/"]
INFO[0044] loading oci images ... context=kubernetes
TRAC[0044] cmd ["lima" "sudo" "docker" "load" "-i" "/tmp/k3s-airgap-images-arm64.tar"]
> Loaded image: rancher/mirrored-metrics-server:v0.6.1
> Loaded image: rancher/mirrored-pause:3.6
> Loaded image: rancher/klipper-helm:v0.7.3-build20220613
> Loaded image: rancher/klipper-lb:v0.3.5
> Loaded image: rancher/local-path-provisioner:v0.0.23
> Loaded image: rancher/mirrored-coredns-coredns:1.9.4
> Loaded image: rancher/mirrored-library-busybox:1.34.1
> Loaded image: rancher/mirrored-library-traefik:2.9.4
TRAC[0047] cmd ["lima" "cp" "/Users/aelsnz/Library/Caches/colima/caches/f6ef38f86e38c46b8ebf7cd9fac18ebe516138cd80c89f3a29de66e4f8d4d8a9" "/tmp/k3s-install.sh"]
TRAC[0047] cmd ["lima" "sudo" "install" "/tmp/k3s-install.sh" "/usr/local/bin/k3s-install.sh"]
TRAC[0047] cmd ["lima" "sh" "-c" "INSTALL_K3S_SKIP_DOWNLOAD=true INSTALL_K3S_SKIP_ENABLE=true k3s-install.sh --write-kubeconfig-mode 644 --resolv-conf /etc/resolv.conf --disable traefik --bind-address --advertise-address --flannel-iface col0 --docker"]
> [INFO] Skipping k3s download and verify
> [INFO] Skipping installation of SELinux RPM
> [INFO] Creating /usr/local/bin/kubectl symlink to k3s
> [INFO] Creating /usr/local/bin/crictl symlink to k3s
> [INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
> [INFO] Creating killall script /usr/local/bin/k3s-killall.sh
> [INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
> [INFO] env: Creating environment file /etc/rancher/k3s/k3s.env
> [INFO] openrc: Creating service file /etc/init.d/k3s
TRAC[0047] cmd ["lima" "sudo" "mkdir" "-p" "/etc/cni/net.d"]
TRAC[0047] cmd ["lima" "sudo" "mkdir" "-p" "/etc/cni/net.d"]
TRAC[0047] cmd int ["lima" "sudo" "sh" "-c" "cat > /etc/cni/net.d/10-flannel.conflist"]
TRAC[0047] cmd ["lima" "sudo" "cat" "/etc/colima/colima.json"]
TRAC[0047] cmd ["lima" "sudo" "mkdir" "-p" "/etc/colima"]
TRAC[0047] cmd ["lima" "sudo" "mkdir" "-p" "/etc/colima"]
TRAC[0047] cmd int ["lima" "sudo" "sh" "-c" "cat > /etc/colima/colima.json"]
INFO[0047] starting ... context=kubernetes
TRAC[0047] cmd ["lima" "sudo" "service" "k3s" "status"]
TRAC[0048] cmd ["lima" "sudo" "service" "k3s" "start"]
> * Caching service dependencies ... [ ok ]
> * Starting k3s ... [ ok ]
TRAC[0048] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0051] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0053] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0055] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0057] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0059] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0061] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0064] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0066] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0068] cmd ["lima" "kubectl" "cluster-info"]
TRAC[0070] cmd ["lima" "kubectl" "cluster-info"]
FATA[0070] error starting kubernetes: error running [lima kubectl cluster-info], output: "The connection to the server localhost:8080 was refused - did you specify the right host or port?", err: "exit status 1"
@aelsnz it is not recommended to switch the VM type of an existing VM. You can either delete and re-create, or use another profile.
And as it does not finish properly, the kubectl context is not set, and I cannot get colima status to work.
$ docker context list
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
colima * moby colima unix:///Users/aelsnz/.colima/default/docker.sock
default moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm
desktop-linux moby unix:///Users/aelsnz/.docker/run/docker.sock
$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
$ colima status
FATA[0000] error retrieving current runtime: empty value
$ colima list
PROFILE STATUS ARCH CPUS MEMORY DISK RUNTIME ADDRESS
default Running aarch64 4 8GiB 20GiB docker+k3s
$ colima ssh
colima:/Users/aelsnz$ ifconfig col0
col0 Link encap:Ethernet HWaddr 52:55:55:A2:EA:9D
inet6 addr: fd66:cebe:fcc3:1333:5055:55ff:fea2:ea9d/64 Scope:Global
inet6 addr: fe80::5055:55ff:fea2:ea9d/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:95 errors:0 dropped:0 overruns:0 frame:0
TX packets:53 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:33717 (32.9 KiB) TX bytes:16334 (15.9 KiB)
colima:/Users/aelsnz$ tail -15 /var/log/k3s.log
time="2023-04-03T04:16:52Z" level=fatal msg="starting kubernetes: preparing server: init cluster datastore and https: listen tcp: lookup --advertise-address: no such host"
time="2023-04-03T04:16:57Z" level=info msg="Found ip fd66:cebe:fcc3:1333:5055:55ff:fea2:ea9d from iface col0"
time="2023-04-03T04:16:57Z" level=info msg="Starting k3s v1.25.4+k3s1 (0dc63334)"
time="2023-04-03T04:16:57Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2023-04-03T04:16:57Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2023-04-03T04:16:57Z" level=info msg="Database tables and indexes are up to date"
time="2023-04-03T04:16:57Z" level=info msg="Kine available at unix://kine.sock"
time="2023-04-03T04:16:59Z" level=fatal msg="starting kubernetes: preparing server: init cluster datastore and https: listen tcp: lookup --advertise-address: no such host"
time="2023-04-03T04:17:04Z" level=info msg="Found ip fd66:cebe:fcc3:1333:5055:55ff:fea2:ea9d from iface col0"
time="2023-04-03T04:17:04Z" level=info msg="Starting k3s v1.25.4+k3s1 (0dc63334)"
time="2023-04-03T04:17:04Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2023-04-03T04:17:04Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2023-04-03T04:17:04Z" level=info msg="Database tables and indexes are up to date"
time="2023-04-03T04:17:04Z" level=info msg="Kine available at unix://kine.sock"
time="2023-04-03T04:17:06Z" level=fatal msg="starting kubernetes: preparing server: init cluster datastore and https: listen tcp: lookup --advertise-address: no such host"
colima:/Users/aelsnz$
I am not switching an existing VM, I am deleting and recreating these VMs.
I have used: colima start.........
Then I use: colima delete -f ......
I am also using profiles the whole time, but the latest example was using just the default profile.
Thanks for the feedback. This is news to me, as I have actually not experienced network address issues with the vz VM before. I will dig into it a bit more.
Thank you, this is a really strange one.
I have used colima mainly with QEMU on the M1 as that worked really well, and I had issues with "vz" a few months ago.
The key is to enable Kubernetes, as that is when the address gets generated for col0 - but since the update to macOS 13.3 I cannot get vz or qemu VMs to work with kubernetes, which is a bit of a showstopper at the moment.
I am busy looking into the scripts/backend a bit more, but it is as if the col0 network is not being created properly, and I cannot find any specific errors.
Thank you for looking into this one.
If you start without --network-address (or with --network-address=false), it would work fine.
The main issue with that is that if you use a LoadBalancer, you would not get a separate IP; rather, the port would be available on localhost.
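A quick illustration of that behaviour (hedged; the workload name and ports are arbitrary, and it assumes k3s's default servicelb load balancer):
# without --network-address, the LoadBalancer port ends up reachable on localhost
kubectl create deployment web --image=nginx
kubectl expose deployment web --type=LoadBalancer --port=8080 --target-port=80
curl http://localhost:8080   # the port is forwarded to the host's localhost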
Ok, quick update, I saw this in the k3s log
...
time="2023-04-03T04:17:04Z" level=info msg="Found ip fd66:cebe:fcc3:1333:5055:55ff:fea2:ea9d from iface col0"
...
which made me think that IPv6 is causing the issues here. Looking at ifconfig on the VM, I saw:
inet6 addr: fd66:cebe:fcc3:1333:5055:55ff:fea2:ea9d/64 Scope:Global
inet6 addr: fe80::5055:55ff:fea2:ea9d/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:95 errors:0 dropped:0 overruns:0 frame:0
TX packets:53 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:33717 (32.9 KiB) TX bytes:16334 (15.9 KiB)
Note there are only inet6 addresses and no IPv4 address.
On the Mac, I went to Network settings, selected the interface, went to the TCP/IP settings, and set "Configure IPv6" to "Link-Local Only" - after a reboot it is now working!
$ ifconfig col0
col0 Link encap:Ethernet HWaddr 52:55:55:A2:EA:9D
inet addr:192.168.106.17 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fde2:fcb5:a448:d160:5055:55ff:fea2:ea9d/64 Scope:Global
inet6 addr: fe80::5055:55ff:fea2:ea9d/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:648 errors:0 dropped:0 overruns:0 frame:0
TX packets:519 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:70096 (68.4 KiB) TX bytes:424208 (414.2 KiB)
$ colima start -c4 -d20 -m8 -k --network-address
INFO[0000] starting colima
INFO[0000] runtime: docker+k3s
INFO[0000] preparing network ... context=vm
INFO[0003] creating and starting ... context=vm
INFO[0037] provisioning ... context=docker
INFO[0039] starting ... context=docker
INFO[0044] provisioning ... context=kubernetes
INFO[0044] downloading and installing ... context=kubernetes
INFO[0050] loading oci images ... context=kubernetes
INFO[0055] starting ... context=kubernetes
INFO[0058] updating config ... context=kubernetes
INFO[0058] Switched to context "colima". context=kubernetes
INFO[0059] done
$ colima list
PROFILE STATUS ARCH CPUS MEMORY DISK RUNTIME ADDRESS
default Running aarch64 4 8GiB 20GiB docker+k3s 192.168.106.17
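For reference, the same IPv6 setting can likely be applied from the command line too (a hedged sketch; "Wi-Fi" is an assumed service name, list yours with networksetup -listallnetworkservices):
sudo networksetup -setv6LinkLocal "Wi-Fi"
# and to revert later:
sudo networksetup -setv6automatic "Wi-Fi"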
In my use case I ideally need the LoadBalancer IP, and using localhost is not ideal - so it is good to get the address.
I am no expert in IPv6, but it feels like IPv6 taking priority over IPv4 could have played a role here.
Now the question is whether we can adjust the underlying VM to use only IPv4, to prioritise IPv4, or maybe to disable IPv6.
It would be interesting to hear if others see this - I will test on another setup and report back.
Interesting... that might actually be the issue.
Yeah, it should be possible to disable IPv6 in the VM. However, I think the issue is from the host.
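For instance, a minimal per-boot test inside the guest (a hedged sketch; assumes the Alpine image and that the interface is named col0) could look like this:
# run inside the VM after colima ssh - a per-boot test only, not a fix
sudo sysctl -w net.ipv6.conf.col0.disable_ipv6=1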
My installation of colima has been really flaky over the past couple of weeks, and today it was nearly unusable - the VM just stopped responding and I'd have to do a killall limactl and start it over and over. Having seen this post, and in particular:
Under Mac, I went to my network, selected the interface, goto TCP/IP settings and set "Configure IPv6" to "Link-Local Only" - reboot and it is now working!
I did the same and I've had no problems.
Not running k8s, but ddev, on an M1 MacBook.
Just another update: tried on another M1 Mac with the 13.3 update, and even if we set the network to Link-Local Only and reboot the Mac, we still seem to have the issue where the "col0" network interface in the VM does not get an IP from DHCP.
k3s.log in the VM shows:
k3s.log:time="2023-04-03T19:26:58Z" level=warning msg="unable to get global unicast ip from interface name: can't find ip for interface col0"
and lima-init.log shows: udhcpc failed to get a DHCP lease - see below.
* Starting networking ...
* lo ... [ ok ]
* eth0 ...udhcpc: started, v1.35.0
udhcpc: broadcasting discover
udhcpc: broadcasting select for 192.168.5.15, server 192.168.5.2
udhcpc: lease of 192.168.5.15 obtained from 192.168.5.2, lease time 86400
[ ok ]
* col0 ...udhcpc: started, v1.35.0
udhcpc: broadcasting discover
udhcpc: broadcasting discover
udhcpc: broadcasting discover
udhcpc: broadcasting discover
udhcpc: broadcasting discover
udhcpc failed to get a DHCP lease
udhcpc: no lease, forking to background
[ ok ]
* eth2 ... [ ok ]
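(A hedged way to re-test the DHCP handshake by hand from inside the VM is to invoke busybox udhcpc directly:)
# run inside the VM; -i selects the interface, -n exits if no lease is obtained,
# -q quits once a lease is obtained
sudo udhcpc -i col0 -n -q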
I have been looking at this more and I think the issue discussed here is related - https://github.com/lima-vm/lima/issues/1259
It looks like bootpd/DHCP is not working properly, as it is not assigning a DHCP address. But if you start a manual DHCP server on bridge100, it works (see the sketch below).
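A rough sketch of such a manual DHCP server (hedged; assumes dnsmasq installed via Homebrew, that bridge100 exists because a VM is running, and reuses the 192.168.106.0/24 range colima uses for --network-address):
# sketch only - run on the macOS host while the colima VM is up
sudo dnsmasq --no-daemon --interface=bridge100 --bind-interfaces \
  --dhcp-range=192.168.106.100,192.168.106.200,12h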
@aelsnz thanks for taking the time to troubleshoot this issue; that does indeed look like the case.
I do remember that there were minimal vmnet issues prior to macOS Ventura.
I think the issue applies even if we use --vm-type vz, as it is also starting a bridge network and using DHCP, and it shows the exact same symptoms. Not sure if anyone else is seeing this, but I know of a few people now who have updated to the latest macOS 13.x releases and are having issues, and it seems that something is different regarding bootpd - I posted a comment here - https://github.com/lima-vm/lima/issues/1259#issuecomment-1509329043
Again, I adjusted the IPv6 settings ("Configure IPv6" set to "Link-Local Only"), had to reboot a couple of times, and then things started up properly and I can create environments again.
Also, to verify whether DHCP is working you can run the command below; if things work, you will see the DHCP requests, otherwise nothing happens.
For example, run the command below, then start colima with something like: colima start -k -c4 -m6 -d20 --network-address
sudo log stream --process bootpd --info --debug
I have been thinking about this and wondering: can we maybe add an option to specify a fixed IP address? Perhaps extend the --network-address option to allow specifying a static IP in the 192.168.106.0/24 subnet instead of waiting on DHCP. That way we could eliminate the issue with bootpd/DHCP not always handing out an IP. Having this configurable would be a big advantage and would cut down on this issue, which could be a huge help to many.
Quick update on my last comment: I think it might help to have an option to pass in a static IP to be used.
To test the concept I modified the https://github.com/abiosoft/colima/blob/main/embedded/network/ifaces.sh script to check whether an environment variable (COLIMA_IP, as an example) is set; if it is, the script updates the network interface col0 to a static IP taken from that variable. Not a perfect approach, but when I then run a test like this it works (a rough sketch of the change is shown after the output below):
colima start -c 1 -d 10 -m 2 --network-address --env COLIMA_IP=192.168.106.201
INFO[0000] starting colima
INFO[0000] runtime: docker
INFO[0000] preparing network ... context=vm
INFO[0001] creating and starting ... context=vm
INFO[0032] provisioning ... context=docker
INFO[0032] starting ... context=docker
INFO[0038] done
$ colima list
PROFILE STATUS ARCH CPUS MEMORY DISK RUNTIME ADDRESS
default Running aarch64 1 2GiB 10GiB docker 192.168.106.201
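For anyone curious, this is roughly the kind of change I mean (a sketch only, not the exact script; the /24 netmask and the /etc/network/interfaces path are assumptions):
# sketch only - the real embedded/network/ifaces.sh change may differ;
# assumes an Alpine guest where col0 is configured via /etc/network/interfaces
update_iface_to_static() {
  iface="$1"; ip="$2"; file="$3"
  # swap the dhcp stanza for a static one (/24 netmask assumed)
  sed -i "s|^iface ${iface} inet dhcp$|iface ${iface} inet static\n    address ${ip}\n    netmask 255.255.255.0|" "$file"
}

if [ -n "$COLIMA_IP" ]; then
  update_iface_to_static col0 "$COLIMA_IP" /etc/network/interfaces
fi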
I am busy testing this more and will provide more feedback once done.
@aelsnz I'm experiencing something where colima doesn't get an IP. But it's intermittent - it persists across restarts, yet sometimes starts working. I run Ventura 13.4. Now, the reason I'm adding it here rather than opening a new issue is that you might be interested to know that the static IP option you added is flaky for me in exactly the same way DHCP is! When networking fails, they both fail to work. I know that in https://github.com/lima-vm/lima/issues/1259 you folks narrowed it down to the bootpd daemon, but have a look at this:
colima version
colima version v0.5.4-29-g20ba980
git commit: 20ba980d963a36cb71c5844c80caf6bcee13d7cd # that's your PR being merged
colima --very-verbose start
The verbose start log is here.
# this is repeating every 5s in /var/log/k3s.log while we're stuck
time="2023-06-14T17:25:50Z" level=info msg="Found ip fd67:3eee:b3a6:f1f6:5055:55ff:fe20:f385 from iface col0"
time="2023-06-14T17:25:50Z" level=info msg="Starting k3s v1.26.4+k3s1 (8d0255af)"
time="2023-06-14T17:25:50Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2023-06-14T17:25:50Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2023-06-14T17:25:50Z" level=info msg="Database tables and indexes are up to date"
time="2023-06-14T17:25:50Z" level=info msg="Kine available at unix://kine.sock"
time="2023-06-14T17:25:50Z" level=info msg="Reconciling bootstrap data between datastore and disk"
time="2023-06-14T17:25:52Z" level=fatal msg="starting kubernetes: preparing server: init cluster datastore and https: listen tcp: lookup --advertise-address: no such host"
My colima profile config is here. (Note COLIMA_IP in the env section.)
Alright, let's check if the var is set inside the VM:
$ colima ssh
colima:/path$ echo $COLIMA_IP
192.168.106.201
When it works, the k3s log says that it's bound successfully to the advertised IP 192.168.106.201.
If I remove the env var, it fails to start in exactly the same way when using DHCP. The k3s.log error stays the same, and the col0 interface doesn't have an IPv4 address.
I've tried setting IPv6 to Link-local only in the wifi settings, doesn't work either.
So, what if this problem just looks like DHCP, but in reality is something else? What if colima is unable to assign an ipv4 to the bridge interface, even when explicitly asked to do so?
The other option is that the functionality you merged in is just not working on occasion - it's not honouring the env var. However, I've not done anything to the machine except restart it - I've not changed the colima config, colima, or lima, and it worked before (I was super thankful to see it merged in!). Running colima start testit to test a clean profile hangs in the same spot, with the same message in k3s.log, and I'm not even asking for --network-address (since it's off by default on new profiles).
Edit: My binary definitely has your static IP functionality:
strings $(which colima) | grep "update_iface_to_static"
update_iface_to_static() {
update_iface_to_static $IFACE $COLIMA_IP $FILE
Edit: I'd really like to use this project and get my team on it, as I like it much more than minikube on Mac. Happy to share any info or have a call to try to troubleshoot this if you know how, or if you want info from another Mac. It's an M1 MBP 2021.
I received this error while connected to ExpressVPN. If you are getting the error reported in this issue when starting colima with --network-address, try disconnecting from your VPN:
TRAC[0111] cmd ["lima" "kubectl" "cluster-info"]
FATA[0111] error starting kubernetes: error running [lima kubectl cluster-info], output: "The connection to the server localhost:8080 was refused - did you specify the right host or port?", err: "exit status 1"
I deleted the default profile and started again, and it worked:
colima delete default
colima start --kubernetes
Description
When updating to the latest macOS 13.3 I cannot create a colima environment with k3s.
Getting:
Tested on another M1 Mac following the upgrade to 13.3 and saw the same issue.
It looks like the network "col0" is not created.
When I ssh into the colima profile I get:
ifconfig output:
Example create command specified:
Version
Colima Version: 0.5.4
Lima Version: 0.15.0
Qemu Version: 7.2.0
Operating System
Output of colima status
Reproduction Steps
Expected behaviour
Should get a deployed k3s environment. Looks like an issue with the col0 network not being created.
Additional context
No response