I think this is the relevant bit:
[2018-11-15T06:38:58.679] [info] [snapcraft-offlineimap] virtio_net: Could not create vmnet interface, permission denied or no entitlement?
Can you check what user the multipassd process is running as? Somehow it doesn't have permissions to handle networking... Maybe try reinstalling Multipass?
Running the command again without any changes produced a working VM. Whatever this is, it's intermittent.
https://gist.github.com/evandandrea/4d80230ca13141d703a7ad3a0203084e
multipassd is running as root:
❯ ps -fax | grep multip
0 2796 1 0 4:18pm ?? 0:34.39 /Library/Application Support/com.canonical.multipass/bin/multipassd
501 6521 934 0 9:43am ttys001 0:00.00 grep --color=auto multip
Just saw this again with a different, long-standing instance: https://gist.github.com/evandandrea/6cfd4359aaa66160caeaf4e8806cbf80
There are actually two things there: the start/launch commands should return straight away after the hyperkit process exits.

Can you think of anything in common the two instances could have had? Did you lack networking altogether when you tried it, maybe? Or a VPN running, or some such? Little Snitch?
No VPN, no fancy networking. Connectivity was fine at the time, or so it appeared.
vmnet is definitely there:
ls -la /Library/Preferences/SystemConfiguration/com.apple.vmnet.plist
-rw-r----- 1 root wheel 13870 15 Nov 18:21 /Library/Preferences/SystemConfiguration/com.apple.vmnet.plist
This is on macOS Mojave (10.14).
I'll keep on the lookout for a pattern.
From my own Mojave box (which also fails to launch VMs in multipass):
[2018-12-10T15:17:27.151] [info] [rpc] gRPC listening on unix:/var/run/multipass_socket, SSL:on
[2018-12-10T15:17:27.154] [warning] [Qt] QIODevice::write (QFile, "/var/root/Library/Caches/multipassd/vault/multipassd-image-records.json"): device not open
[2018-12-10T15:19:35.739] [info] [snapcraft-gimp] process program '/Library/Application Support/com.canonical.multipass/bin/hyperkit'
[2018-12-10T15:19:35.739] [info] [snapcraft-gimp] process arguments '-c, 2, -m, 2G, -u, -A, -H, -U, 743ab23f-e505-3616-9491-a93d675db0ac, -s, 0:0,hostbridge, -s, 2:0,virtio-net, -s, 5,virtio-rnd, -s, 31,lpc, -l, com1,autopty=/var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp/pty,log=/Library/Logs/Multipass/snapcraft-gimp-hyperkit.log, -s, 1:0,ahci-hd,file:///var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp/ubuntu-18.04-minimal-cloudimg-amd64.img?sync=os&buffered=1,format=qcow,qcow-config=discard=true;compact_after_unmaps=0;keep_erased=0;runtime_asserts=false, -s, 1:1,ahci-cd,/var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp/cloud-init-config.iso, -f, kexec,/var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp/ubuntu-18.04-server-cloudimg-amd64-vmlinuz-generic,/var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp/ubuntu-18.04-server-cloudimg-amd64-initrd-generic,earlyprintk=serial console=ttyS0 root=/dev/sda1 rw panic=1 no_timer_check'
[2018-12-10T15:19:35.739] [info] [snapcraft-gimp] setting up watch for PTY in dir /var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp
[2018-12-10T15:19:35.740] [info] [snapcraft-gimp] process state changed to Starting
[2018-12-10T15:19:35.741] [info] [snapcraft-gimp] process state changed to Running
[2018-12-10T15:19:35.764] [info] [snapcraft-gimp] mirage_block_open: block_config = file:///var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp/ubuntu-18.04-minimal-cloudimg-amd64.img?sync=os&buffered=1 and qcow_config = discard=true;compact_after_unmaps=0;keep_erased=0;runtime_asserts=false and stats_config = None
[2018-12-10T15:19:35.765] [info] [snapcraft-gimp] hyperkit: [INFO] rounding up file to a whole number of clusters (= 974592 sectors)
[2018-12-10T15:19:35.777] [info] [snapcraft-gimp] hyperkit: [INFO] Resized file to 7614 clusters (974592 sectors)
[2018-12-10T15:19:35.777] [info] [snapcraft-gimp] hyperkit: [INFO] discard requested and lazy_refcounts is disabled: erasing refcount table and enabling lazy_refcounts
[2018-12-10T15:19:35.780] [info] [snapcraft-gimp] hyperkit: [INFO] image has 0 free sectors and 7610 used sectors
[2018-12-10T15:19:35.780] [info] [snapcraft-gimp] mirage_block_open: block_config = file:///var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp/ubuntu-18.04-minimal-cloudimg-amd64.img?sync=os&buffered=1 and qcow_config = discard=true;compact_after_unmaps=0;keep_erased=0;runtime_asserts=false and stats_config = None returning 0
mirage_block_stat
[2018-12-10T15:19:35.780] [info] [snapcraft-gimp] fcntl(F_PUNCHHOLE) failed: host filesystem does not support sparse files: Operation not permitted
[2018-12-10T15:19:36.405] [info] [snapcraft-gimp] rdmsr to register 0x140 on vcpu 0
[2018-12-10T15:19:36.740] [info] [snapcraft-gimp] hyperkit PTY found, opening /var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp/pty
[2018-12-10T15:19:36.740] [info] [snapcraft-gimp] hyperkit PTY found, opening /var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp/pty
[2018-12-10T15:19:36.781] [info] [snapcraft-gimp] rdmsr to register 0x140 on vcpu 1
[2018-12-10T15:19:37.733] [info] [snapcraft-gimp] rdmsr to register 0x64e on vcpu 0
[2018-12-10T15:19:37.733] [info] [snapcraft-gimp] rdmsr to register 0x34 on vcpu 0
[2018-12-10T15:19:41.271] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: 7610 used; 0 junk; 0 erased; 0 available; 0 copies; 0 roots; 0 Copying; 0 Copied; 0 Flushed; 0 Referenced; max_cluster = 7613
[2018-12-10T15:19:41.272] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: file contains cluster 0 .. 7613 will enlarge file to 0 .. 8125
[2018-12-10T15:19:41.272] [info] [snapcraft-gimp] hyperkit: [INFO] resize: adding available clusters (Node ((x 7614) (y 8125) (l Empty) (r Empty) (h 1) (cardinal 512)))
[2018-12-10T15:20:16.610] [error] [snapcraft-gimp] Unable to determine IP address
[2018-12-10T15:24:34.896] [info] [snapcraft-gimp] Shutdown request delayed for 10 minutes
[2018-12-10T15:26:23.186] [info] [snapcraft-gimp] sending shutdown signal to hyperkit process, waiting for it to shutdown...
[2018-12-10T15:26:38.261] [info] [snapcraft-gimp] hyperkit not responding to shutdown signal, killing it
[2018-12-10T15:26:38.261] [warning] [Qt] QProcess: Destroyed while process ("/Library/Application Support/com.canonical.multipass/bin/hyperkit") is still running.
[2018-12-10T15:26:38.265] [error] [snapcraft-gimp] process error occurred Crashed
[2018-12-10T15:26:38.265] [warning] [Qt] QSocketNotifier: Socket notifiers cannot be enabled or disabled from another thread
@diddledan can you have a look in Console, look in /Library/Logs/Multipass; there should be snapcraft-gimp.log with the whole Linux console log of the instance. Try and see if you can find why it didn't obtain an IP address?
Do you have any firewall set up? VPN? Little Snitch? We've seen those play tricks with the instance networking.
I don't have any VPN or Little Snitch set up, and the macOS firewall is disabled - the default state from macOS installation-time. According to the output in snapcraft-gimp-hyperkit.log, which I assume is the log file you wanted me to check, cloud-init correctly gets an IP address for the box:
[ 10.735694] cloud-init[730]: Cloud-init v. 18.4-0ubuntu1~18.04.1 running 'init' at Mon, 10 Dec 2018 15:26:50 +0000. Up 10.54 seconds.
[ 10.738299] cloud-init[730]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++++++++++++
[ 10.740413] cloud-init[730]: ci-info: +--------+------+----------------------------+---------------+--------+-------------------+
[ 10.742517] cloud-init[730]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address |
[ 10.744566] cloud-init[730]: ci-info: +--------+------+----------------------------+---------------+--------+-------------------+
[ 10.746845] cloud-init[730]: ci-info: | enp0s2 | True | 192.168.64.2 | 255.255.255.0 | global | 9a:04:00:08:2a:5d |
[ 10.748786] cloud-init[730]: ci-info: | enp0s2 | True | fe80::9804:ff:fe08:2a5d/64 | . | link | 9a:04:00:08:2a:5d |
[ 10.751192] cloud-init[730]: ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | host | . |
[ 10.753357] cloud-init[730]: ci-info: | lo | True | ::1/128 | . | host | . |
[ 10.755555] cloud-init[730]: ci-info: +--------+------+----------------------------+---------------+--------+-------------------+
[ 10.757545] cloud-init[730]: ci-info: +++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++
[ 10.759421] cloud-init[730]: ci-info: +-------+--------------+--------------+-----------------+-----------+-------+
[ 10.761204] cloud-init[730]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags |
[ 10.763179] cloud-init[730]: ci-info: +-------+--------------+--------------+-----------------+-----------+-------+
[ 10.765343] cloud-init[730]: ci-info: | 0 | 0.0.0.0 | 192.168.64.1 | 0.0.0.0 | enp0s2 | UG |
[ 10.767389] cloud-init[730]: ci-info: | 1 | 192.168.64.0 | 0.0.0.0 | 255.255.255.0 | enp0s2 | U |
[ 10.769714] cloud-init[730]: ci-info: | 2 | 192.168.64.1 | 0.0.0.0 | 255.255.255.255 | enp0s2 | UH |
[ 10.772188] cloud-init[730]: ci-info: +-------+--------------+--------------+-----------------+-----------+-------+
[ 10.774972] cloud-init[730]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
[ 10.776494] cloud-init[730]: ci-info: +-------+-------------+---------+-----------+-------+
[ 10.778099] cloud-init[730]: ci-info: | Route | Destination | Gateway | Interface | Flags |
[ 10.779540] cloud-init[730]: ci-info: +-------+-------------+---------+-----------+-------+
[ 10.780980] cloud-init[730]: ci-info: | 1 | fe80::/64 | :: | enp0s2 | U |
[ 10.782283] cloud-init[730]: ci-info: | 3 | local | :: | enp0s2 | U |
[ 10.783928] cloud-init[730]: ci-info: | 4 | ff00::/8 | :: | enp0s2 | U |
[ 10.785636] cloud-init[730]: ci-info: +-------+-------------+---------+-----------+-------+
@diddledan can you ping the address? Does ssh 192.168.64.2 respond?
yes, ping and ssh can reach the machine.
$ ping 192.168.64.2
PING 192.168.64.2 (192.168.64.2): 56 data bytes
64 bytes from 192.168.64.2: icmp_seq=0 ttl=64 time=0.301 ms
64 bytes from 192.168.64.2: icmp_seq=1 ttl=64 time=0.181 ms
64 bytes from 192.168.64.2: icmp_seq=2 ttl=64 time=0.168 ms
64 bytes from 192.168.64.2: icmp_seq=3 ttl=64 time=0.225 ms
^C
--- 192.168.64.2 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.168/0.219/0.301/0.052 ms
$ ssh 192.168.64.2
The authenticity of host '192.168.64.2 (192.168.64.2)' can't be established.
ECDSA key fingerprint is SHA256:drfzO+nqWRYXa/YZ1+ug1Ic80ZgWSbY/X0aENb1Njls.
Are you sure you want to continue connecting (yes/no)? ^C
@diddledan so what does multipass list say? Did it find the instance IP after all? Does multipass shell snapcraft-gimp work? It looks like a temporary issue that got itself resolved already...
$ multipass list
Name State IPv4 Release
snapcraft-gimp RUNNING 192.168.64.2 Ubuntu Snapcraft builder for Core 18
However, I might have confused matters by trying a multipass start snapcraft-gimp after the first log I posted. This is the log from the second attempt:
libc++abi.dylib: Pure virtual function called!
[2018-12-10T15:26:38.442] [info] [rpc] gRPC listening on unix:/var/run/multipass_socket, SSL:on
[2018-12-10T15:26:38.444] [info] [daemon] snapcraft-gimp needs starting. Starting now...
[2018-12-10T15:26:38.486] [info] [snapcraft-gimp] process program '/Library/Application Support/com.canonical.multipass/bin/hyperkit'
[2018-12-10T15:26:38.486] [info] [snapcraft-gimp] process arguments '-c, 2, -m, 2G, -u, -A, -H, -U, 743ab23f-e505-3616-9491-a93d675db0ac, -s, 0:0,hostbridge, -s, 2:0,virtio-net, -s, 5,virtio-rnd, -s, 31,lpc, -l, com1,autopty=/var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp/pty,log=/Library/Logs/Multipass/snapcraft-gimp-hyperkit.log, -s, 1:0,ahci-hd,file:///var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp/ubuntu-18.04-minimal-cloudimg-amd64.img?sync=os&buffered=1,format=qcow,qcow-config=discard=true;compact_after_unmaps=0;keep_erased=0;runtime_asserts=false, -s, 1:1,ahci-cd,/var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp/cloud-init-config.iso, -f, kexec,/var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp/ubuntu-18.04-server-cloudimg-amd64-vmlinuz-generic,/var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp/ubuntu-18.04-server-cloudimg-amd64-initrd-generic,earlyprintk=serial console=ttyS0 root=/dev/sda1 rw panic=1 no_timer_check'
[2018-12-10T15:26:38.486] [info] [snapcraft-gimp] setting up watch for PTY in dir /var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp
[2018-12-10T15:26:38.487] [info] [snapcraft-gimp] process state changed to Starting
[2018-12-10T15:26:38.489] [info] [snapcraft-gimp] process state changed to Running
[2018-12-10T15:26:38.524] [info] [snapcraft-gimp] mirage_block_open: block_config = file:///var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp/ubuntu-18.04-minimal-cloudimg-amd64.img?sync=os&buffered=1 and qcow_config = discard=true;compact_after_unmaps=0;keep_erased=0;runtime_asserts=false and stats_config = None
mirage_block_open: block_config = file:///var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp/ubuntu-18.04-minimal-cloudimg-amd64.img?sync=os&buffered=1 and qcow_config = discard=true;compact_after_unmaps=0;keep_erased=0;runtime_asserts=false and stats_config = None returning 0
mirage_block_stat
[2018-12-10T15:26:38.524] [info] [snapcraft-gimp] hyperkit: [INFO] Resized file to 7615 clusters (974720 sectors)
[2018-12-10T15:26:38.525] [info] [snapcraft-gimp] hyperkit: [INFO] image has 0 free sectors and 7611 used sectors
[2018-12-10T15:26:38.525] [info] [snapcraft-gimp] fcntl(F_PUNCHHOLE) failed: host filesystem does not support sparse files: Operation not permitted
[2018-12-10T15:26:38.973] [info] [snapcraft-gimp] rdmsr to register 0x140 on vcpu 0
[2018-12-10T15:26:39.311] [info] [snapcraft-gimp] rdmsr to register 0x140 on vcpu 1
[2018-12-10T15:26:39.565] [info] [snapcraft-gimp] hyperkit PTY found, opening /var/root/Library/Application Support/multipassd/vault/instances/snapcraft-gimp/pty
[2018-12-10T15:26:40.224] [info] [snapcraft-gimp] rdmsr to register 0x64e on vcpu 1
[2018-12-10T15:26:40.224] [info] [snapcraft-gimp] rdmsr to register 0x34 on vcpu 1
[2018-12-10T15:26:44.067] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: 7611 used; 0 junk; 0 erased; 0 available; 0 copies; 0 roots; 0 Copying; 0 Copied; 0 Flushed; 0 Referenced; max_cluster = 7614
[2018-12-10T15:26:44.067] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: file contains cluster 0 .. 7614 will enlarge file to 0 .. 8126
[2018-12-10T15:26:44.067] [info] [snapcraft-gimp] hyperkit: [INFO] resize: adding available clusters (Node ((x 7615) (y 8126) (l Empty) (r Empty) (h 1) (cardinal 512)))
[2018-12-10T15:26:50.367] [info] [snapcraft-gimp] IP address found: 192.168.64.2
[2018-12-10T15:26:51.753] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: 7868 used; 0 junk; 0 erased; 255 available; 0 copies; 0 roots; 0 Copying; 0 Copied; 0 Flushed; 0 Referenced; max_cluster = 7871
[2018-12-10T15:26:51.753] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: file contains cluster 0 .. 8126 will enlarge file to 0 .. 8638
[2018-12-10T15:26:51.753] [info] [snapcraft-gimp] hyperkit: [INFO] resize: adding available clusters (Node ((x 8127) (y 8638) (l Empty) (r Empty) (h 1) (cardinal 512)))
[2018-12-10T15:27:05.313] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: 8380 used; 0 junk; 0 erased; 255 available; 0 copies; 0 roots; 0 Copying; 0 Copied; 0 Flushed; 0 Referenced; max_cluster = 8383
[2018-12-10T15:27:05.313] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: file contains cluster 0 .. 8638 will enlarge file to 0 .. 9150
[2018-12-10T15:27:05.313] [info] [snapcraft-gimp] hyperkit: [INFO] resize: adding available clusters (Node ((x 8639) (y 9150) (l Empty) (r Empty) (h 1) (cardinal 512)))
[2018-12-10T15:27:08.864] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: 8892 used; 0 junk; 0 erased; 255 available; 0 copies; 0 roots; 0 Copying; 0 Copied; 0 Flushed; 0 Referenced; max_cluster = 8895
[2018-12-10T15:27:08.864] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: file contains cluster 0 .. 9150 will enlarge file to 0 .. 9662
[2018-12-10T15:27:08.864] [info] [snapcraft-gimp] hyperkit: [INFO] resize: adding available clusters (Node ((x 9151) (y 9662) (l Empty) (r Empty) (h 1) (cardinal 512)))
[2018-12-10T15:27:08.932] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: 9404 used; 0 junk; 0 erased; 255 available; 0 copies; 0 roots; 0 Copying; 0 Copied; 0 Flushed; 0 Referenced; max_cluster = 9407
[2018-12-10T15:27:08.932] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: file contains cluster 0 .. 9662 will enlarge file to 0 .. 10174
[2018-12-10T15:27:08.932] [info] [snapcraft-gimp] hyperkit: [INFO] resize: adding available clusters (Node ((x 9663) (y 10174) (l Empty) (r Empty) (h 1) (cardinal 512)))
[2018-12-10T15:27:14.229] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: 9916 used; 0 junk; 0 erased; 255 available; 0 copies; 0 roots; 0 Copying; 0 Copied; 0 Flushed; 0 Referenced; max_cluster = 9919
[2018-12-10T15:27:14.229] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: file contains cluster 0 .. 10174 will enlarge file to 0 .. 10686
[2018-12-10T15:27:14.229] [info] [snapcraft-gimp] hyperkit: [INFO] resize: adding available clusters (Node ((x 10175) (y 10686) (l Empty) (r Empty) (h 1) (cardinal 512)))
[2018-12-10T15:27:14.288] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: 10428 used; 0 junk; 0 erased; 255 available; 0 copies; 0 roots; 0 Copying; 0 Copied; 0 Flushed; 0 Referenced; max_cluster = 10431
[2018-12-10T15:27:14.288] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: file contains cluster 0 .. 10686 will enlarge file to 0 .. 11198
[2018-12-10T15:27:14.288] [info] [snapcraft-gimp] hyperkit: [INFO] resize: adding available clusters (Node ((x 10687) (y 11198) (l Empty) (r Empty) (h 1) (cardinal 512)))
[2018-12-10T15:27:45.018] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: 10940 used; 0 junk; 0 erased; 255 available; 0 copies; 0 roots; 0 Copying; 0 Copied; 0 Flushed; 0 Referenced; max_cluster = 10943
[2018-12-10T15:27:45.018] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: file contains cluster 0 .. 11198 will enlarge file to 0 .. 11710
[2018-12-10T15:27:45.018] [info] [snapcraft-gimp] hyperkit: [INFO] resize: adding available clusters (Node ((x 11199) (y 11710) (l Empty) (r Empty) (h 1) (cardinal 512)))
[2018-12-10T15:27:45.084] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: 11452 used; 0 junk; 0 erased; 255 available; 0 copies; 0 roots; 0 Copying; 0 Copied; 0 Flushed; 0 Referenced; max_cluster = 11455
[2018-12-10T15:27:45.084] [info] [snapcraft-gimp] hyperkit: [INFO] Allocator: file contains cluster 0 .. 11710 will enlarge file to 0 .. 12222
[2018-12-10T15:27:45.084] [info] [snapcraft-gimp] hyperkit: [INFO] resize: adding available clusters (Node ((x 11711) (y 12222) (l Empty) (r Empty) (h 1) (cardinal 512)))
Prior to running multipass launch snapcraft-gimp I got output from multipass list that reported the state as UNKNOWN. (I don't have that still available in my terminal backlog to copy verbatim.)
I can't reproduce the issue with further attempts. It might be worth mentioning that the failed attempt was the very first attempt after installation of multipass and snapcraft on a virgin macOS installation.
Also worth noting: I installed both multipass and snapcraft through Homebrew:
brew cask install multipass
brew install snapcraft
I hate to hijack this thread, but I'm also experiencing this exact same issue and I've been struggling to find any info on the problem. However, I AM using a VPN. Is there a workaround in this case? Or should I just turn off my VPN?
@ryanjyoder we have heard of VPN messing with the VM networking, what kind of VPN are you using?
I can confirm turning off my VPN fixed this problem for me. I'm using OpenVPN with Tunnelblick as my client. However, I do frequently need to use a VPN, so it would be great to get them working together.
IIRC I've been able to reproduce this when the VPN I was connecting to was meant to be the default gateway (i.e. all traffic going through the VPN). I believe Tunnelblick makes it so no traffic can then go direct to the internet, which means traffic from the instances gets dropped. Ideally it would instead go through the VPN, but I'm not sure Multipass itself should interfere with that. Maybe there's some configuration in Tunnelblick one could use to route the instance traffic through the VPN. Ideas welcome.
Not sure if this is a workaround or a solution, but I was able to make this work by NAT-ing the multipass bridge with my VPN interface. I added nat on utun1 from bridge100:network to any -> (utun1) to the file /etc/pf.conf. Then I reloaded the file: sudo pfctl -f /etc/pf.conf.
(Just built my first snap on mac 😃)
@ryanjyoder awesome, thanks for that! We'll add that to our notes.
I'll just note that it's important to put the above line in the correct place in the file (after the other nat line), otherwise it will not be accepted.
Also, in my case, it was utun2.
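For anyone trying this later, here is roughly what the whole workaround looks like end to end. This is a sketch of the steps described above, not an official fix; the tunnel interface name (utun1, utun2, ipsec0, ...) and the bridge name vary per machine, so check yours first:

# Find your VPN tunnel interface and the Multipass bridge
ifconfig | grep -E '^(utun|ipsec|bridge)'

# Edit /etc/pf.conf and add the rule in the translation section,
# i.e. right after the existing nat-anchor line, since pf requires
# translation rules to come before filter rules:
#   nat on utun1 from bridge100:network to any -> (utun1)

# Check the syntax without loading, then load the rules
sudo pfctl -n -f /etc/pf.conf
sudo pfctl -f /etc/pf.conf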
Closing this due to inactivity.
@townsend2010 from what I can tell, the lack of activity was from your end, not ours!
Hey @diddledani,
Well, what I mean is that we (Multipass) have no control over modifying VPN settings in order to make this work. If it's better to leave this open so others can discover ways to work around this, I'm fine with that. We've documented all we can in order for folks to deal with this.
I thought that my contributions indicated that this isn't specific to VPN usage; I explicitly stated:
I don't have any VPN or Little Snitch set up, and the macOS firewall is disabled - the default state from macOS installation-time.
Also, the OP, @evandandrea, indicated of their system:
No VPN, no fancy networking.
@diddledani,
Ok, this particular bug has really lost context and is all over the place, with people saying some things helped them while others didn't.
Considering your last contribution on this bug thread was from over three years ago, are you saying that you still have this issue today?
I will reopen this, but I think a more focused bug should be opened if anyone in this thread still has this problem.
TBH I haven't looked at it in a while, so I don't know; I was waiting for input from the powers that be as to what to try... I've also replaced the system I was using at the time, so I don't have the same setup any more. Plus, my new Mac is M1-based, so the architecture might be a differentiating factor should it be working out of the box for me now...
I'll close this after all in favour of this issue, the only currently known problem when running on macOS: #2387
We're struggling to find a reliable way to reproduce there (or what fixes it, for that matter), so if you encounter this, please report back there.
Thanks, I'll try to see if an out-of-the-box install on my M1 Mac fails when I get some time.
I've been stuck on this issue for a long time. What I noticed is that when I was connected to the VPN, the TLS handshake sometimes timed out on some specific sites. After researching for hours, I found a solution.
First I tried adding nat on utun0 from bridge100:network to any -> (utun0) to /etc/pf.conf, but it made no difference. That was because I am using an IKEv2 (IPsec) VPN, so my interface is not called utun0 but ipsec0. So I added nat on ipsec0 from bridge100:network to any -> (ipsec0) to /etc/pf.conf, but still got the TLS handshake timeout.
Then I saw some articles on the internet about the MTU of a network interface and checked my ipsec0 MTU using ifconfig | grep mtu. I noticed my ipsec0 interface has an MTU of 1280 but my Ubuntu enp0s1 interface has an MTU of 1500. So I changed the enp0s1 MTU to 1280 (sudo ip link set dev enp0s1 mtu 1280) and yep, that was the problem. Now the TLS handshake connections are working.
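Spelling that check-and-fix sequence out for others (a sketch based on the comment above; ipsec0 on the host, enp0s1 in the guest, and the instance name my-instance are placeholders from that setup, and the ip link change does not survive a guest reboot):

# On the macOS host: find the VPN tunnel's MTU
ifconfig | grep mtu

# Inside the instance: check the guest NIC's MTU
multipass exec my-instance -- ip link show enp0s1

# Lower the guest MTU to match the tunnel (non-persistent)
multipass exec my-instance -- sudo ip link set dev enp0s1 mtu 1280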
I got here from this guide after encountering a hang during TLS negotiation. The earlier proposed fix - nat on utun1 from bridge100:network to any -> (utun1) in /etc/pf.conf - didn't work:
no IP address found for bridge100:network
/etc/pf.conf:28: could not parse host specification
pfctl: Syntax error in config file: pf rules not loaded
(This is on Mac, which might be the reason for the differing syntax? Though, given that this solution was linked from a specifically Mac-centric issue, I might be misunderstanding something...)
Then, following @SherinBloemendaal's advice, I noted that my VPN interface (utun3) has an MTU of 1420 but my Multipass interface (bridge101) has 1500. I tried changing the Multipass interface's MTU to match the VPN, but that failed:
$ sudo ip link set dev bridge101 mtu 1420
sudo: ip: command not found
Again, probably a Mac-specific issue. I tried two apparent ways of setting MTU on a Mac, neither of which worked:
$ sudo ifconfig bridge101 mtu 1420
ifconfig: ioctl (set mtu): Invalid argument
$ sudo networksetup -setMTU bridge101 1420
Could not find hardware port or device named bridge101.
** Error: The parameters were not valid.
I could try instead changing the VPN's MTU to match the Multipass interface (1500 seems a much more common MTU value), but I'm currently away from home and I don't want to risk breaking my VPN while I can't fix it.
EDIT: I set up a second VPN to my home network, used that to update my first VPN to have an MTU of 1500, and now everything works as expected. Hooray for redundancy! :)
Then i saw some articles on the internet about the MTU of a network interface and checked my ipsec0 MTU using (ifconfig | grep mtu) and noticed my ipsec0 interface has a MTU of 1280 but my ubuntu enp0s1 interface has a MTU of 1500. So i changed the enp0s1 MTU to 1280 (sudo ip link set dev enp0s1 mtu 1280) and jep, that was the problem. Now the TLS handshake connections are working.
Yep, same thing for me.
I found these resources helpful: https://blog.cloudflare.com/path-mtu-discovery-in-practice/
I found these commands helpful to troubleshoot and identify the bounds for the MTU:
ping -M do -s 1400 github.com # not working
ping -M do -s 1320 github.com # working
ping -M do -s 1200 github.com # working
It had to be a little bit lower than the MTU on the host -- I'm wondering if it was related to how my VPN was encrypting traffic.
Network packets sent over a VPN tunnel are encrypted and then encapsulated in an outer packet so that they can be routed. Cloud VPN tunnels use IPsec and ESP for encryption and encapsulation. Because the encapsulated inner packet must itself fit within the MTU of the outer packet, its MTU must be smaller.
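As a rough sanity check on those probe results (assuming the ~1420-byte tunnel MTU reported earlier in this thread; the exact encapsulation overhead depends on the cipher and on NAT traversal):

# ping -s N sends an IP packet of N + 28 bytes
# (20-byte IPv4 header + 8-byte ICMP header)
1400 + 28 = 1428   # exceeds a 1420-byte tunnel MTU -> dropped ("not working")
1320 + 28 = 1348   # fits within 1420 -> delivered ("working")
# and a ~1420-byte tunnel MTU is itself the host link MTU minus
# the VPN encapsulation overhead, e.g. 1500 - 80 = 1420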
Since I'm trying to distribute this VM to others, I went with the following fix instead:
echo 1 > /proc/sys/net/ipv4/tcp_mtu_probing
echo 1024 > /proc/sys/net/ipv4/tcp_base_mss
Per Cloudflare, this enables TCP-level MTU probing so connections detect an appropriate packet size for the path on their own.
Detailed here: https://gist.github.com/evandandrea/15d17529679ca0fd983dcb48d3f3198c
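If you are baking this into an image for others, as above, one way to make those settings survive reboots is a sysctl drop-in instead of raw /proc writes. A sketch, assuming a systemd-based Ubuntu guest (the file name is arbitrary):

# Inside the instance: persist the settings across reboots
printf 'net.ipv4.tcp_mtu_probing = 1\nnet.ipv4.tcp_base_mss = 1024\n' | sudo tee /etc/sysctl.d/99-mtu-probing.conf

# Apply immediately without rebooting
sudo sysctl --system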