Open: jrajahalme opened this issue 1 year ago
Hi @jrajahalme!
Thanks for the report on this! We won't be addressing this issue for the 1.11 release as this feature is still experimental. That said, we will investigate this further and hopefully have it fixed for the 1.12 release.
Not sure, but this may be relevant: https://github.com/qemu/qemu/commit/f5265c8f917ea8c71a30e549b7e3017c1038db63
Using Docker bind mounts on top of Multipass native-mount directories is definitely ~10x slower than it should be in the worst case. Disabling native mounts gives a ~10x speed improvement.
```
% multipass --version
multipass   1.12.0+mac
multipassd  1.12.0+mac
```
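For context, "disabling native mounts" here just means remounting with the default (classic) mount type instead of `-t native`; a minimal sketch, with placeholder instance and path names:

```
# Replace a native mount with a default (classic) mount.
# 'docker-vm' and ~/project are placeholders, not from this report.
multipass umount docker-vm:/home/ubuntu/project
multipass mount ~/project docker-vm:/home/ubuntu/project
```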
Hi @m-emelchenkov!
Would you mind sharing a basic setup that shows this poor behavior so we can have another reproduction case in order to chase down this issue? Thanks!
Thank you! Sure! I would like to show you my setup (a straightforward setup script and a few manual commands), but I don't want to share it in public. Could you please give me your email so I can send it to you? Or, if you'd rather not share your email, please mail me at m [at] emelchenkov [dot] pro and I'll reply.
Hi,
We are also experiencing the same performance issue between classic and native mounts. We've managed to simplify the test. Although the difference is not as huge as with our app, it still shows in an easily reproducible way that performance is worse with a native mount.
**To Reproduce**

```
multipass launch 20.04 --name perf-classic --mem 1G --disk 5G --cpus 1
multipass launch 20.04 --name perf-native --mem 1G --disk 5G --cpus 1
multipass mount -u 501:1000 -g 20:1000 ~/classic perf-classic:/home/ubuntu/mount
multipass mount -u 501:1000 -g 20:1000 -t native ~/native perf-native:/home/ubuntu/mount
```

Then, inside each instance:

```
sudo apt-get install iozone3
iozone -t1 -i0 -i2 -r1k -s1g /home/ubuntu/mount/
```
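For readers unfamiliar with iozone, a brief annotation of the benchmark invocation above (flag meanings per the iozone manual):

```
# -t1   throughput mode, one thread
# -i0   test 0: sequential write / rewrite
# -i2   test 2: random read / random write
# -r1k  1 KiB record (request) size
# -s1g  1 GiB file size per thread
iozone -t1 -i0 -i2 -r1k -s1g /home/ubuntu/mount/
```

The tiny 1 KiB random requests are where per-operation mount overhead dominates, which is likely why this run exposes the regression while coarser-grained tests did not.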
**Expected behavior**
Performance should be better in native mode than in classic mode.
**Additional info**

```
multipass 1.14.0+mac
multipassd 1.14.0+mac
```

`multipass get local.driver`: qemu

**Additional context**
We tried different ways to surface the performance problem (fio, a PHP script dedicated to testing I/O, ...) but only iozone revealed the problem as we see it with our app.
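If it helps to reproduce with fio after all, here is a hedged sketch that approximates the iozone pattern above (1 KiB random I/O against a 1 GiB file); the job parameters are assumptions, not settings taken from this report:

```
# Hypothetical fio job approximating iozone's -i2 -r1k -s1g pattern;
# run inside the instance against the mounted directory.
fio --name=mount-randrw --directory=/home/ubuntu/mount \
    --rw=randrw --bs=1k --size=1g --numjobs=1 \
    --ioengine=psync --direct=0 --end_fsync=1
```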
Thanks for reporting @petitj, we need to bump this.
**Describe the bug**
1.11 RC introduces support for native mounts. Testing this out by compiling a large Go project shows that the native mounts are very slow, and they seem to be leaking files, as `Too many open files` errors are seen in the end.

**To Reproduce**
I did not try to simplify the repro steps, as the errors look like they relate to leaking files, which likely would not be seen on a small project. To reproduce:

1. Clone `cilium/packer-ci-build` into a directory named `cilium`, where you also have the `cilium` main repo. After cloning you'll have:
   - `cilium/cilium` (cloned from https://github.com/cilium/cilium.git)
   - `cilium/packer-ci-build` (cloned from https://github.com/cilium/packer-ci-build.git)
2. Enter `cilium/packer-ci-build` and check out the multipass 1.11 branch: `dev`
3. Run `make multipass` using a `native` mount (`NFS` is the default mount type). This takes ~50 minutes and produces the `Too many open files` errors described above.
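A quick way to watch for the suspected descriptor leak while the build runs; a sketch assuming the instance is named `cilium` (a placeholder):

```
# Poll kernel-wide file handle usage inside the instance every 5 seconds.
# /proc/sys/fs/file-nr prints: allocated handles, free handles, system max.
multipass exec cilium -- watch -n 5 cat /proc/sys/fs/file-nr
```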
**Expected behavior**
Be faster than `fuse.sshfs` or `NFS` mounts, and not produce errors.

For reference:
- 1.11 RC with NFS mount (`MOUNT=NFS` in front of `make multipass` above):
- 1.11 RC with fuse.sshfs mount (`MOUNT=default` in front of `make multipass` above):
- 1.11 RC with native mount (`MOUNT=native` in front of `make multipass` above):

**Logs**
multipassd.log
**Additional info**
- `multipass version`:
- `multipass info --all`:
- `multipass get local.driver`:

**Additional context**
A shorter test that does not trigger the `Too many open files` error, but is indicative of the performance, is to run the `make build` command in the Cilium builder docker image (see the sketch below):
- Docker Desktop 4.15.0 with VirtioFS file sharing:
- Multipass 1.11 RC with NFS mount:
- Multipass 1.11 RC with native mount:
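For concreteness, a hypothetical sketch of that shorter `make build` test; the builder image reference and the GOPATH-style paths are assumptions, not taken from this report:

```
# Hypothetical: run 'make build' inside the Cilium builder image over the
# mounted source tree. Image tag and mount paths are assumptions.
docker run --rm \
    -v "$PWD/cilium/cilium:/go/src/github.com/cilium/cilium" \
    -w /go/src/github.com/cilium/cilium \
    quay.io/cilium/cilium-builder:latest make build
```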