containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

Support native source folder for volume mount in remote model #8016

Closed jeffwubj closed 2 years ago

jeffwubj commented 3 years ago

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind feature

Description

Currently, it looks like in podman's remote-client case a bind volume will use source folders on the podman server's host. Could we support (or provide some configuration to choose) using source folders on the podman client's host instead?

The podman client and podman server may run on different hosts, or one may run on the host while the other runs in a VM. Having the ability to mount folders from the client side might open up more use cases...
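To illustrate the current behaviour (paths and image are only examples):

$ podman --remote run --rm -v /home/me/src:/src alpine ls /src
# today the source path is resolved on the podman *server*, so this lists the
# server's /home/me/src, not the directory on the client machine that ran the command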

afbjorklund commented 3 years ago

I think this also happens when using docker to connect to a remote server (rather than using the Docker Desktop product)? In Docker Toolbox (docker-machine), they worked around this issue by sharing the entire home folder (!) using vboxsf...

You should be able to create network volumes using podman volume, but I am a little uncertain about the syntax to use. It takes a --driver parameter, whose documentation only mentions the default "local" driver that uses regular directories on the host.
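For what it's worth, the local driver can at least be pointed at a network filesystem such as NFS (syntax hedged from the podman-volume-create docs; the address and export path are placeholders):

$ podman volume create --opt type=nfs --opt o=addr=192.168.1.2,rw --opt device=:/exports/project projvol
$ podman run --rm -v projvol:/work alpine ls /work
# 192.168.1.2:/exports/project is a placeholder NFS export; to share client-side
# folders this way, the client host would itself have to export them over NFS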

github-actions[bot] commented 3 years ago

A friendly reminder that this issue had no activity for 30 days.

rhatdan commented 3 years ago

I am not sure how we would make this work, other than to export the current directory to the server running podman via NFS, Samba, or some other network protocol, and then volume mount that directory into the container.

I am not sure if sshfs has this type of ability, to share your homedir with a remote machine?

afbjorklund commented 3 years ago

I am not sure if sshfs has this type of ability, to share your homedir with a remote machine?

The way we used sshfs in docker-machine, was to share the remote machine with a local mount.

https://docs.docker.com/machine/reference/mount/

That is, the mount was going in the other direction. Used sftp on the remote and fuse on the local.
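For reference, the docker-machine mount usage looked roughly like this (from memory, so treat it as a sketch; machine name and paths are examples):

$ mkdir foo
$ docker-machine mount dev:/home/docker/foo foo
$ touch foo/bar
$ docker-machine ssh dev ls /home/docker/foo
bar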

Other than that, it was just VirtualBox shared folders...

nlfiedler commented 3 years ago

May I suggest that the issue is that podman is validating the mount points on the local side, rather than on the remote side? I'm running on macOS and deploying to a Linux host, on which the mount points exist. Maybe if podman client were to not validate the paths when connecting to a remote service, then it would just work?

mheon commented 3 years ago

That sounds like a separate issue? This one is about emulating a Docker behavior where folders from an OS X host are able to be mounted into containers on the Linux VM running containers. What you're describing is definitely separate (but definitely a bug, and I encourage you to file an issue about it).

On Tue, Nov 24, 2020 at 9:57 PM Nathan Fiedler notifications@github.com wrote:

May I suggest that the issue is that podman is validating the mount points on the local side, rather than on the remote side? I'm running on macOS and deploying to a Linux host, on which the mount points exist. Maybe if podman client were to not validate the paths when connecting to a remote service, then it would just work?


nlfiedler commented 3 years ago

Ah, you're right, I was seeing what I wanted to see. I've filed another issue, thanks.

github-actions[bot] commented 3 years ago

A friendly reminder that this issue had no activity for 30 days.

rhatdan commented 3 years ago

This issue is not stale, but no one is currently working on it.

github-actions[bot] commented 3 years ago

A friendly reminder that this issue had no activity for 30 days.

rhatdan commented 3 years ago

@baude This would be your stretch goal for Podman on a Mac.

github-actions[bot] commented 3 years ago

A friendly reminder that this issue had no activity for 30 days.

github-actions[bot] commented 3 years ago

A friendly reminder that this issue had no activity for 30 days.

rhatdan commented 3 years ago

Podman machine work is ongoing, but this is still a stretch goal. It would need someone from the community to start working on it.

github-actions[bot] commented 3 years ago

A friendly reminder that this issue had no activity for 30 days.

dustymabe commented 3 years ago

I do have some experience with sharing folders via SSHFS from my work on https://github.com/dustymabe/vagrant-sshfs. I would be willing to discuss strategy if anyone has questions or ideas and wants to run them by me.

rhatdan commented 3 years ago

Sure a PR would be great. :^)

github-actions[bot] commented 3 years ago

A friendly reminder that this issue had no activity for 30 days.

gofabian commented 3 years ago

lima-vm has something like that. They use "reverse sshfs" and call it "Automatic file sharing". Maybe that helps.

https://github.com/lima-vm/lima

zhouhaibing089 commented 3 years ago

According to https://github.com/boot2podman/machine#accessing-files, it seems this is possible already. Looks like it is about porting podman-machine mount to podman machine mount?

zhouhaibing089 commented 3 years ago

I was trying some manual command like below:

$ sshfs -o IdentityFile=${HOME}/.ssh/podman-machine-default -p 50186 core@localhost:/home/core/podman /Users/${USER}/podman

Note, the default VM has a read-only file-system on /, so you can't create a similar directory structure in the VM (in order to replicate what Docker Desktop does):

$ podman machine ssh -- sudo mkdir -p /Users/<username> # Operation not permitted
jimbali commented 3 years ago

This is an absolute necessity if Podman Machine is ever to become a usable replacement for Docker for Mac, which I hope it will be. Unfortunately I have no idea how to help to make it happen. 😣

gregorsoll commented 3 years ago

I'm really wondering why this is a stretch goal. Docker is changing their license at the end of this year, and currently there is no usable replacement for Docker on Mac! podman build is not working ..... podman -v (binding a local volume) is not working, so ... I can't build anything .... just go back to docker ... that's sad, I'm really hoping to find a replacement for docker

Jean-Daniel commented 3 years ago

podman build is not working .....

podman build works at least as well as docker build, as both commands are just clients that defer building to a Linux machine. As long as you have a properly set up VM, you can use podman build as a docker build replacement.

I'm actually building most of my containers using podman build on macOS.

What Docker Desktop provides is a transparent and easy way to set up a Linux VM, which is what podman machine should provide too.
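A minimal sketch of that flow on macOS (the image name is just an example):

$ podman machine init
$ podman machine start
$ podman build -t myimage .
# the build context is sent over the API connection to the VM, which is why
# building works even though bind-mounting local source folders (this issue) does not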

afbjorklund commented 3 years ago

@zhouhaibing089

Looks like it is about porting podman-machine mount as podman machine mount?

Probably obvious, but "podman-machine mount" was copied from "docker-machine mount"

So you can find some better documentation, as well as the original PR and discussion, there:

https://github.com/docker/machine

Somewhere around 2017, in time ?

shanemcd commented 3 years ago

I was hoping to port the AWX development environment away from docker-compose to podman play kube, and ran into this issue. This is definitely a blocker for us, as most of our UI team uses Macs.

pkmoore commented 3 years ago

I'm also looking to move away from Docker on Mac. Getting my image and such built works great. I just need some way to reach a local directory from within my container. What is the preferred workaround until something more official is released?

afbjorklund commented 3 years ago

What is the preferred workaround until something more official is released?

Possibly https://virtio-fs.gitlab.io (QEMU 5.0)

CoreOS config:

CONFIG_VIRTIO=y
CONFIG_VIRTIO_FS=m
CONFIG_DAX=y
CONFIG_FS_DAX=y
CONFIG_DAX_DRIVER=y
CONFIG_ZONE_DEVICE=y

https://virtio-fs.gitlab.io/howto-qemu.html

Or the legacy variant, with the 9p network protocol:

CoreOS config:

CONFIG_NET_9P=m
CONFIG_NET_9P_VIRTIO=m
CONFIG_9P_FS=m
CONFIG_9P_FS_POSIX_ACL=y
CONFIG_PCI=y
CONFIG_VIRTIO_PCI=y

https://wiki.qemu.org/Documentation/9psetup


Something like:

$ mkdir /tmp/9p
$ touch /tmp/9p/foo
$ touch /tmp/9p/bar
$ ./bin/podman machine ssh
Fedora CoreOS 34.20210821.1.1
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/c/server/coreos/

[core@localhost ~]$ sudo mount -t 9p -o trans=virtio podman /mnt -oversion=9p2000.L
[core@localhost ~]$ ls /mnt/
bar  foo
[core@localhost ~]$ exit
logout
Connection to localhost closed.
$ ./bin/podman-remote run -v /mnt:/tmp busybox ls /tmp
bar
foo

With options:

--- a/pkg/machine/qemu/machine.go
+++ b/pkg/machine/qemu/machine.go
@@ -164,6 +164,9 @@ func (v *MachineVM) Init(opts machine.InitOptions) error {
        // Add arch specific options including image location
        v.CmdLine = append(v.CmdLine, v.addArchOptions()...)

+       add9pOptions := []string{ "-virtfs", "local,path=/tmp/9p,mount_tag=podman,security_model=mapped-xattr"}
+       v.CmdLine = append(v.CmdLine, add9pOptions...)
+
        // Add location of bootable image
        v.CmdLine = append(v.CmdLine, "-drive", "if=virtio,file="+v.ImagePath)
        // This kind of stinks but no other way around this r/n

If the -virtfs shorthand form is used, then "virtio-9p-pci" is implied.

Seems to need some systemd workarounds to start at boot: https://bugzilla.redhat.com/show_bug.cgi?id=1184122#c1

But not sure how that works with CoreOS and Ignition, so leaving it to someone else. It needs to be done in start...
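One possible shape for that, as an untested sketch: a systemd mount unit inside the VM for the 9p tag (mount units have to be named after the mount point, so /mnt becomes mnt.mount):

$ sudo tee /etc/systemd/system/mnt.mount <<'EOF'
[Unit]
Description=9p share from the host (mount_tag=podman)

[Mount]
What=podman
Where=/mnt
Type=9p
Options=trans=virtio,version=9p2000.L

[Install]
WantedBy=multi-user.target
EOF
$ sudo systemctl enable --now mnt.mount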

afbjorklund commented 3 years ago

There is some work ongoing in https://github.com/lima-vm/lima/issues/20

VirtioFS. Looking very cool, seems to have really good performance, but works only on Linux hosts. It is very optimised for using in virtual machines, it even uses DAX (direct access) for files, so there's no need to copy files over network, they're just in the shared RAM between VM and host.

VirtFS (9P). I've tried to use it, but it's incredible slow. Really. Using just git status in shared directory with middle size project takes at least half a minute. I would rather just place files in VM and access them via some remote file access protocol and use vscode with remote access (sad, but they're proprietary).

Currently they seem to be back on SMB (not even NFS)


This is why I preferred the sshfs approach: the files would stay on the actual Linux host and be mounted on the Mac (or Windows)

Instead of "pretending" that the local files are magically available on the remote OS, and then get complaints that they aren't...

I have no idea how it works in WSL (2), but it seems* to be getting the same complaints (also much worse than WSL 1 was):

* https://vxlabs.com/2019/12/06/wsl2-io-measurements/ - pay no attention to the man (virtual machine?) behind the curtain

anthr76 commented 3 years ago

Somewhat relevant/helpful mailing list thread https://www.mail-archive.com/virtio-fs@redhat.com/msg02987.html

afbjorklund commented 3 years ago

Note, the default VM has a read-only file-system on /, so you can't create a similar directory structure in the VM (in order to replicate what Docker Desktop does):

Some directories, like /mnt, are symlinked over to /var, which makes them read-write:

[core@localhost ~]$ df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda4       9.5G  1.6G  8.0G  17% /var

But this rules out the other locations such as /hosthome, /Users and /c/Users.

afbjorklund commented 2 years ago

There is no rule that says that the mount location on the VM has to be the exact same as on the host, though...

So one could mount /Users under /mnt/Users, and then "translate" -v /Users:/Users into -v /mnt/Users:/Users

Then it would work even if / is write-protected (ro).

The bind-mount prefix needs to be automatic, though.
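Spelled out by hand (illustrative paths), the rewrite would be:

# what the user types:
$ podman --remote run --rm -v /Users/me/project:/work alpine ls /work
# what the client would actually send after the automatic prefix translation:
$ podman --remote run --rm -v /mnt/Users/me/project:/work alpine ls /work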


But there should probably be some kind of toggle for it; sometimes you do want a local VM mount (e.g. for performance)

I think Docker has some kind of hard-coded list of what is local and what is remote (guessing from the osxfs docs):

By default, you can share files in /Users/, /Volumes/, /private/, and /tmp directly. To add or remove directory trees that are exported to Docker, use the File sharing tab in Docker preferences whale menu -> Preferences -> File sharing. (See Preferences.)

All other paths used in -v bind mounts are sourced from the Moby Linux VM running the Docker containers, so arguments such as -v /var/run/docker.sock:/var/run/docker.sock should work as expected. If a macOS path is not shared and does not exist in the VM, an attempt to bind mount it fails rather than create it in the VM. Paths that already exist in the VM and contain files are reserved by Docker and cannot be exported from macOS.

Note: osxfs has been replaced by another remote filesystem

JayDoubleu commented 2 years ago

I believe the ability to 1-to-1 mount might be a deal breaker for some.

Is using ignition https://github.com/containers/podman/blob/b07e735661ccebb529d2719516809ce602fd56da/pkg/machine/ignition.go#L209 to create something like /var/podman-desktop/darwin, and then creating a link from /Users to it, a viable option?

This way 9p and virtiofs or even sshfs could mount to /var/podman-desktop/darwin/Users

/var is read-write on coreos by default.
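Expressed as plain shell, the idea would be roughly this (hypothetical paths; the symlink part is what would need ignition at provisioning time, since / is read-only at runtime):

$ sudo mkdir -p /var/podman-desktop/darwin/Users
$ sudo ln -sfn /var/podman-desktop/darwin/Users /Users         # not possible at runtime with a read-only /
$ sshfs <user>@<host>:/Users /var/podman-desktop/darwin/Users  # or a 9p/virtiofs mount to the same place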

afbjorklund commented 2 years ago

The translation needs to be done by podman automatically when running in a read-only VM (assuming there is no way to change that; if there is, then the translation might not be needed).

There might be a problem with /home, that was why it was using /hosthome before.

But in reality I think they could still co-exist, unless your user name happens to be "core"

Otherwise it would break the existing applications that assume Docker Desktop behaviour for -v. Unfortunately this also causes a major performance issue, due to being based on an illusion...

Pay no attention to that man behind the curtain!

The Great Oz has spoken!

afbjorklund commented 2 years ago

If it is too much work to support virtfs on darwin, then one could start the file server some other way.

Either through reverse sshfs, or by still using 9p (like minikube) or even nfs (like docker-machine-nfs)

JayDoubleu commented 2 years ago

A simple oneshot systemd unit (via ignition) that execs chattr -i / should make CoreOS permanently read-write. Whether the CoreOS folks would be happy about this violation of the health and safety code, I'm not sure :)

sed -i 's/\/usr\/bin\/echo Ready/\/usr\/bin\/chattr -i \/ \&\& \/usr\/bin\/echo Ready/g' pkg/machine/ignition.go ?

A layered deployment of an RPM package that creates those directories could also be a solution, I guess; at least it would be reversible with rollback.

rhatdan commented 2 years ago

I would not want that, since we want to be able to update the CoreOS VM over time with new CoreOS builds. Not sure we should be changing core features.

@dustymabe @cgwalters Thoughts.

cgwalters commented 2 years ago

See https://github.com/coreos/rpm-ostree/issues/337

cdoern commented 2 years ago

Does this issue relate to containers/podman#11423? A related issue was closed and linked to this one instead.

mheon commented 2 years ago

Yes, it does. I'd like to hold off merging PRs related to this until we can have an architecture discussion at the next cabal, though.

We have a long-term goal of proper integration into the client (podman run -v /tmp:/tmp from an OS X host should mount the host's /tmp into the container automatically, virtual machine or no). However, this is going to take significant time to get wired in, so I think we need a solution to hold us until that can be written; the exact details of that, and what exactly we should use as a backend, are very much up for discussion.

afbjorklund commented 2 years ago

When using mount/sshfs there are no hidden mounts, so /tmp would always be on the VM; you use a special path on the host instead...

But you can use something like sshocker/reverse sshfs, if you want the same model that Docker Desktop uses (I call it "Wizard of Oz mode")


EDIT: Looks like systemd puts some runfiles into /tmp/systemd-private*; it would be a bit odd to send those off over the network

https://www.freedesktop.org/software/systemd/man/systemd-tmpfiles.html

/tmp/systemd-private-162abf88fb8f44c38227f2fd68a65d45-chronyd.service-EMV4Zd
/tmp/systemd-private-162abf88fb8f44c38227f2fd68a65d45-dbus-broker.service-XRQl3N
/tmp/systemd-private-162abf88fb8f44c38227f2fd68a65d45-systemd-logind.service-X7xmtW
/tmp/systemd-private-162abf88fb8f44c38227f2fd68a65d45-systemd-resolved.service-HvSFIa

Maybe it needs a setting like in NFS, where you have one local /tmp and /home and one shared /tmp and /home ?

/home/core
/tmp/systemd-private-*

/mnt/home/anders
/mnt/tmp/whatever
afbjorklund commented 2 years ago

I was able to verify that virtfs works on darwin, once qemu has been patched to support it (just like with hvf...)

Personally I think it's a better option than sshfs, but you might also want to wait for virtio-fs (vhost-user device)

JayDoubleu commented 2 years ago

virtio-fs on darwin is a big deal; AFAIK it's the only thing stopping kata-runtime from working on OSX. I've been running it on KVM for some time and it beats everything else performance-wise. The question is... is it going to be stable on darwin anytime soon?

afbjorklund commented 2 years ago

The question is... is it going to be stable on darwin anytime soon?

It doesn't seem like gvproxy and virtio-fs are very stable on linux either.

But I remember there was a lot of talk about virtio/vsock, like 4 years ago ?

https://github.com/dhiltgen/docker-machine-kvm/issues/2#issuecomment-312418264

So we were just going to use 9p and sshfs "temporarily", while it got sorted...

JayDoubleu commented 2 years ago

Sounds like a reasonable approach, especially given that the VM itself has its own overhead. Is the performance of 9p or sshfs really going to be that noticeable?

afbjorklund commented 2 years ago

Is the performance of 9p or sshfs really going to be that noticeable?

It's a big problem for Docker, but in the end it needs to be tested and benchmarked:

Those old tests were like 10 years ago, and then one can add virtio-fs and grpc etc ?

ols2010 https://en.wikipedia.org/wiki/9P_(protocol)#cite_note-8

For Machine it was not an issue since it never pretended that the remote was local... So when accessing the files over sshfs (i.e. sftp), it was just natural that it was slower.

One could scp the workspace to the remote, and then rsync the changes. Like Mutagen ?
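A rough sketch of that copy-then-sync workflow (host name and paths are made up):

$ scp -r ./project core@remote-vm:/home/core/project                 # one-time copy of the workspace
$ rsync -az --delete ./project/ core@remote-vm:/home/core/project/   # then sync only the changes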

The performance of the VirtualBox Shared Folders seemed "enough" for casual users. And it allowed running simple setups from your home folder, with some security issues.

JayDoubleu commented 2 years ago

I got curious, so I ran some tests for some of the options mentioned.

Looks like 9pshare is still a winner performance-wise; SSHFS would probably have gotten better results, but it has to encrypt, which is noticeable in the VM's CPU usage during the test.

PS. Host speed might be slower because of a different version of fio, or the fact that the volume was mounted with kpartx (the same LVM volume was shared inside the VM for comparison).

inside vm:
    read:  IOPS=67.7k, BW=264MiB/s (277MB/s)(616MiB/2329msec)
    write: IOPS=157k,  BW=615MiB/s (645MB/s)(1433MiB/2329msec)

host:
    read:  IOPS=53.1k, BW=207MiB/s (217MB/s)(616MiB/2969msec)
    write: IOPS=124k,  BW=482MiB/s (506MB/s)(1433MiB/2969msec)

virtiofs:
    read:  IOPS=30.1k, BW=117MiB/s (123MB/s)(616MiB/5239msec)
    write: IOPS=69.0k, BW=273MiB/s (287MB/s)(1433MiB/5239msec)

9pshare (default mount params):
    read:  IOPS=8544,  BW=33.4MiB/s (34.0MB/s)(616MiB/18440msec)
    write: IOPS=19.9k, BW=77.7MiB/s (81.5MB/s)(1433MiB/18440msec)

9pshare (msize=524288 and cache=none):
    read:  IOPS=8892,  BW=34.7MiB/s (36.4MB/s)(616MiB/17719msec)
    write: IOPS=20.7k, BW=80.8MiB/s (84.8MB/s)(1433MiB/17719msec)

smb:
    read:  IOPS=7853,  BW=30.7MiB/s (32.2MB/s)(616MiB/20063msec)
    write: IOPS=18.3k, BW=71.4MiB/s (74.9MB/s)(1433MiB/20063msec)

sshfs:
    read:  IOPS=4643,  BW=18.1MiB/s (19.0MB/s)(616MiB/33935msec)
    write: IOPS=10.8k, BW=42.2MiB/s (44.3MB/s)(1433MiB/33935msec)
Details: fio args: `fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=bw_test --bs=4k --iodepth=64 --size=2G --readwrite=randrw --rwmixread=30`

Full fio output was captured for each case (inside VM, host, QEMU's builtin SMB, SSHFS, virtiofs, 9pshare with default mount params, and 9pshare mounted with msize=524288 and cache=none); the read/write figures listed above are taken from those runs.

jeesmon commented 2 years ago

I'm using this workaround for podman on Mac:

Setup

podman machine init
podman machine start
podman machine --log-level=debug ssh -- exit 2>&1 | grep Executing
# copy ssh command from above output
# time="2021-09-15T09:02:09-04:00" level=debug msg="Executing: ssh [-i /Users/<user>/.ssh/podman-machine-default -p 49671 core@localhost -o UserKnownHostsFile /dev/null -o StrictHostKeyChecking no exit]\n"

ssh -i /Users/jjacob/.ssh/podman-machine-default -R 10000:$(hostname):22 -p 49671 core@localhost
ssh-keygen -t rsa
ssh-copy-id -p 10000 <user>@127.0.0.1
sudo mkdir -p /mnt/Users/<user>/Documents/workspace
sudo chown core:core /mnt/Users/<user>/Documents/workspace
sshfs -p 10000 <user>@127.0.0.1:/Users/<user>/Documents/workspace /mnt/Users/<user>/Documents/workspace
df -k | grep mnt
# leave terminal running

Run container with volume mount

cd /Users/<user>/Documents/workspace/nginx
podman run -d --rm --name nginx -v /mnt$(pwd):/source nginx:latest
podman exec -it nginx sh
cd /source
ls -l
exit
podman stop nginx
afbjorklund commented 2 years ago

SSHFS would probably have gotten better results, but it has to encrypt, which is noticeable in the VM's CPU usage during the test

You might be able to tweak this, by picking another algorithm/crypto.
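For example (the options below are standard OpenSSH/sshfs options, but whether they help enough would need measuring; paths and port reuse the earlier workaround):

$ sshfs -o Ciphers=aes128-gcm@openssh.com -o Compression=no \
      -p 10000 <user>@127.0.0.1:/Users/<user>/src /mnt/Users/<user>/src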

JayDoubleu commented 2 years ago

SSHFS would probably have gotten better results, but it has to encrypt, which is noticeable in the VM's CPU usage during the test

You might be able to tweak this, by picking another algorithm/crypto.

True; however, the fastest SSHFS ciphers are usually the weak ones that are no longer supported by the host. I would rather not compromise the security of sshd for the sake of fast mounts.

I think for podman-desktop there are three possibly viable options here if I understand correctly.

In theory, both SMB and SSHFS could be implemented, defaulting to a single one at first and giving the user the option to switch when running podman machine init? virtfs-9p is not bad; WSL is using it. I guess the only question is about its stability and performance on Mac.