Closed: jeffwubj closed this issue 2 years ago
I think this also happens when using docker to connect to a remote server (rather than using the Docker Desktop product)?
In Docker Toolbox (docker-machine), they worked around this issue by sharing the entire home folder (!) using vboxsf...
You should be able to create network volumes using podman volume, but I am a little uncertain about the syntax to use. It takes a --driver parameter, but the documentation only covers the default "local" driver that uses regular directories on the host.
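For what it's worth, the "local" driver can also mount network filesystems when given explicit mount options; a hedged sketch (the NFS server address, export path, and volume name below are placeholders, not values from this issue):

```shell
# Sketch: a named volume backed by an NFS export, via the default local driver.
# Address, export path, and volume name are illustrative placeholders.
podman volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw \
  --opt device=:/exported/dir \
  mynfsvolume

# Containers then mount it like any other named volume:
podman run -v mynfsvolume:/data alpine ls /data
```

This would only help when the data already lives on a network server, though; it does not by itself share the client's local directories.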
A friendly reminder that this issue had no activity for 30 days.
I am not sure how we would make this work, other than to export the current directory to the server running podman via NFS, Samba, or some other network protocol, and then volume mount that directory into the container.
I am not sure if sshfs has this type of ability, to share your homedir with a remote machine?
I am not sure if sshfs has this type of ability, to share your homedir with a remote machine?
The way we used sshfs in docker-machine was to share the remote machine with a local mount:
https://docs.docker.com/machine/reference/mount/
That is, the mount was going in the other direction: it used sftp on the remote and FUSE on the local side.
Other than that, it was just VirtualBox shared folders...
May I suggest that the issue is that podman is validating the mount points on the local side, rather than on the remote side? I'm running on macOS and deploying to a Linux host, on which the mount points exist. Maybe if podman client were to not validate the paths when connecting to a remote service, then it would just work?
That sounds like a separate issue? This one is about emulating a Docker behavior where folders from an OS X host are able to be mounted into containers on the Linux VM running containers. What you're describing is definitely separate (but definitely a bug, and I encourage you to file an issue about it).
Ah, you're right, I was seeing what I wanted to see. I've filed another issue, thanks.
This issue is not stale, but no one is currently working on it.
@baude This would be your stretch goal for Podman on a Mac.
Podman machine work is ongoing, but this is still a stretch goal. We would need someone from the community to start working on it.
I do have some experience with sharing folders via SSHFS from my work on https://github.com/dustymabe/vagrant-sshfs. I would be willing to discuss strategy if anyone has questions or ideas and wants to run them by me.
Sure a PR would be great. :^)
lima-vm has something like that: they use "reverse sshfs" and call it "Automatic file sharing". Maybe that helps.
According to https://github.com/boot2podman/machine#accessing-files - it seems that it is possible already. Looks like it is about porting podman-machine mount as podman machine mount?
I was trying a manual command like the one below:
$ sshfs -o IdentityFile=${HOME}/.ssh/podman-machine-default -p 50186 core@localhost:/home/core/podman /Users/${USER}/podman
Note, the default VM has a read-only file-system on /, so you can't create a similar directory structure in the VM (in order to replicate what Docker Desktop does):
$ podman machine ssh -- sudo mkdir -p /Users/<username> # Operation not permitted
This is an absolute necessity if Podman Machine is ever to become a usable replacement for Docker for Mac, which I hope it will be. Unfortunately I have no idea how to help to make it happen. 😣
I'm really wondering why this is a stretch goal. Docker is changing their license at the end of this year, and currently there is no usable replacement for docker on mac! podman build is not working..... podman -v (bind a local volume) is not working, so... I can't build anything... just go back to docker... that's sad. I'm really hoping to find a replacement for docker.
podman build is not working .....
podman build works at least as well as docker build, as both commands are just clients that defer building to a Linux machine.
As long as you have a properly set up VM, you can use podman build as a docker build replacement.
I'm actually building most of my containers using podman build on macOS.
What Docker Desktop provides is a transparent and easy way to set up a Linux VM, which is what podman machine should provide too.
@zhouhaibing089
Looks like it is about porting podman-machine mount as podman machine mount?
Probably obvious, but "podman-machine mount" was copied from "docker-machine mount"
So you can find some better documentation, as well as the original PR and discussion, there:
https://github.com/docker/machine
Somewhere around 2017, timewise?
I was hoping to port the AWX development environment away from docker-compose to podman play kube, and ran into this issue. This is definitely a blocker for us, as most of our UI team uses Macs.
I'm also looking to move away from Docker on Mac. Getting my image and such built works great. I just need some way to reach a local directory from within my container. What is the preferred workaround until something more official is released?
What is the preferred workaround until something more official is released?
Possibly https://virtio-fs.gitlab.io (QEMU 5.0)
CoreOS config:
CONFIG_VIRTIO=y
CONFIG_VIRTIO_FS=m
CONFIG_DAX=y
CONFIG_FS_DAX=y
CONFIG_DAX_DRIVER=y
CONFIG_ZONE_DEVICE=y
https://virtio-fs.gitlab.io/howto-qemu.html
Or the legacy variant, with the 9p network protocol:
CoreOS config:
CONFIG_NET_9P=m
CONFIG_NET_9P_VIRTIO=m
CONFIG_9P_FS=m
CONFIG_9P_FS_POSIX_ACL=y
CONFIG_PCI=y
CONFIG_VIRTIO_PCI=y
https://wiki.qemu.org/Documentation/9psetup
Something like:
$ mkdir /tmp/9p
$ touch /tmp/9p/foo
$ touch /tmp/9p/bar
$ ./bin/podman machine ssh
Fedora CoreOS 34.20210821.1.1
Tracker: https://github.com/coreos/fedora-coreos-tracker
Discuss: https://discussion.fedoraproject.org/c/server/coreos/
[core@localhost ~]$ sudo mount -t 9p -o trans=virtio podman /mnt -oversion=9p2000.L
[core@localhost ~]$ ls /mnt/
bar foo
[core@localhost ~]$ exit
logout
Connection to localhost closed.
$ ./bin/podman-remote run -v /mnt:/tmp busybox ls /tmp
bar
foo
With options:
--- a/pkg/machine/qemu/machine.go
+++ b/pkg/machine/qemu/machine.go
@@ -164,6 +164,9 @@ func (v *MachineVM) Init(opts machine.InitOptions) error {
// Add arch specific options including image location
v.CmdLine = append(v.CmdLine, v.addArchOptions()...)
+ add9pOptions := []string{ "-virtfs", "local,path=/tmp/9p,mount_tag=podman,security_model=mapped-xattr"}
+ v.CmdLine = append(v.CmdLine, add9pOptions...)
+
// Add location of bootable image
v.CmdLine = append(v.CmdLine, "-drive", "if=virtio,file="+v.ImagePath)
// This kind of stinks but no other way around this r/n
if the -virtfs shorthand form is used then "virtio-9p-pci" is implied.
Seems to need some systemd workarounds to start at boot: https://bugzilla.redhat.com/show_bug.cgi?id=1184122#c1
But I'm not sure how that works with CoreOS and Ignition, so leaving it to someone else. It needs to be done in start...
There is some work ongoing in https://github.com/lima-vm/lima/issues/20
VirtioFS. Looking very cool, seems to have really good performance, but works only on Linux hosts. It is very optimised for use in virtual machines; it even uses DAX (direct access) for files, so there's no need to copy files over the network, they're just in shared RAM between the VM and the host.
VirtFS (9P). I've tried to use it, but it's incredibly slow. Really. Just running git status in a shared directory with a medium-sized project takes at least half a minute. I would rather place the files in the VM and access them via some remote file access protocol, using vscode with remote access (sad, but that's proprietary).
Currently they seem to be back on SMB (not even NFS).
This is why I preferred the sshfs approach: the files would stay on the actual Linux host and be mounted on the Mac (or Windows), instead of "pretending" that the local files are magically available on the remote OS, and then getting complaints that they aren't...
I have no idea how it works in WSL (2), but it seems* to be getting the same complaints (and is also much worse than WSL 1 was):
* https://vxlabs.com/2019/12/06/wsl2-io-measurements/ - pay no attention to the man (virtual machine?) behind the curtain
Somewhat relevant/helpful mailing list thread https://www.mail-archive.com/virtio-fs@redhat.com/msg02987.html
Note, the default VM has a read-only file-system on /, so you can't create a similar directory structure in the VM (in order to replicate what Docker Desktop does):
Some directories, like /mnt, are symlinked over to /var, which makes them read-write:
[core@localhost ~]$ df -h /mnt
Filesystem Size Used Avail Use% Mounted on
/dev/vda4 9.5G 1.6G 8.0G 17% /var
But this rules out the other locations such as /hosthome, /Users and /c/Users.
There is no rule that says that the mount location on the VM has to be the exact same as on the host, though... So one could mount /Users under /mnt/Users, and then "translate" -v /Users:/Users into -v /mnt/Users:/Users. Then it would work even if / is write-protected (ro).
The bind-mount prefix needs to be automatic, though.
But there should probably be some kind of toggle for it, sometimes you do want a local VM mount (for e.g. performance)
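A minimal sketch of that translation in shell, assuming a hard-coded rule that /Users is shared and appears under /mnt/Users in the VM (both the /mnt prefix and the shared-path rule are assumptions drawn from this discussion, not actual podman behavior):

```shell
# Rewrite the source half of a -v SRC:DST spec so that paths under /Users
# are taken from the VM's /mnt/Users mount instead of the read-only root.
# The /mnt prefix and the /Users rule are assumptions, not podman behavior.
translate_volume() {
  spec="$1"
  src="${spec%%:*}"   # host-side path
  rest="${spec#*:}"   # container-side path (plus options, if any)
  case "$src" in
    /Users|/Users/*) echo "/mnt${src}:${rest}" ;;
    *)               echo "$spec" ;;  # non-shared path: leave untouched
  esac
}

translate_volume /Users/anders/src:/src
# -> /mnt/Users/anders/src:/src
translate_volume /var/run/docker.sock:/var/run/docker.sock
# -> unchanged
```

A real implementation would live in the remote client and consult the machine's configured share list rather than a hard-coded prefix.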
I think Docker has some kind of hard-coded list of what is local and what is remote (guessing from the osxfs docs):
By default, you can share files in /Users/, /Volumes/, /private/, and /tmp directly. To add or remove directory trees that are exported to Docker, use the File sharing tab in Docker preferences (whale menu -> Preferences -> File sharing). (See Preferences.) All other paths used in -v bind mounts are sourced from the Moby Linux VM running the Docker containers, so arguments such as -v /var/run/docker.sock:/var/run/docker.sock should work as expected. If a macOS path is not shared and does not exist in the VM, an attempt to bind mount it fails rather than create it in the VM. Paths that already exist in the VM and contain files are reserved by Docker and cannot be exported from macOS.
Note: osxfs has been replaced by another remote filesystem
I believe the ability to 1-to-1 mount might be a deal breaker for some.
Is the option to use ignition https://github.com/containers/podman/blob/b07e735661ccebb529d2719516809ce602fd56da/pkg/machine/ignition.go#L209 to create something like /var/podman-desktop/darwin, and then create a link from /Users to it, a viable option? This way 9p and virtiofs or even sshfs could mount to /var/podman-desktop/darwin/Users. /var is read-write on CoreOS by default.
The translation needs to be done by podman automatically when running in a read-only VM (assuming there is no way to change that; if there is, the translation might not be needed).
There might be a problem with /home; that was why it was using /hosthome before. But in reality I think they could still co-exist, unless your user name happens to be "core":
/home/anders goes to the remote /mnt/home/anders mountpoint
/home/core goes to the local /home/core directory
Otherwise it would break the existing applications that assume Docker Desktop behavior for -v.
Unfortunately this also causes a major performance issue, due to being based on an illusion...
Pay no attention to that man behind the curtain!
The Great Oz has spoken!
If it is too much work to support virtfs on darwin, then one could start the file server some other way: either through reverse sshfs, or by still using 9p (like minikube), or even NFS (like docker-machine-nfs).
A simple oneshot systemd unit with Ignition to exec chattr -i / should make the CoreOS root permanently read-write.
Whether the CoreOS folks would be happy about this violation of the health and safety code, I'm not sure :)
sed -i 's/\/usr\/bin\/echo Ready/\/usr\/bin\/chattr -i \/ \&\& \/usr\/bin\/echo Ready/g' pkg/machine/ignition.go ?
A layered deployment of an RPM package which would create those directories could also be a solution, I guess; at least it would be reversible with rollback.
I would not want that, since we want to be able to update the CoreOS VM over time with new CoreOS builds. Not sure we should be changing core features.
@dustymabe @cgwalters Thoughts.
Does this issue relate to containers/podman#11423? A related issue was closed and linked to this one instead.
Yes, it does. I'd like to hold off merging PRs related to this until we can have an architecture discussion at the next cabal, though.
We have a long-term goal of proper integration into the client (podman run -v /tmp:/tmp from an OS X host should mount the host's /tmp into the container automatically, virtual machine or no). However, this is going to take significant time to get wired in, so I think we need a solution to hold us over until that can be written; the exact details of that, and what exactly we should use as a backend, are very much up for discussion.
When using mount/sshfs, there are no hidden mounts, so /tmp would always be on the VM. You use a special path on the host instead...
But you can use something like sshocker/reverse sshfs, if you want the same model that Docker Desktop uses (I call it "Wizard of Oz mode")
EDIT: Looks like systemd puts some runfiles into /tmp/systemd-private*; it seems a bit odd to send those off to the network:
https://www.freedesktop.org/software/systemd/man/systemd-tmpfiles.html
/tmp/systemd-private-162abf88fb8f44c38227f2fd68a65d45-chronyd.service-EMV4Zd
/tmp/systemd-private-162abf88fb8f44c38227f2fd68a65d45-dbus-broker.service-XRQl3N
/tmp/systemd-private-162abf88fb8f44c38227f2fd68a65d45-systemd-logind.service-X7xmtW
/tmp/systemd-private-162abf88fb8f44c38227f2fd68a65d45-systemd-resolved.service-HvSFIa
Maybe it needs a setting like in NFS, where you have one local /tmp and /home and one shared /tmp and /home?
/home/core
/tmp/systemd-private-*
/mnt/home/anders
/mnt/tmp/whatever
I was able to verify that virtfs works on darwin, once qemu has been patched to support it (just like with hvf...)
Personally I think it's a better option than sshfs, but you might also want to wait for virtio-fs (vhost-user device)
virtio-fs on darwin is a big deal; AFAIK it's the only thing stopping kata-runtime from working on OSX. I've been running it on KVM for some time and it beats everything else performance-wise. The question is: is it going to be stable on darwin anytime soon?
The question is: is it going to be stable on darwin anytime soon?
It doesn't seem like gvproxy and virtio-fs are very stable on linux either.
But I remember there was a lot of talk about virtio/vsock, like 4 years ago?
https://github.com/dhiltgen/docker-machine-kvm/issues/2#issuecomment-312418264
So we were just going to use 9p and sshfs "temporarily", while it got sorted...
Sounds like a reasonable approach, especially given that the VM itself has its own overhead. Is the performance of 9p or sshfs really going to be that noticeable?
Is the performance of 9p or sshfs really going to be that noticeable?
It's a big problem for Docker, but in the end it needs to be tested and benchmarked:
Those old tests were from like 10 years ago, and now one can add virtio-fs and grpc etc.?
https://en.wikipedia.org/wiki/9P_(protocol)#cite_note-8
For Machine it was not an issue since it never pretended that the remote was local... So when accessing the files over sshfs (i.e. sftp), it was just natural that it was slower.
One could scp the workspace to the remote, and then rsync the changes. Like Mutagen?
The performance of the VirtualBox Shared Folders seemed "enough" for casual users, and it allowed running simple setups from your home folder, albeit with some security issues.
I got curious, so I ran some tests for some of the options mentioned.
Looks like 9pshare is still the winner performance-wise among the network-share options; SSHFS would probably get better results, but it has to encrypt, which is noticeable in the VM's CPU usage during the test.
PS. Host speed might be slower because of a different version of fio, or because the volume was mounted with kpartx (the same LVM volume was shared inside the VM for comparison).
inside vm:
read: IOPS=67.7k, BW=264MiB/s (277MB/s)(616MiB/2329msec)
write: IOPS=157k, BW=615MiB/s (645MB/s)(1433MiB/2329msec)
host:
read: IOPS=53.1k, BW=207MiB/s (217MB/s)(616MiB/2969msec)
write: IOPS=124k, BW=482MiB/s (506MB/s)(1433MiB/2969msec)
virtiofs:
read: IOPS=30.1k, BW=117MiB/s (123MB/s)(616MiB/5239msec)
write: IOPS=69.0k, BW=273MiB/s (287MB/s)(1433MiB/5239msec)
9pshare (default mount params):
read: IOPS=8544, BW=33.4MiB/s (34.0MB/s)(616MiB/18440msec)
write: IOPS=19.9k, BW=77.7MiB/s (81.5MB/s)(1433MiB/18440msec)
9pshare (msize=524288 and cache=none):
read: IOPS=8892, BW=34.7MiB/s (36.4MB/s)(616MiB/17719msec)
write: IOPS=20.7k, BW=80.8MiB/s (84.8MB/s)(1433MiB/17719msec)
smb:
read: IOPS=7853, BW=30.7MiB/s (32.2MB/s)(616MiB/20063msec)
write: IOPS=18.3k, BW=71.4MiB/s (74.9MB/s)(1433MiB/20063msec)
sshfs:
read: IOPS=4643, BW=18.1MiB/s (19.0MB/s)(616MiB/33935msec)
write: IOPS=10.8k, BW=42.2MiB/s (44.3MB/s)(1433MiB/33935msec)
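For anyone wanting to reproduce numbers of this shape, a generic fio invocation could look like the following; this is purely illustrative, not the exact command behind the results above:

```shell
# Illustrative mixed random read/write job against a mounted share.
# All parameters here are placeholders, not the settings used above.
fio --name=mixed --directory=/mnt/share \
    --rw=randrw --rwmixread=30 --bs=4k --size=1g \
    --ioengine=psync --runtime=60 --time_based --group_reporting
```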
I'm using this workaround for podman on Mac:
podman machine init
podman machine start
podman machine --log-level=debug ssh -- exit 2>&1 | grep Executing
# copy ssh command from above output
# time="2021-09-15T09:02:09-04:00" level=debug msg="Executing: ssh [-i /Users/<user>/.ssh/podman-machine-default -p 49671 core@localhost -o UserKnownHostsFile /dev/null -o StrictHostKeyChecking no exit]\n"
ssh -i /Users/jjacob/.ssh/podman-machine-default -R 10000:$(hostname):22 -p 49671 core@localhost
ssh-keygen -t rsa
ssh-copy-id -p 10000 <user>@127.0.0.1
sudo mkdir -p /mnt/Users/<user>/Documents/workspace
sudo chown core:core /mnt/Users/<user>/Documents/workspace
sshfs -p 10000 <user>@127.0.0.1:/Users/<user>/Documents/workspace /mnt/Users/<user>/Documents/workspace
df -k | grep mnt
# leave terminal running
cd /Users/<user>/Documents/workspace/nginx
podman run -d --rm --name nginx -v /mnt$(pwd):/source nginx:latest
podman exec -it nginx sh
cd /source
ls -l
exit
podman stop nginx
SSHFS would probably get better results, but it has to encrypt, which is noticeable in the VM's CPU usage during the test
You might be able to tweak this by picking another algorithm/cipher.
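For example, something like the following; a sketch only, since whether a given cipher is accepted depends on the sshd configuration on the other end:

```shell
# Ask ssh (via sshfs pass-through options) for a fast AEAD cipher and
# disable compression. Hostname, paths, and cipher choice are illustrative.
sshfs -o Ciphers=aes128-gcm@openssh.com -o Compression=no \
      user@host:/remote/dir /local/mountpoint
```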
True; however, usually the fastest SSHFS algorithms are the weak ones no longer supported by the host. I would rather not compromise the security of sshd for the sake of fast mounts.
I think for podman-desktop there are three possibly viable options here, if I understand correctly.
In theory both SMB and SSHFS could be implemented, defaulting to a single one at first and giving the user the option to switch when running podman machine init? virtfs-9p is not bad; WSL is using it. I guess the only question is about its stability and performance on Mac.
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind feature
Description
Currently, it looks like in podman's remote client case, a bind volume will use source folders on the podman server's host; could we support (or provide some configuration to choose) using source folders on the podman client's host?
The podman client and podman server may run on different hosts, or one runs on the host and the other in a VM. Having the ability to mount folders from the client side might open up more use cases...