containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

Support native source folder for volume mount in remote model #8016

Closed jeffwubj closed 2 years ago

jeffwubj commented 3 years ago

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind feature

Description

Currently, it looks like in podman's remote-client case, a bind volume uses source folders on the podman server's host. Could we support (or add a configuration option to choose) using source folders on the podman client's host?

The podman client and podman server may run on different hosts, or one may run on the host while the other runs in a VM. Being able to mount folders from the client side might open up more use cases...

afbjorklund commented 3 years ago

Sounds good to me.

Not sure if ssh security is that important when accessing localhost, or if any of these remote filesystems are "fast", but anyway...


I used virtfs because it was easier and more built-in to qemu, and also because it promised higher performance than nfs/smb.

https://github.com/containers/podman/pull/11454#issuecomment-917291188

Note: You can run podman through lima already today, which uses reverse sshfs by default (but might change to 9p or smb).

https://github.com/containers/podman/issues/11533#issuecomment-917577552

From your tests and from development it seems like using vsock (like gvisor-tap-vsock) and virtio-fs will be "the future".

But the stability of those makes me wonder if they are ready to replace slirp and virtfs, which seem to be working fine?

chris-forbes commented 2 years ago

I'm using this workaround for podman on Mac.

Setup

podman machine init
podman machine start
podman machine --log-level=debug ssh -- exit 2>&1 | grep Executing
# copy ssh command from above output
# time="2021-09-15T09:02:09-04:00" level=debug msg="Executing: ssh [-i /Users/<user>/.ssh/podman-machine-default -p 49671 core@localhost -o UserKnownHostsFile /dev/null -o StrictHostKeyChecking no exit]\n"

ssh -i /Users/jjacob/.ssh/podman-machine-default -R 10000:$(hostname):22 -p 49671 core@localhost
ssh-keygen -t rsa
ssh-copy-id -p 10000 <user>@127.0.0.1
sudo mkdir -p /mnt/Users/<user>/Documents/workspace
sudo chown core:core /mnt/Users/<user>/Documents/workspace
sshfs -p 10000 <user>@127.0.0.1:/Users/<user>/Documents/workspace /mnt/Users/<user>/Documents/workspace
df -k | grep mnt
# leave terminal running
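The "copy ssh command from above output" step can be scripted instead of done by hand. A hedged sketch: the debug line below is hard-coded from the example above, while in practice you would capture it from `podman machine --log-level=debug ssh -- exit 2>&1 | grep Executing`:

```shell
# Sample debug line, hard-coded for illustration; the real one comes from
# the `podman machine --log-level=debug ssh` command shown above.
line='time="2021-09-15T09:02:09-04:00" level=debug msg="Executing: ssh [-i /Users/me/.ssh/podman-machine-default -p 49671 core@localhost -o UserKnownHostsFile /dev/null -o StrictHostKeyChecking no exit]"'

# Pull the port and identity file out of the line with sed.
sshPort=$(echo "$line" | sed -E 's/.*-p ([0-9]+).*/\1/')
sshIdent=$(echo "$line" | sed -E 's/.*-i ([^ ]+).*/\1/')
echo "$sshPort $sshIdent"
```

The extracted values can then feed the `ssh -i ... -p ...` and `sshfs -p ...` commands above without manual copy-paste.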

Run container with volume mount

cd /Users/<user>/Documents/workspace/nginx
podman run -d --rm --name nginx -v /mnt$(pwd):/source nginx:latest
podman exec -it nginx sh
cd /source
ls -l
exit
podman stop nginx

hey @jeesmon

I managed to get this mostly working, but I had some permission issues. I was trying to mount a WordPress plugins folder to a local drive for development, but it kept getting permission errors even after chown core:core -R /mnt/Users/<user>/. I don't suppose you've hit that at all, have you?

jeesmon commented 2 years ago

hey @jeesmon

I managed to get this mostly working, but I had some permission issues. I was trying to mount a WordPress plugins folder to a local drive for development, but it kept getting permission errors even after chown core:core -R /mnt/Users/<user>/. I don't suppose you've hit that at all, have you?

@chris-forbes Try to see what uid/gid your container is running as and adjust the ownership of your plugins folder for that uid/gid on your local drive.

Update:

Reading your comment again, I'm not sure I understood your problem right :) Please reach out to me through email and I will try to help you out.

kaleal commented 2 years ago

The problem with nfs, samba, and others is the need for extra configuration. Vagrant uses rsync to keep the local and remote filesystems in sync. I think this is the way to go, since it uses ssh to transfer files between the local and remote filesystems, so no extra configuration would be needed.

afbjorklund commented 2 years ago

The problem with nfs, samba, and others is the need for extra configuration.

I think all these systems set up the local file server for the user, (more or less) transparently.

The problem here is that this issue got entangled with two different use cases...

One is the true remote like we had with podman-machine (v1) and Docker Machine, and there we did use scp and rsync.

The other is the hidden VM in podman machine (v3) and Docker Desktop, where we fake a local file system using 9p etc.

Using NFS or SMB is somewhere "in between": a traditional network filesystem.


EDIT: This was when talking about the QEMU drivers; the default was VirtualBox, and it did have "shared folders". The default was to export (or "share") your entire home directory to the virtual machine, for mounting into containers.

Then we have "sshocker", which offers the same functionality but uses reverse sshfs rather than qemu's virtfs. When using Lima, it mounts your home directory read-only by default, so you can access files but not edit them.

afbjorklund commented 2 years ago

I added a volume type and a mount type to the PR in containers/podman#11454, so that one could extend it with reverse sshfs.

There is some code in https://github.com/lima-vm/sshocker/blob/master/pkg/reversesshfs/reversesshfs.go

Basically one has to start a SFTP server on the host.

And then connect to it on VM, using a FUSE file system:

sshocker -v .:/mnt/sshfs user@example.com

But currently it cannot reach 192.168.127.1. (not needed)

Just needs a SSH config file, for the podman "connection".

The sshfs runs over the same ssh connection, using -o slave.
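The "same ssh connection" part is just stdio plumbing: a local SFTP server and a remote `sshfs -o slave` exchange packets over one channel. A hedged sketch follows; the real pipeline (in comments) needs ssh and sshfs and is not run here, the sftp-server path is an assumption that varies by distro, and the runnable part substitutes `cat` for both ends purely to show the channel:

```shell
# Real pipeline (not run here; sftp-server path varies by distro):
#   mkfifo chan
#   /usr/libexec/openssh/sftp-server < chan \
#     | ssh core@localhost 'sshfs -o slave ":/Users" /var/mnt/Users' > chan
#
# Stand-in demonstration of the stdio channel, using cat for both ends:
dir=$(mktemp -d)
mkfifo "$dir/chan"
cat < "$dir/chan" > "$dir/out" &       # stands in for the local sftp-server
printf 'sftp packets\n' > "$dir/chan"  # stands in for the remote sshfs end
wait
cat "$dir/out"
```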

afbjorklund commented 2 years ago

Here is the volume implementation using sshocker (and sshfs): https://github.com/containers/podman/compare/main...afbjorklund:sshfs-volumes

Currently it has an issue with the CoreOS symlinks, so you need to use the canonical path: /var/mnt.

podman machine init -v /Users:/var/mnt/Users

Since it uses ssh, it will also work over long distances, compared to the other solutions built into qemu.

I have not tried it on a Mac yet, but I believe it would work (based on the fact that Lima works fine).


The ssh config could probably be moved outside of this, it is quite useful also for accessing the VM:

Host podman-machine-default
    IdentityFile /home/anders/.ssh/podman-machine-default
    User core
    Hostname localhost
    Port 40831
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no

It hides all the implementation details used by podman machine ssh, for use with ssh/scp directly:

ssh -F /home/anders/.config/containers/podman/machine/qemu/podman-machine-default.config podman-machine-default
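A sketch of extracting that config into a standalone file yourself; the port and key path here are illustrative placeholders, and the real values come from `podman system connection list`:

```shell
# Write an ssh config with the machine's connection details (placeholders).
port=40831
ident="$HOME/.ssh/podman-machine-default"
cfg=$(mktemp)
cat > "$cfg" <<EOF
Host podman-machine-default
    IdentityFile $ident
    User core
    Hostname localhost
    Port $port
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no
EOF
# Then plain ssh/scp work against the VM:
#   ssh -F "$cfg" podman-machine-default
#   scp -F "$cfg" ./file podman-machine-default:/var/mnt/
grep '^    Port' "$cfg"
```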

afbjorklund commented 2 years ago

Choosing which volume driver to use would need some better configuration. Currently it is hard-coded...

const (
       VolumeTypeVirtfs = "virtfs"
       VolumeTypeSshfs  = "sshfs"
)

The volume driver in turn chooses which mount type is used when the VM creates the mount on start.

const (
       MountType9p       = "9p"
       MountTypeSSHocker = "sshocker"
)

mkdir /tmp/foo
touch /tmp/foo/bar
podman machine init -v /tmp/foo:/var/mnt/foo:rw
podman machine start
podman machine ssh
[core@localhost ~]$ ls /mnt/foo
bar
[core@localhost ~]$ findmnt /mnt/foo
TARGET       SOURCE FSTYPE OPTIONS
/var/mnt/foo vol0   9p     rw,relatime,sync,dirsync,access=client,msize=131072,trans=virtio
[core@localhost ~]$ ls /mnt/foo
bar
[core@localhost ~]$ findmnt /mnt/foo
TARGET       SOURCE    FSTYPE     OPTIONS
/var/mnt/foo :/tmp/foo fuse.sshfs rw,nosuid,nodev,relatime,user_id=1000,group_id=1000
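The driver-to-mount-type pairing shown in the constants and findmnt output above (virtfs yields a 9p mount, sshfs yields a fuse.sshfs mount via sshocker) could be sketched as a lookup. The function itself is hypothetical; only the names come from the code above:

```shell
# Map a volume driver name to the mount type the VM would use on start.
mount_type() {
  case "$1" in
    virtfs) echo "9p" ;;
    sshfs)  echo "sshocker" ;;
    *)      echo "unknown" ;;
  esac
}
mount_type virtfs  # 9p
mount_type sshfs   # sshocker
```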

I will leave the NFS and SMB volume drivers as an exercise for the reader...

But "virtiofs" would be interesting: https://virtio-fs.gitlab.io/howto-qemu.html

afbjorklund commented 2 years ago

For now I would recommend using lima instead of podman machine, since it handles both volumes and sockets.

limactl start https://raw.githubusercontent.com/lima-vm/lima/master/examples/podman.yaml

export CONTAINER_HOST=unix://$HOME/podman.sock

export DOCKER_HOST=unix://$HOME/podman.sock


Currently it doesn't work with rootless containers though, so you would have to use sudo podman (or similar).

Error: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/home/anders/lima/hello" to rootfs at "/hello" caused: operation not permitted: OCI permission denied

EDIT: It doesn't work with runc, so you have to install crun.

limactl shell podman sudo apt install -y crun

Reported as:

willcohen commented 2 years ago

Already mentioned in containers/podman#11454, but just for watchers here, still working on pushing 9p darwin upstream to qemu: https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg04325.html. I'll probably need to go through a third version of the patch, and 6.2 (December release) is already well into release candidates, but hopefully that makes it in by 7.0 in early-ish 2022.

wayneeseguin commented 2 years ago

@willcohen Update much appreciated, thank you!!! Is there any documentation on how to apply the patches so we can help test it out?

willcohen commented 2 years ago

If you compile qemu with those patches and have it be the qemu used on your path, you can compile podman per the PR at https://github.com/containers/podman/pull/11454#issuecomment-917296954 and basically duplicate what @afbjorklund does in this screenshot. If you can get a folder to mount from macOS and see it in the podman machine, then you should be good to go! Duplicating that result plus passing QEMU CI has been my goal to date for the patch set I’m working on upstream! @wayneeseguin

ghost commented 2 years ago

Stumbled upon this and I'm very interested in checking out that qemu patch for p9 support. For podman volumes I have been testing out using github.com/pkg/sftp to dynamically present paths in the fuse mount point. So no need for multiple connections, managing mounts, or anything crazy.

However I really want p9 support in qemu, because I have been looking for a way to also do something similar to Firecracker cross-platform, like this.

ghost commented 2 years ago

I think for this to be viable, permissions need to work in a predictable way.

~In testing the patch for macOS, it doesn't seem that the mapped-xattr or mapped-file values for security_model work. Has anyone else tested this out?~

Edit: Update regarding mapped-file and mapped-xattr. I believe I missed something when compiling my Linux kernel. Today I had to figure something out and changed my kernel settings, and these features seem to work fine. Sorry for the false alarm.

rarguello commented 2 years ago

No idea why, but the fuse-sshfs package was deprecated on RHEL 8.5:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/deprecated_functionality#deprecated-packages

RHEL 9 Beta or CentOS 9 Stream do not include fuse-sshfs.

afbjorklund commented 2 years ago

RHEL 9 Beta or CentOS 9 Stream do not include fuse-sshfs.

Previously it was found in either EPEL, or CentOS "PowerTools"...

fuse-sshfs-2.8-5.fc28.src.rpm

For the new distro, you have to use the Fedora SRPMS to rebuild:

fuse-sshfs-3.7.1-2.fc34.src.rpm

You can also use the binaries directly.

But it's only needed on the remote.

ghost commented 2 years ago

@rarguello @afbjorklund might find copr useful. Here's some docs for it too.

afbjorklund commented 2 years ago

@rarguello @afbjorklund might find copr useful. Here's some docs for it too.

Would still need someone to maintain it, just like EPEL and PowerTools and whatnot.

And then it needs to be documented and configured for all the users trying to use it?

But preferably it would offer support for all three in one place, similar to PPA on Ubuntu*.

* somewhat contrived analogy, since it doesn't need random third party repos for sshfs...

rarguello commented 2 years ago

More info on fuse-sshfs:

Bug 1758884 - sshfs needs to be rebuilt for EPEL-8 https://bugzilla.redhat.com/show_bug.cgi?id=1758884

afbjorklund commented 2 years ago

No idea where PowerTools went in 9-Stream, either.

afbjorklund commented 2 years ago

There doesn't seem to be any interest in adding an implementation for virtfs or sshfs until the design is settled.

So there is no support in Podman for remote mounting until then... Users can provide their own solution meanwhile.

afbjorklund commented 2 years ago

podman machine

Current solution requires modifying the path, since the root directory is read-only on CoreOS:

podman machine init -v /foo:/mnt/foo

podman --remote run -v /mnt/foo:/foo

This breaks compatibility with the local version, which would do podman run -v /foo:/foo (no /mnt)
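One way scripts could paper over that incompatibility is a small path-rewriting helper that prepends the VM mount prefix on the remote side; the helper is hypothetical, only the /mnt prefix comes from the commands above:

```shell
# Prepend the VM mount prefix to a client-side path (hypothetical helper).
vmpath() { printf '/mnt%s\n' "$1"; }
vmpath /foo  # /mnt/foo
```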

other systems

No issues with mounting, since the root directory can be modified to create the mountpoint dir:

podman machine init -v /foo:/foo --volume-driver=sshfs

podman --remote run -v /foo:/foo

This means one can use the same command remote as local, when using something like Fedora.

cgwalters commented 2 years ago

See https://github.com/coreos/rpm-ostree/issues/337 - I think eventually we will support this on CoreOS more cleanly, but for now there is a hack possible. You just have to be aware that data there will go away on upgrades, but it's fine for mount points.

afbjorklund commented 2 years ago

Good to know; then it should work better with the volumes. It must have changed during the months the PR was pending review.

Typically /foo would be something like /Users.

shady333 commented 2 years ago

podman machine init -v /foo:/foo --volume-driver=sshfs

I'm getting an error for this command (on MacOS) Error: unknown shorthand flag: 'v' in -v

Any alternatives?

afbjorklund commented 2 years ago

There is no podman machine support for getting files from the client yet, so all files must be on the server VM already.

i.e. you need to patch podman:

it has not been merged yet

EDIT: it is merged now, but requires that qemu has support for virtfs: https://wiki.qemu.org/Documentation/9psetup

On Linux this works out-of-the-box, but on Darwin (Mac) it requires that qemu has been patched to support "virtfs"

See https://github.com/NixOS/nixpkgs/pull/122420

In development, upstream

ssbarnea commented 2 years ago

If we had a mode that did volume mounts when possible and fell back to (r)sync on remotes, it would be extremely useful for testing code inside a container.

The number of cases where a developer wants to test their (local) code remotely is growing fast. Some people have very thin local machines which cannot effectively test their changes, or would do so 10x slower than a remote server.

afbjorklund commented 2 years ago

You can do ssh connections (for scp and rsync) already, but the ssh configuration is hidden inside the "system connection"

podman system connection list

podman --log-level=debug machine ssh

it would be possible to show the ssh config: https://github.com/containers/podman/issues/8016#issuecomment-956478981. Then you can use all regular tools, with the remote system:

* See https://mutagen.io/documentation/introduction/getting-started for some detailed examples and documentation.

It has been implemented in limactl show-ssh for accessing those VMs, such as the ones running fedora-podman.

joes-myob commented 2 years ago

Interesting to read through these notes.

We've been experimenting with Podman as a replacement for Docker Desktop. It feels like the mounting feature is a tough one to migrate away from without a lot of further rework to existing scripts etc.

Would really love to see this functionality in the future.

(I suspect others are discovering this more and more as Jan 31st approaches :) )

I'll keep an eye on this thread in the meantime.

georgettica commented 2 years ago

I might add that colima is almost there with volume support.

Mounting works cleanly, but the folder permissions are those of the VM and not the container.

rakodev commented 2 years ago

I'm currently using this alias locally (Mac):

alias aws='docker run -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN --rm -ti -v ~/.aws:/root/.aws -v $(pwd):/aws amazon/aws-cli'

When I use podman instead of docker, it fails. Even when I run only this, it fails randomly with some timeout error message:

alias aws='podman run -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN --rm -ti amazon/aws-cli'

Any idea? I tried a few things, rebooting... still the same errors.

rhatdan commented 2 years ago

Podman on a Mac currently does not support mounting volumes from the host into the container via the VM. It is being worked on.

joes-myob commented 2 years ago

I appreciate the work that is going into these alternatives to Docker Desktop. As a signpost to others: it might be better to watch this thread for updates on when this functionality arrives, rather than reporting that you have this problem.

It's not a problem, because it's not a supported feature.

Thanks for the work @rhatdan (and others) 👍

rakodev commented 2 years ago

I appreciate the work that is going on here; I hope it'll be useful for all of us, and if I can contribute anything, even a comment with an example, I'll do it.

@joes-myob ,

First of all, I did read the docs and searched for my first issue on Google, and I didn't find any mention that podman doesn't support mounting on Mac.

Secondly, I added a second problem that is not related to the mounting issue; I was just wondering if I had missed something, since I started trying podman only recently.

Third, instead of asking people not to describe their issues here, maybe you could help?

Sorry for the inconvenience.

joes-myob commented 2 years ago

It's not super clear @rakodev but it is in the documentation https://docs.podman.io/en/latest/markdown/podman-run.1.html#volume-v-source-volume-host-dir-container-dir-options

Note when using the remote client, the volumes will be mounted from the remote server, not necessarily the client machine.

@rakodev - I think if you have an issue that's not part of the topic, it should go in another topic rather than conflating and confusing this issue.

I cannot help as I am not a maintainer and your bug report lacks much of the detail needed for anyone to help debug.

Honestly, I was not trying to tell you off. I was trying to let future readers know that providing examples of how the remote volume mount isn't working is not helpful, because it asks for something to be fixed that does not exist yet.

I hope that makes sense. :)

rakodev commented 2 years ago

@joes-myob, I didn't ask for something to be fixed; read my message again.

And to be honest, I understand your motivation, but I still think your comment didn't help me or anyone else in the thread, except that you mentioned mounting is not supported on Mac; and if that last point helps anyone, it means my question was legit.

Thanks for the link; as you said, it's not super clear, and people who hit the same errors as me will probably find a response here.

Have a nice day.

afbjorklund commented 2 years ago

I might add that colima is almost there with volume support

When using podman machine, you need to set it up manually...

There are instructions on https://github.com/lima-vm/sshocker

$ sshocker -v .:/mnt/sshfs user@example.com

Eventually podman machine will have "native" support for VM volumes, when running the CoreOS VM on your local host.

But when running podman towards a remote host, you need something like this "reverse sshfs" to mount the files remotely.

The two use cases are somewhat confused in this podman issue?

The two different types of volumes also seem to be causing confusion.


The container volumes (mounts) are the ones documented under podman run.

These VM volumes are more like: https://docs.docker.com/desktop/mac/#file-sharing

By default the /Users, /Volumes, /private, /tmp and /var/folders directories are shared.

There is no default with Podman (yet?), so everything needs to be shared explicitly.

PayBas commented 2 years ago

Very keen to see this implemented for local host with podman machine.

We're currently working around this with scripting.

#!/usr/bin/env bash

brew install podman
podman machine init
podman machine start

if ! podman machine list | grep -q running; then
  echo "Podman machine failed to start!"
  exit 1
fi

set -e

connection=$(podman machine --log-level=debug ssh -- exit 2>&1 | grep Executing | sed -E 's/.*ssh \[(.*)\].*/\1/')
sshHost=$(echo "$connection" | sed -E 's/.* (.+@localhost) .*/\1/')
sshPort=$(echo "$connection" | sed -E 's/.*-p ([0-9]+) .*/\1/')
sshIdent=$(echo "$connection" | sed -E 's/.*-i ([^[:space:]]+) .*/\1/')
echo "sshHost  = $sshHost"
echo "sshPort  = $sshPort"
echo "sshIdent = $sshIdent"

podman machine ssh mkdir -p /tmp/workspace
podman pull nexus.corp:5000/stuff-generator:latest
podman run --rm -v /tmp/workspace:/home/worker/workspace:Z nexus.corp:5000/stuff-generator:latest

scp -r -i $sshIdent -P $sshPort -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $sshHost:/tmp/workspace/ $HOME/mywork

Having to extract the connection details for scp is kinda painful, but it works I guess.

ghost commented 2 years ago

Just throwing this note here for awareness. It's something that haunts my dreams and I'm actively writing something for it.

Migrating from Docker Desktop to podman is still rough if the workload expects volumes to contain any fifo or socket files. Few fusefs network mounts seem to even permit creating them, whereas the file systems in Docker Desktop do. NFS supports them in a manner that is pretty much drop-in compatible, but I suspect there are IO task-type issues that might not be 1-for-1 compatible.

afbjorklund commented 2 years ago

We're currently working around this with scripting.

With the current prototype CLI, that would be shared as:

podman machine init -v $HOME/mywork:/tmp/workspace

The shell hacks instead of a "podman machine scp" are painful.


EDIT: init --volume is currently broken in v4.0.0-rc1; it hangs on start.

Hopefully it can be fixed before the 4.0 release; it should only wait on ssh...

Added a PR for it; there was a "while not running" turned into "while running".

jescobar commented 2 years ago

Maybe this can help: I found this nice post that has a workaround. I have tested it and it worked: https://dalethestirling.github.io/Macos-volumes-with-Podman/

afbjorklund commented 2 years ago

Maybe this can help: I found this nice post that has a workaround

The workaround (using reverse sshfs) helps; it is just somewhat cumbersome and ugly to use... Having to leave the ssh connection open is also something of a "problem", compared to virtfs*.

* handled by qemu

But it is implemented in lima (using lima-hostagent/lima-useragent), so it definitely works. As noted above, it is packaged in sshocker, so all that is needed is to get the ssh config.

https://github.com/containers/podman/issues/8016#issuecomment-998777418

sshocker -v $HOME/mywork:/tmp/workspace podman-machine-default

jorhett commented 2 years ago

@afbjorklund can I get a summary update on this? Is it perhaps going to ship as part of podman 4.0, or just waiting for podman 4.0 to deliver things this needs for a later inclusion?

windmaple commented 2 years ago

+1

afbjorklund commented 2 years ago

can I get a summary update on this?

As far as I know there is nothing pending in podman, but if you want to use "virtfs" you need a qemu version that supports it...

It is available on Linux, but it is not available on Mac in the version that you get from brew install (qemu 6.2.0)

I'm not sure that the feature will be available before podman 4.0.0 (in February), maybe as a custom brew formula ?

@baude @ashley-cui might know more, about what is being included in Podman Desktop 4.0. But I don't have anything more.

enesgur commented 2 years ago

I'm not sure, but maybe we can use Lima and Podman together, because Lima supports file sharing on macOS. I'll try it.

afbjorklund commented 2 years ago

I'm not sure, but maybe we can use Lima and Podman together, because Lima supports file sharing on macOS. I'll try it.

It works just fine; there is a "podman.yaml" in the examples folder that sets up the podman.sock:

limactl start https://raw.githubusercontent.com/lima-vm/lima/master/examples/podman.yaml

I made some alternative files for Fedora as well, if you have a problem with running Ubuntu, that is. The file sharing in lima is the same as in sshocker (i.e. reverse sshfs), which is why I suggested it.

It also works in the generic ssh case, which is what this issue is about. Podman Machine and Podman Desktop have others.

sshocker

https://github.com/lima-vm/sshocker

$ sshocker -v ~/src:/mnt/src user@example.com


Same approach as suggested above, from https://github.com/dustymabe/vagrant-sshfs

https://github.com/containers/podman/issues/8016#issuecomment-843546053

Easier to get ssh access, than to get nfs networking working over firewalls and whatnot.

But you still want something built-in to a VM solution, like "virtfs" or "virtio-fs" (or "vboxsf"...)

jorhett commented 2 years ago

As far as I know there is nothing pending in podman, but if you want to use "virtfs" you need a qemu version that supports it...

It is available on Linux, but it is not available on Mac in the version that you get from brew install (qemu 6.2.0)

This is the part I'm trying to untangle. I'm already following most of these PRs, but I'm trying to work out the current state.

Does this mean that if I install the qemu branch with this support, podman will use it as-is on the mac? Or is there an alternate branch?

In other words, is there a 1-2-3-4 to work with virtfs support on macOS ? I know this is in-dev + not-supported, but basically curious if this is something we can try out today?

It's clear that lima offers an alternative, but I'm wondering what we can ~use~ test for ourselves without lima?

afbjorklund commented 2 years ago

I don't have an M1 Mac myself, but I am planning to rent another one and update the qemu "9p-darwin" branch to 6.2.0.

https://github.com/containers/podman/pull/11454#issuecomment-914462925

https://github.com/afbjorklund/qemu/tree/9p-darwin

https://gitlab.com/wwcohen/qemu/-/tree/9p-darwin

mkdir build
cd build
../configure --target-list=aarch64-softmmu --enable-hvf --enable-virtfs
make

Probably not until next week though, and apparently it was September the last time this was being tested. Hopefully I'll remember something.

jorhett commented 2 years ago

I don't have an M1 Mac myself, but I am planning to rent another one

I'm happy to test on my M1 anything you'd like.