containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

Volume mount goes away after installing qemu-user-static on macOS ARM #20820

Closed: kgibm closed this issue 10 months ago

kgibm commented 10 months ago

Issue Description

Create a podman machine with a volume mount, and the mount works as expected. Then install qemu-user-static (for cross-compilation) and reboot, and the volume mount is gone.

Steps to reproduce the issue

  1. podman machine init --cpus 4 --memory 10240 --disk-size 100 -v $HOME:/mnt/host
  2. podman machine start
  3. Volume mount exists and shows expected files:
    $ podman machine ssh "ls /mnt/host | wc -l"
    20
  4. podman machine ssh "sudo rpm-ostree install qemu-user-static | grep -v 'Changes queued for next boot'; sudo systemctl reboot"
  5. Wait for reboot
  6. Volume mount is gone:
    $ podman machine ssh "ls /mnt/host | wc -l"
    0

Describe the results you received

The volume mount is gone: after the reboot, /mnt/host is empty inside the machine.

Describe the results you expected

The volume mount persists after installing qemu-user-static and rebooting.

podman info output

$ podman version
Client:       Podman Engine
Version:      4.8.0
API Version:  4.8.0
Go Version:   go1.21.4
Git Commit:   c4dfcf14874479e34b3f312f089fc5840e306258
Built:        Mon Nov 27 10:08:38 2023
OS/Arch:      darwin/arm64

Server:       Podman Engine
Version:      4.7.2
API Version:  4.7.2
Go Version:   go1.21.1
Built:        Tue Oct 31 09:30:33 2023
OS/Arch:      linux/arm64
$ podman info
host:
  arch: arm64
  buildahVersion: 1.32.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.8-2.fc39.aarch64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.8, commit: '
  cpuUtilization:
    idlePercent: 98.52
    systemPercent: 1.09
    userPercent: 0.39
  cpus: 4
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: coreos
    version: "39"
  eventLogger: journald
  freeLocks: 2048
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
    uidmap:
    - container_id: 0
      host_id: 501
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
  kernel: 6.5.11-300.fc39.aarch64
  linkmode: dynamic
  logDriver: journald
  memFree: 9950240768
  memTotal: 10396737536
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.8.0-1.fc39.aarch64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.8.0
    package: netavark-1.8.0-2.fc39.aarch64
    path: /usr/libexec/podman/netavark
    version: netavark 1.8.0
  ociRuntime:
    name: crun
    package: crun-1.11.1-1.fc39.aarch64
    path: /usr/bin/crun
    version: |-
      crun version 1.11.1
      commit: 1084f9527c143699b593b44c23555fb3cc4ff2f3
      rundir: /run/user/501/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20231004.gf851084-1.fc39.aarch64
    version: |
      pasta 0^20231004.gf851084-1.fc39.aarch64-pasta
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/501/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.2-1.fc39.aarch64
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 0
  swapTotal: 0
  uptime: 0h 2m 37.00s
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /var/home/core/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/core/.local/share/containers/storage
  graphRootAllocated: 106769133568
  graphRootUsed: 3039174656
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /run/user/501/containers
  transientStore: false
  volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
  APIVersion: 4.7.2
  Built: 1698762633
  BuiltTime: Tue Oct 31 09:30:33 2023
  GitCommit: ""
  GoVersion: go1.21.1
  Os: linux
  OsArch: linux/arm64
  Version: 4.7.2

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

Running the latest podman from Homebrew

Additional information

No response

kgibm commented 10 months ago

The mount is still defined in the machine's JSON configuration file:

$ grep -A 8 Mounts ~/.config/containers/podman/machine/qemu/podman-machine-default.json
 "Mounts": [
  {
   "ReadOnly": false,
   "Source": "/Users/kevin",
   "Tag": "vol0",
   "Target": "/mnt/host",
   "Type": "9p"
  }
 ],
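A more robust way to confirm the mount definition survives in the machine config is to parse the JSON rather than grep it. A minimal sketch, using a sample file that mirrors the Mounts entry above (on a real host you would point SAMPLE at ~/.config/containers/podman/machine/qemu/podman-machine-default.json instead):

```shell
# Sample config standing in for the real machine JSON; the field names
# follow the grep output above.
SAMPLE=/tmp/podman-machine-sample.json
cat > "$SAMPLE" <<'EOF'
{
  "Mounts": [
    {
      "ReadOnly": false,
      "Source": "/Users/kevin",
      "Tag": "vol0",
      "Target": "/mnt/host",
      "Type": "9p"
    }
  ]
}
EOF

# Print each configured mount as "tag -> target (type)".
python3 - "$SAMPLE" <<'EOF'
import json, sys
with open(sys.argv[1]) as f:
    cfg = json.load(f)
for m in cfg.get("Mounts", []):
    print("{} -> {} ({})".format(m["Tag"], m["Target"], m["Type"]))
EOF
```

For the sample above this prints `vol0 -> /mnt/host (9p)`, i.e. the mount is still configured even though it is no longer active in the VM.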
kgibm commented 10 months ago

I found this in journalctl from the first run of the machine:

Nov 28 11:09:37 localhost.localdomain sudo[2307]:     core : PWD=/var/home/core ; USER=root ; COMMAND=/usr/bin/mount -t 9p -o trans=virtio vol0 /mnt/host -o version=9p2000.L,msize=131072

So I just manually re-ran that and it worked:

$ podman machine ssh "sudo /usr/bin/mount -t 9p -o trans=virtio vol0 /mnt/host -o version=9p2000.L,msize=131072"
$ podman machine ssh "ls /mnt/host | wc -l"
20

Then I stopped and started the machine, and the mount was still there. So this seems to be a permanent workaround.
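The workaround above can be wrapped in a small guard so the mount command is only re-run when the tag is actually missing. A sketch, exercised here against a sample file in /proc/mounts format (the helper name and sample file are illustrative; inside the VM you would check /proc/mounts itself, e.g. via podman machine ssh "cat /proc/mounts"):

```shell
# Illustrative helper: report whether a 9p tag appears in a mounts table.
# $1 = 9p tag, $2 = file in /proc/mounts format (device mountpoint fstype ...).
is_9p_mounted() {
    awk -v tag="$1" '$1 == tag && $3 == "9p" { found = 1 } END { exit !found }' "$2"
}

# Sample table standing in for /proc/mounts inside the machine.
printf 'vol0 /mnt/host 9p rw,trans=virtio,msize=131072,version=9p2000.L 0 0\n' \
    > /tmp/mounts.sample

if is_9p_mounted vol0 /tmp/mounts.sample; then
    echo "vol0 is mounted"
else
    # On a real machine this is where you would re-run the mount command
    # recovered from journalctl above:
    # podman machine ssh "sudo /usr/bin/mount -t 9p -o trans=virtio vol0 /mnt/host -o version=9p2000.L,msize=131072"
    echo "vol0 is missing"
fi
```

With the sample line present, the check reports `vol0 is mounted`; if the line were absent it would fall through to the remount branch.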

Luap99 commented 10 months ago

If you reboot from within the VM, the volume mount will be cleared; you must use podman machine stop and podman machine start instead.

Duplicate of https://github.com/containers/podman/issues/15976