containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

Cannot run container when `machine init` use custom `image-path` params. #18913

Closed · BlackHole1 closed this issue 1 year ago

BlackHole1 commented 1 year ago

Issue Description

Containers cannot run when `machine init` is given a custom `--image-path` argument.

When I do not pass `--image-path`, Podman automatically downloads Fedora CoreOS and containers run correctly.

I am building Podman locally from the latest commit: `719e322`.

Steps to reproduce the issue


  1. ./bin/darwin/podman --debug machine init oomol --image-path /Users/black-hole/Downloads/fedora-coreos-38.20230609.2.1-qemu.x86_64.qcow2.xz -v /Users:/Users -v /private:/private -v /Applications:/Applications -v /tmp:/tmp
  2. ./bin/darwin/podman --debug machine start oomol
  3. ./bin/darwin/podman --debug pull busybox
  4. ./bin/darwin/podman --debug run -it --rm busybox sh

Describe the results you received

DEBU[0000] ExitCode msg: "preparing container 38512a4000c933e9294673faaa9b5fc4faacb26b54b1b9531d797998470b7f38 for attach: container create failed (no logs from conmon): conmon bytes \"\": readobjectstart: expect { or n, but found \x00, error found in #0 byte of ...||..., bigger context ...||..."
DEBU[0000] DoRequest Method: DELETE URI: http://d/v4.6.0/libpod/containers/38512a4000c933e9294673faaa9b5fc4faacb26b54b1b9531d797998470b7f38
Error: preparing container 38512a4000c933e9294673faaa9b5fc4faacb26b54b1b9531d797998470b7f38 for attach: container create failed (no logs from conmon): conmon bytes "": readObjectStart: expect { or n, but found , error found in #0 byte of ...||..., bigger context ...||...
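
For context, `readObjectStart: expect { or n, but found \x00` is the JSON decoder's way of saying the input was empty: conmon exited without writing its status object (`conmon bytes ""`), so there was nothing to parse at byte 0. Any strict JSON parser fails the same way on empty input; a minimal Python analogue (illustration only, not Podman's actual Go code):

```python
import json

# conmon normally reports status as a small JSON object on a sync pipe;
# in this failure it wrote nothing at all ('conmon bytes ""').
conmon_output = b""

try:
    json.loads(conmon_output)
except json.JSONDecodeError as exc:
    # The parser complains about byte 0, just like readObjectStart above.
    print(f"decode failed at byte {exc.pos}: {exc.msg}")
```

So the error is a symptom, not a cause: the real question is why conmon produced no output at all.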

Describe the results you expected

The container runs successfully.

podman info output

host:
  arch: amd64
  buildahVersion: 1.30.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc38.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 94.78
    systemPercent: 3.42
    userPercent: 1.8
  cpus: 1
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: coreos
    version: "38"
  eventLogger: journald
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
    uidmap:
    - container_id: 0
      host_id: 502
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
  kernel: 6.3.6-200.fc38.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 1278246912
  memTotal: 2048794624
  networkBackend: netavark
  networkBackendInfo:
    backend: ""
    dns: {}
  ociRuntime:
    name: crun
    package: crun-1.8.5-1.fc38.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.5
      commit: b6f80f766c9a89eb7b1440c0a70ab287434b17ed
      rundir: /run/user/502/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    exists: true
    path: /run/user/502/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-12.fc38.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 0
  swapTotal: 0
  uptime: 0h 10m 43.00s
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /var/home/core/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/core/.local/share/containers/storage
  graphRootAllocated: 106769133568
  graphRootUsed: 2342060032
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /run/user/502/containers
  transientStore: false
  volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
  APIVersion: 4.5.1
  Built: 1685123928
  BuiltTime: Sat May 27 01:58:48 2023
  GitCommit: ""
  GoVersion: go1.20.4
  Os: linux
  OsArch: linux/amd64
  Version: 4.5.1
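
One detail worth reading out of the `idMappings` block above: in this rootless setup, container UID 0 maps to host UID 502 (the forwarded user inside the VM), and container UIDs 1 and up map into the subordinate range starting at 100000. Each entry describes a contiguous range, and the translation is a simple offset; a small illustrative sketch (not Podman code):

```python
# Each entry maps a contiguous range, as in the podman info output:
# (container_id, host_id, size)
UID_MAP = [(0, 502, 1), (1, 100000, 1000000)]

def to_host(container_id: int, mapping=UID_MAP) -> int:
    """Translate a container UID to the host UID it actually runs as."""
    for first, host_first, size in mapping:
        if first <= container_id < first + size:
            return host_first + (container_id - first)
    raise ValueError(f"uid {container_id} is not mapped")

print(to_host(0))     # root in the container -> host uid 502
print(to_host(1000))  # unprivileged container uid -> 100999
```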

### Podman in a container

No

### Privileged Or Rootless

Rootless

### Upstream Latest Release

No

### Additional environment details

```shell
> qemu-system-x86_64 --version
QEMU emulator version 8.0.0
Copyright (c) 2003-2022 Fabrice Bellard and the QEMU Project developers
```

### Additional information

 ./bin/darwin/podman --debug machine init oomol --image-path /Users/black-hole/Downloads/fedora-coreos-38.20230609.2.1-qemu.x86_64.qcow2.xz -v /Users:/Users -v /private:/private -v /Applications:/Applications -v /tmp:/tmp
INFO[0000] ./bin/darwin/podman filtering at log level debug
DEBU[0000] Using Podman machine with `qemu` virtualization provider
Extracting compressed file
Image resized.
Machine init complete
To start your machine run:

    podman machine start oomol

DEBU[0022] Called machine init.PersistentPostRunE(./bin/darwin/podman --debug machine init oomol --image-path /Users/black-hole/Downloads/fedora-coreos-38.20230609.2.1-qemu.x86_64.qcow2.xz -v /Users:/Users -v /private:/private -v /Applications:/Applications -v /tmp:/tmp)
DEBU[0022] Shutting down engines
./bin/darwin/podman --debug machine start oomol                                                                                                                                                                       
INFO[0000] ./bin/darwin/podman filtering at log level debug
DEBU[0000] Using Podman machine with `qemu` virtualization provider
Starting machine "oomol"
[/usr/local/bin/gvproxy -listen-qemu unix:///var/folders/zm/g19w916x2x36bt16htrwtsh80000gp/T/podman/qmp_oomol.sock -pid-file /var/folders/zm/g19w916x2x36bt16htrwtsh80000gp/T/podman/oomol_proxy.pid -ssh-port 53045 -forward-sock /Users/black-hole/.local/share/containers/podman/machine/qemu/podman.sock -forward-dest /run/user/502/podman/podman.sock -forward-user core -forward-identity /Users/black-hole/.ssh/oomol --debug]
DEBU[0000] qemu cmd: [/Applications/OomolStudio.app/Contents/Resources/container/qemu/bin/qemu-system-x86_64 -m 2048 -smp 1 -fw_cfg name=opt/com.coreos/config,file=/Users/black-hole/.config/containers/podman/machine/qemu/oomol.ign -qmp unix:/var/folders/zm/g19w916x2x36bt16htrwtsh80000gp/T/podman/qmp_oomol.sock,server=on,wait=off -netdev socket,id=vlan,fd=3 -device virtio-net-pci,netdev=vlan,mac=5a:94:ef:e4:0c:ee -device virtio-serial -chardev socket,path=/var/folders/zm/g19w916x2x36bt16htrwtsh80000gp/T/podman/oomol_ready.sock,server=on,wait=off,id=aoomol_ready -device virtserialport,chardev=aoomol_ready,name=org.fedoraproject.port.0 -pidfile /var/folders/zm/g19w916x2x36bt16htrwtsh80000gp/T/podman/oomol_vm.pid -machine q35,accel=hvf:tcg -cpu host -virtfs local,path=/Users,mount_tag=vol0,security_model=none -virtfs local,path=/private,mount_tag=vol1,security_model=none -virtfs local,path=/Applications,mount_tag=vol2,security_model=none -virtfs local,path=/tmp,mount_tag=vol3,security_model=none -drive if=virtio,file=/Users/black-hole/.local/share/containers/podman/machine/qemu/oomol_fedora-coreos-38.20230609.2.1-qemu.x86_64.qcow2 -fw_cfg name=opt/com.coreos/environment,string=ZnRwX3Byb3h5PSJodHRwOi8vMTI3LjAuMC4xOjg4ODgifG5vX3Byb3h5PSJsb2NhbGhvc3QsMTI3LjAuMC4xLGxvY2FsYWRkcmVzcywubG9jYWxkb21haW4uY29tInxGVFBfUFJPWFk9Imh0dHA6Ly8xMjcuMC4wLjE6ODg4OCI=]
Waiting for VM ...
Mounting volume... /Users:/Users
DEBU[0113] Executing: ssh [-i /Users/black-hole/.ssh/oomol -p 53045 core@localhost -o StrictHostKeyChecking=no -o LogLevel=ERROR -o SetEnv=LC_ALL= -q -- sudo chattr -i / ; sudo mkdir -p /Users ; sudo chattr +i / ;]
DEBU[0113] Executing: ssh [-i /Users/black-hole/.ssh/oomol -p 53045 core@localhost -o StrictHostKeyChecking=no -o LogLevel=ERROR -o SetEnv=LC_ALL= -q -- sudo mount -t 9p -o trans=virtio vol0 /Users -o version=9p2000.L,msize=131072]
Mounting volume... /private:/private
DEBU[0114] Executing: ssh [-i /Users/black-hole/.ssh/oomol -p 53045 core@localhost -o StrictHostKeyChecking=no -o LogLevel=ERROR -o SetEnv=LC_ALL= -q -- sudo chattr -i / ; sudo mkdir -p /private ; sudo chattr +i / ;]
DEBU[0114] Executing: ssh [-i /Users/black-hole/.ssh/oomol -p 53045 core@localhost -o StrictHostKeyChecking=no -o LogLevel=ERROR -o SetEnv=LC_ALL= -q -- sudo mount -t 9p -o trans=virtio vol1 /private -o version=9p2000.L,msize=131072]
Mounting volume... /Applications:/Applications
DEBU[0114] Executing: ssh [-i /Users/black-hole/.ssh/oomol -p 53045 core@localhost -o StrictHostKeyChecking=no -o LogLevel=ERROR -o SetEnv=LC_ALL= -q -- sudo chattr -i / ; sudo mkdir -p /Applications ; sudo chattr +i / ;]
DEBU[0115] Executing: ssh [-i /Users/black-hole/.ssh/oomol -p 53045 core@localhost -o StrictHostKeyChecking=no -o LogLevel=ERROR -o SetEnv=LC_ALL= -q -- sudo mount -t 9p -o trans=virtio vol2 /Applications -o version=9p2000.L,msize=131072]
Mounting volume... /tmp:/tmp
DEBU[0115] Executing: ssh [-i /Users/black-hole/.ssh/oomol -p 53045 core@localhost -o StrictHostKeyChecking=no -o LogLevel=ERROR -o SetEnv=LC_ALL= -q -- sudo chattr -i / ; sudo mkdir -p /tmp ; sudo chattr +i / ;]
DEBU[0116] Executing: ssh [-i /Users/black-hole/.ssh/oomol -p 53045 core@localhost -o StrictHostKeyChecking=no -o LogLevel=ERROR -o SetEnv=LC_ALL= -q -- sudo mount -t 9p -o trans=virtio vol3 /tmp -o version=9p2000.L,msize=131072]

This machine is currently configured in rootless mode. If your containers
require root permissions (e.g. ports < 1024), or if you run into compatibility
issues with non-podman clients, you can switch using the following command:

    podman machine set --rootful oomol

API forwarding listening on: /Users/black-hole/.local/share/containers/podman/machine/qemu/podman.sock

The system helper service is not installed; the default Docker API socket
address can't be used by podman. If you would like to install it run the
following commands:

    sudo /Users/black-hole/Code/Job/oomol/podman/podman/bin/darwin/podman-mac-helper install
    podman machine stop oomol; podman machine start oomol

You can still connect Docker API clients by setting DOCKER_HOST using the
following command in your terminal session:

    export DOCKER_HOST='unix:///Users/black-hole/.local/share/containers/podman/machine/qemu/podman.sock'

Machine "oomol" started successfully
DEBU[0116] Called machine start.PersistentPostRunE(./bin/darwin/podman --debug machine start oomol)
DEBU[0116] Shutting down engines
./bin/darwin/podman --debug pull busybox                                                                                                                                                                              
INFO[0000] ./bin/darwin/podman filtering at log level debug
DEBU[0000] Called pull.PersistentPreRunE(./bin/darwin/podman --debug pull busybox)
DEBU[0000] SSH Ident Key "/Users/black-hole/.ssh/oomol" SHA256:IELM/jdsKNns2iXpITpWa57KRJsC0eRhd1UEUkRshkw ssh-ed25519
DEBU[0000] DoRequest Method: GET URI: http://d/v4.6.0/libpod/_ping
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Found credentials for ghcr.io in credential helper containers-auth.json in file /Users/black-hole/.config/containers/auth.json
DEBU[0000] No credentials matching docker.io found in /Users/black-hole/.config/containers/auth.json
DEBU[0000] No credentials matching docker.io found in /Users/black-hole/.config/containers/auth.json
DEBU[0000] Found an empty credential entry "https://index.docker.io/v1/" in "/Users/black-hole/.docker/config.json" (an unhandled credential helper marker?), moving on
DEBU[0000] No credentials matching docker.io found in /Users/black-hole/.dockercfg
DEBU[0000] No credentials for docker.io found
DEBU[0000] DoRequest Method: POST URI: http://d/v4.6.0/libpod/images/pull
Resolved "busybox" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/busybox:latest...
Getting image source signatures
Copying blob sha256:71d064a1ac7d46bdcac82ea768aba4ebbe2a05ccbd3a4a82174c18cf51b67ab7
Copying config sha256:b539af69bc01c6c1c1eae5474a94b0abaab36b93c165c0cf30b7a0ab294135a3
Writing manifest to image destination
Storing signatures
b539af69bc01c6c1c1eae5474a94b0abaab36b93c165c0cf30b7a0ab294135a3
DEBU[0004] Called pull.PersistentPostRunE(./bin/darwin/podman --debug pull busybox)
DEBU[0004] Shutting down engines
./bin/darwin/podman --debug run -it --rm busybox sh                                                                                                                                                                   
INFO[0000] ./bin/darwin/podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(./bin/darwin/podman --debug run -it --rm busybox sh)
DEBU[0000] SSH Ident Key "/Users/black-hole/.ssh/oomol" SHA256:IELM/jdsKNns2iXpITpWa57KRJsC0eRhd1UEUkRshkw ssh-ed25519
DEBU[0000] DoRequest Method: GET URI: http://d/v4.6.0/libpod/_ping
DEBU[0000] DoRequest Method: GET URI: http://d/v4.6.0/libpod/networks/pasta/exists
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Found credentials for ghcr.io in credential helper containers-auth.json in file /Users/black-hole/.config/containers/auth.json
DEBU[0000] No credentials matching docker.io found in /Users/black-hole/.config/containers/auth.json
DEBU[0000] No credentials matching docker.io found in /Users/black-hole/.config/containers/auth.json
DEBU[0000] Found an empty credential entry "https://index.docker.io/v1/" in "/Users/black-hole/.docker/config.json" (an unhandled credential helper marker?), moving on
DEBU[0000] No credentials matching docker.io found in /Users/black-hole/.dockercfg
DEBU[0000] No credentials for docker.io found
DEBU[0000] DoRequest Method: POST URI: http://d/v4.6.0/libpod/images/pull
DEBU[0000] DoRequest Method: GET URI: http://d/v4.6.0/libpod/images/busybox/json
DEBU[0000] DoRequest Method: POST URI: http://d/v4.6.0/libpod/containers/create
DEBU[0000] Enabling signal proxying
DEBU[0000] Enabling signal proxying
DEBU[0000] DoRequest Method: GET URI: http://d/v4.6.0/libpod/containers/38512a4000c933e9294673faaa9b5fc4faacb26b54b1b9531d797998470b7f38/json
DEBU[0000] DoRequest Method: POST URI: http://d/v4.6.0/libpod/containers/38512a4000c933e9294673faaa9b5fc4faacb26b54b1b9531d797998470b7f38/attach
DEBU[0000] ExitCode msg: "preparing container 38512a4000c933e9294673faaa9b5fc4faacb26b54b1b9531d797998470b7f38 for attach: container create failed (no logs from conmon): conmon bytes \"\": readobjectstart: expect { or n, but found \x00, error found in #0 byte of ...||..., bigger context ...||..."
DEBU[0000] DoRequest Method: DELETE URI: http://d/v4.6.0/libpod/containers/38512a4000c933e9294673faaa9b5fc4faacb26b54b1b9531d797998470b7f38
Error: preparing container 38512a4000c933e9294673faaa9b5fc4faacb26b54b1b9531d797998470b7f38 for attach: container create failed (no logs from conmon): conmon bytes "": readObjectStart: expect { or n, but found , error found in #0 byte of ...||..., bigger context ...||...
DEBU[0000] Shutting down engines
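
As an aside, the long `-fw_cfg name=opt/com.coreos/environment,string=...` argument in the qemu command above is a base64-encoded, `|`-separated list of proxy environment variables that `podman machine` forwards into the guest. It can be decoded locally to check what the VM will see:

```python
import base64

# The string= payload from the qemu -fw_cfg argument in the log above.
payload = "ZnRwX3Byb3h5PSJodHRwOi8vMTI3LjAuMC4xOjg4ODgifG5vX3Byb3h5PSJsb2NhbGhvc3QsMTI3LjAuMC4xLGxvY2FsYWRkcmVzcywubG9jYWxkb21haW4uY29tInxGVFBfUFJPWFk9Imh0dHA6Ly8xMjcuMC4wLjE6ODg4OCI="

# Entries are separated by "|"; each is NAME="value".
for entry in base64.b64decode(payload).decode().split("|"):
    print(entry)
```

Decoding this one shows `ftp_proxy`, `no_proxy`, and `FTP_PROXY` settings pointing at a local proxy on 127.0.0.1:8888, which is unrelated to the failure but confirms the host proxy environment is being passed through.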
Luap99 commented 1 year ago

Did you only change the image? Because you cannot mount `-v /tmp:/tmp`; this is known to be broken: https://github.com/containers/podman/issues/18230
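
Per #18230, the fix is simply to drop the host `/tmp` mount; shadowing the guest's `/tmp` with a 9p mount reportedly breaks container creation. A sketch of the adjusted commands, assuming the same machine name and image path as above:

```shell
# Recreate the machine without the problematic /tmp mount (see #18230).
./bin/darwin/podman machine rm -f oomol
./bin/darwin/podman machine init oomol \
    --image-path /Users/black-hole/Downloads/fedora-coreos-38.20230609.2.1-qemu.x86_64.qcow2.xz \
    -v /Users:/Users -v /private:/private -v /Applications:/Applications
./bin/darwin/podman machine start oomol
./bin/darwin/podman run -it --rm busybox sh
```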

BlackHole1 commented 1 year ago

The Fedora CoreOS image that Podman automatically downloads is fedora-coreos-38.20230609.2.1-qemu.x86_64.qcow2, and I used that same image with `--image-path`.

BlackHole1 commented 1 year ago

> Did you only change the image? Because you cannot mount `-v /tmp:/tmp`; this is known to be broken: #18230

Thank you for your reply. I will try it now.

BlackHole1 commented 1 year ago

@Luap99 Following your advice, the container now runs successfully. Thank you!

BTW, I would like to know how to obtain detailed logs like the ones you provided in this comment: https://github.com/containers/podman/issues/18230#issuecomment-1554768539