containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

Some images result in the error: copying system image from manifest list: writing blob: adding layer with blob: processing tar file(container ID 1000 cannot be mapped to a host ID): exit status 1 #22803

Open Zivodor opened 1 month ago

Zivodor commented 1 month ago

Issue Description

When attempting to create containers for some images the command fails with the error:

Error: copying system image from manifest list: writing blob: adding layer with blob "sha256:9f16480e2ff54481cb1ea1553429bf399e8269985ab0dec5b5af6f55ea747d3f": processing tar file(container ID 1000 cannot be mapped to a host ID): exit status 1

Steps to reproduce the issue

  1. Create a podman-compose file (provided below)
  2. Perform podman-compose up

Describe the results you received

You can see the logs here

Describe the results you expected

Dashy should be pulled down and started successfully.

podman info output

host:
  arch: amd64
  buildahVersion: 1.33.7
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.6+ds1-1_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.6, commit: unknown'
  cpuUtilization:
    idlePercent: 99.76
    systemPercent: 0.06
    userPercent: 0.18
  cpus: 8
  databaseBackend: sqlite
  distribution:
    codename: bookworm
    distribution: debian
    version: "12"
  eventLogger: journald
  freeLocks: 2015
  hostname: project-hydra
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 6.1.0-21-amd64
  linkmode: dynamic
  logDriver: journald
  memFree: 15922044928
  memTotal: 16628264960
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns_1.4.0-3_amd64
      path: /usr/lib/podman/aardvark-dns
      version: aardvark-dns 1.4.0
    package: netavark_1.4.0-3_amd64
    path: /usr/lib/podman/netavark
    version: netavark 1.4.0
  ociRuntime:
    name: crun
    package: crun_1.8.1-1+deb12u1_amd64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.1
      commit: f8a096be060b22ccd3d5f3ebe44108517fbf6c30
      rundir: /run/user/1001/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt_0.0~git20230309.7c7625d-1_amd64
    version: |
      pasta unknown version
      Copyright Red Hat
      GNU Affero GPL version 3 or later <https://www.gnu.org/licenses/agpl-3.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1001/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.0-1_amd64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 1023406080
  swapTotal: 1023406080
  uptime: 1h 13m 15.00s (Approximately 0.04 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
  - ghcr.io
store:
  configFile: /home/podman/.config/containers/storage.conf
  containerStore:
    number: 8
    paused: 0
    running: 1
    stopped: 7
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/podman/.local/share/containers/storage
  graphRootAllocated: 196682272768
  graphRootUsed: 9006194688
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 38
  runRoot: /run/user/1001/containers
  transientStore: false
  volumePath: /home/podman/.local/share/containers/storage/volumes
version:
  APIVersion: 4.9.4
  Built: 0
  BuiltTime: Wed Dec 31 17:00:00 1969
  GitCommit: ""
  GoVersion: go1.22.1
  Os: linux
  OsArch: linux/amd64
  Version: 4.9.4

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

No response

Additional information

I am setting up my first home server on Debian 12.5. I have updated my deps to allow me to use the latest podman and podman-compose. As a part of that process I have set myself some semi-arbitrary security rules, not for any one specific reason more so for the learning experience and to get myself immersed in resolving issues. Some of these rules (and the ones I think are the likely culprits) are:

1. All containers must be run rootlessly, no exceptions
2. All services must only be accessible through the Wireguard VPN
3. All services must use subuids and subgids

So far, this has been going... well. I have these services running and working well in rootless containers:

I am able to connect to my VPN and am able to navigate to my services using the urls configured in Caddy (using self-signed certificates) and everything just works.

The next phase was to set up a dashboard service, as I have an oldish touchscreen all-in-one PC that I plan to use as a sort of terminal in my kitchen. I looked at these possibilities, all of which result in the above error when I try to pull them.

When I try to create any of these, whether through podman directly or through podman-compose, it fails with the error:

Error: copying system image from manifest list: writing blob: adding layer with blob "sha256:9f16480e2ff54481cb1ea1553429bf399e8269985ab0dec5b5af6f55ea747d3f": processing tar file(container ID 1000 cannot be mapped to a host ID): exit status 1

This is my compose file:

version: '3.8'

services:
  dashy:
    image: lissy93/dashy:latest
    container_name: dashy
    ports:
      - "8002:8080"
    volumes:
      - ./my-conf.yml:/app/user-data/conf.yml:Z
    restart: unless-stopped

My subuid and subgid files look like this:

admin:100000:65536
podman:165536:65536
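Each line in these files is user:start:count, so the entries above grant admin the host ID range 100000–165535 and podman the range 165536–231071. As a quick sanity check that such entries are well-formed and non-overlapping, here is an illustrative Python sketch (not part of any podman tooling):

```python
# Illustrative sketch: parse /etc/subuid-style entries and check for overlap.
def parse_subid(lines):
    """Parse 'user:start:count' lines into (user, start, count) tuples."""
    entries = []
    for line in lines:
        user, start, count = line.strip().split(":")
        entries.append((user, int(start), int(count)))
    return entries

def ranges_overlap(entries):
    """Return True if any two sub-ID ranges intersect."""
    spans = sorted((start, start + count) for _, start, count in entries)
    return any(a_end > b_start for (_, a_end), (b_start, _) in zip(spans, spans[1:]))

entries = parse_subid(["admin:100000:65536", "podman:165536:65536"])
# admin covers 100000-165535, podman covers 165536-231071: adjacent, no overlap.
print(ranges_overlap(entries))  # False
```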

In every compose file I have specified a uidmap using x-podman. This has worked for everything so far. I have tried adding/removing this option from the dashy config and it did not change anything.

rhatdan commented 1 month ago

podman-compose is a different repo. If you have a simple reproducer for this with straight podman that would be very helpful, otherwise this issue should be transferred to podman-compose.

Zivodor commented 1 month ago

Regardless of whether I use podman or podman-compose it fails with the same error. I ran the compose with debug, extracted the command it had generated and tried running it manually and it resulted in the same error.

A full system reset for the root user and the rootless podman user did temporarily resolve the issue for me. I believe it's related to quadlets, as I had created a .container file for my Wireguard container, and after disabling that I stopped running into the issue.

Zivodor commented 1 month ago

I also tried just calling podman pull against the image and it resulted in the same error.

rhatdan commented 1 month ago

@giuseppe PTAL

giuseppe commented 1 month ago

can you share the result of:

podman unshare cat /proc/self/uid_map

Does it reflect the configuration you have in /etc/subuid? If not, please run podman system migrate and try again; do you still get the same error?

Zivodor commented 1 month ago

podman@project-hydra:~$ podman unshare cat /proc/self/uid_map
         0       1001          1
         1     165536      65536

It is as expected. I should also note that it is not just a subset of images like I originally believed. When trying to resolve the issue I performed a podman system reset, which resolved it. After that, I enabled my wireguard.container service and tried to pull down an image that had previously worked, and got the same error.

After I stopped the service, disabled it, then did another system reset, I was able to pull all the images successfully. As soon as I enable that service I start to get this issue persistently until I reset it. I am going to share that as well:

[Container]
AddCapability=NET_ADMIN NET_RAW
ContainerName=wireguard
Environment=SERVERURL=[Correct Local Ip] SERVERPORT=[Correct Port] PEERS=# PEERDNS=auto INTERNAL_SUBNET=10.10.0.0/24
GIDMap=0:1:50
Image=docker.io/linuxserver/wireguard
Label=io.podman.compose.config-hash=4a0e91e3ad5f9fcf67930731fbf4d771c1b5f0f38ea6c5811c12c502c1304d21 io.podman.compose.project=wireguard io.podman.compose.version=1.1.0 PODMAN_SYSTEMD_UNIT=podman-compose@wireguard.service com.docker.compose.project=wireguard com.docker.compose.project.working_dir=/home/podman/appdata/wireguard com.docker.compose.project.config_files=podman-compose.yml com.docker.compose.container-number=1 com.docker.compose.service=wireguard
Network=wireguard-network
PublishPort=[Correct Port]:51820/udp
Sysctl=net.ipv4.conf.all.src_valid_mark=1 net.ipv4.conf.all.forwarding=1
UIDMap=0:1:50
Volume=/home/podman/appdata/wireguard/config:/config:Z

[Service]
Restart=always

[Install]
WantedBy=default.target
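The mechanics behind the error message can be illustrated by simulating the uid_map lookup the kernel performs: a layer containing a file owned by container UID 1000 can only be written if 1000 falls inside some mapping entry. Under the host mapping shown earlier (0:1001:1 plus 1:165536:65536) it does; under a narrow map like the UIDMap=0:1:50 above it does not. Whether that narrow map is actually in effect during the pull is exactly what is in question here. A hypothetical sketch, not Podman's actual code:

```python
# Illustrative sketch of a uid_map lookup: each entry is
# (container_start, host_start, count), as in /proc/self/uid_map.
def map_uid(uid, uid_map):
    """Return the host UID for a container UID, or None if unmapped."""
    for container_start, host_start, count in uid_map:
        if container_start <= uid < container_start + count:
            return host_start + (uid - container_start)
    return None  # "container ID ... cannot be mapped to a host ID"

full_map = [(0, 1001, 1), (1, 165536, 65536)]  # from `podman unshare cat /proc/self/uid_map`
narrow_map = [(0, 1, 50)]                       # from UIDMap=0:1:50

print(map_uid(1000, full_map))    # 166535 -> mappable
print(map_uid(1000, narrow_map))  # None -> the error above
```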

Zivodor commented 1 month ago

Alright, I don't think it has anything to do with my .container file. I am running into the issue with or without that file there.

Zivodor commented 1 month ago

I'm fairly new to all this stuff, but at the very least I can tell you that a full podman system reset does not reliably fix it. I had to delete the /home/podman/.local/share/containers/ directory in order to resolve the issue while testing today.

rsulli55 commented 3 weeks ago

I believe I am also running into the same or similar issue. I am running Fedora Server and have set up a few quadlets to run services as rootless containers. I also use UIDMap to keep the mappings across containers disjoint. Today, I was trying to update my audiobookshelf service and pull the updated image. Initially, I updated the quadlet file to use the new image, but restarting the service was failing with the processing tar file(container ID 1000 cannot be mapped to a host ID): exit status 1 error. I thought that meant I needed to update my UIDMap in some way, but I couldn't get it to work. Finally, I tried to simply pull the image, and that also produces the error:

$ podman pull ghcr.io/advplyr/audiobookshelf:2.10.1
Trying to pull ghcr.io/advplyr/audiobookshelf:2.10.1...
Getting image source signatures
Copying blob 60dba4733d48 done   | 
Copying blob e376fac3bde8 done   | 
Copying blob a5edbc7b296b done   | 
Copying blob b404b3c3a52d done   | 
Copying blob d25f557d7f31 skipped: already exists  
Copying blob 549237b48d78 done   | 
Copying blob 579ced6f4ee6 done   | 
Copying blob 0f5e4b3bfe3a done   | 
Copying blob 017d1384d304 done   | 
Copying blob 6a5424a2a7f4 done   | 
Copying blob 2b7b2cbf90bf done   | 
Error: copying system image from manifest list: writing blob: adding layer with blob "sha256:a5edbc7b296b518501cd1ac08999e0e4e399c55370bbbf7b1369503bbeb8957c": processing tar file(container ID 1000 cannot be mapped to a host ID): exit status 1

I've found that this also happens with image version 2.10.0, but 2.9.0 pulls successfully.
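One way to confirm which layer trips the error is to export the image on a machine that can pull it (e.g. with podman save or skopeo copy) and scan the layer tarballs for owner UIDs outside the mapped range. A hypothetical diagnostic sketch using Python's tarfile module; the in-memory "layer" below is a stand-in for a real layer blob:

```python
import io
import tarfile

def uids_in_layer(tar_bytes):
    """Collect the distinct owner UIDs of all entries in a layer tarball."""
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tar:
        return sorted({member.uid for member in tar.getmembers()})

# Build a tiny in-memory "layer" with one file owned by UID 1000 to demo the
# scan; in practice you would open the layer blob from the exported image.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo("app/config.json")
    data = b"{}"
    info.size = len(data)
    info.uid, info.gid = 1000, 1000
    tar.addfile(info, io.BytesIO(data))

print(uids_in_layer(buf.getvalue()))  # [1000] -> UID 1000 must be mappable
```

Any UID reported here that falls outside every uid_map entry will reproduce the "cannot be mapped to a host ID" failure at pull time.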