containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

weird error on image create from image #11023

Closed: dwhiteddsoft closed this issue 3 years ago

dwhiteddsoft commented 3 years ago

I was programmatically trying to get to the API and had some problems. Then I simply tried to curl the unix socket and got the same error. This really confuses me, and I was wondering if I could get some help. Thanks in advance.

curl -XPOST --unix-socket /run/user/1000/podman/podman.sock -v 'http://d/v1.0.0/images/create?fromImage=fakeacr.azurecr.io%2Fhelloserver:v2&credentials=fakeuid:fakepwd'
*   Trying /run/user/1000/podman/podman.sock:0...
* Connected to d (/run/user/1000/podman/podman.sock) port 80 (#0)
> POST /v1.0.0/images/create?fromImage=fakeacr.azurecr.io%2Fhelloserver:v2&credentials=fakeuid:fakepwd HTTP/1.1
> Host: d
> User-Agent: curl/7.68.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Api-Version: 1.40
< Libpod-Api-Version: 3.2.2
< Server: Libpod/3.2.2 (linux)
< Date: Thu, 22 Jul 2021 14:05:19 GMT
< Transfer-Encoding: chunked
< 
{"progressDetail":{},"error":"write /dev/stderr: input/output error\n"}
* Connection #0 to host d left intact
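
For reference, a programmatic call equivalent to the curl above might look like the sketch below. This is only a minimal example assuming Go's standard net/http dialing the rootless socket, not my actual client code; the registry, image name, and credentials are the same fake placeholders as in the curl command.

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"net/url"
	"os"
)

func main() {
	const sock = "/run/user/1000/podman/podman.sock"

	// Route every request through the unix socket; the host in the URL
	// ("d") is arbitrary, exactly as with curl --unix-socket.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", sock)
			},
		},
	}

	q := url.Values{}
	q.Set("fromImage", "fakeacr.azurecr.io/helloserver:v2") // fake placeholder
	q.Set("credentials", "fakeuid:fakepwd")                 // fake placeholder

	resp, err := client.Post("http://d/v1.0.0/images/create?"+q.Encode(), "", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, "request failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// The endpoint streams chunked JSON progress messages; just echo them.
	io.Copy(os.Stdout, resp.Body)
}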

Podman info from CLI below:

host:
  arch: amd64
  buildahVersion: 1.21.0
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.27, commit: '
  cpus: 2
  distribution:
    distribution: ubuntu
    version: "20.04"
  eventLogger: journald
  hostname: parallels-Parallels-Virtual-Platform
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.8.0-55-generic
  linkmode: dynamic
  memFree: 122478592
  memTotal: 4119932928
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version 0.20.1.5-925d-dirty
      commit: 0d42f1109fd73548f44b01b3e84d04a279e99d2e
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: 'slirp4netns: /usr/bin/slirp4netns'
    version: |-
      slirp4netns version 1.1.8
      commit: unknown
      libslirp: 4.3.1-git
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.4.3
  swapFree: 1686630400
  swapTotal: 2147479552
  uptime: 62h 51m 34.4s (Approximately 2.58 days)
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /home/parallels/.config/containers/storage.conf
  containerStore:
    number: 2
    paused: 0
    running: 1
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: 'fuse-overlayfs: /usr/bin/fuse-overlayfs'
      Version: |-
        fusermount3 version: 3.9.0
        fuse-overlayfs: version 1.5
        FUSE library version 3.9.0
        using FUSE kernel interface version 7.31
  graphRoot: /home/parallels/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 26
  runRoot: /run/user/1000/containers
  volumePath: /home/parallels/.local/share/containers/storage/volumes
version:
  APIVersion: 3.2.2
  Built: 0
  BuiltTime: Wed Dec 31 19:00:00 1969
  GitCommit: ""
  GoVersion: go1.15.2
  OsArch: linux/amd64
  Version: 3.2.2
mheon commented 3 years ago

@jwhonce PTAL - I bet this is the progress bars for pull. Is there a query option to disable those?
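
One thing worth noting for anyone hitting this from a client: the failure above arrives inside the chunked progress stream with a 200 status, so the client has to decode the streamed JSON messages and check the error field itself. Below is a minimal sketch of that, assuming Go; pullMessage is a hypothetical stand-in type mirroring the fields visible in the response above, not Podman's own type.

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
	"strings"
)

// pullMessage is a hypothetical stand-in mirroring the fields visible in
// the response above.
type pullMessage struct {
	Status string `json:"status"`
	Error  string `json:"error"`
}

// drainPull reads streamed progress messages until EOF and returns the
// first embedded error, if any.
func drainPull(body io.Reader) error {
	dec := json.NewDecoder(body)
	for {
		var msg pullMessage
		if err := dec.Decode(&msg); err == io.EOF {
			return nil
		} else if err != nil {
			return err
		}
		if msg.Error != "" {
			return fmt.Errorf("pull failed: %s", strings.TrimSpace(msg.Error))
		}
		if msg.Status != "" {
			fmt.Println(msg.Status) // ordinary progress line
		}
	}
}

func main() {
	// Stand-in for resp.Body; this is the exact message returned above.
	sample := `{"progressDetail":{},"error":"write /dev/stderr: input/output error\n"}`
	if err := drainPull(strings.NewReader(sample)); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}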

dwhiteddsoft commented 3 years ago

While I might agree with you, the command should keep running, no? When I do a

podman image list

some number of minutes later, the image does not appear :-(

mheon commented 3 years ago

Sounds like it's still pulling in the background, and holding the image lock while doing so. @vrothberg Concur?

This does mean that pull is not terminating on client disconnect, which is definitely a bug.
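
For context on the disconnect point: in Go's net/http the request context is canceled when the client goes away, so a pull that threads that context through would stop instead of continuing in the background. The following is a generic sketch of the pattern, not Podman's actual handler; pullImage here is a hypothetical stand-in.

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// pullImage is a hypothetical stand-in for a long-running pull that
// honors cancellation.
func pullImage(ctx context.Context, name string) error {
	for i := 0; i < 100; i++ { // pretend each iteration copies a chunk
		select {
		case <-ctx.Done():
			return ctx.Err() // client went away: stop pulling
		case <-time.After(100 * time.Millisecond):
		}
	}
	return nil
}

func createHandler(w http.ResponseWriter, r *http.Request) {
	// r.Context() is canceled by net/http when the client disconnects,
	// so the pull stops instead of running on in the background.
	if err := pullImage(r.Context(), r.URL.Query().Get("fromImage")); err != nil {
		fmt.Println("pull aborted:", err)
		return
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/v1.0.0/images/create", createHandler)
	fmt.Println(http.ListenAndServe("127.0.0.1:8080", nil))
}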

dwhiteddsoft commented 3 years ago

FYI, if I do it from the CLI, everything is fine. I look forward to @vrothberg's comments.

vrothberg commented 3 years ago

Sounds like it's still pulling in the background, and holding the image lock while doing so. @vrothberg Concur?

This does mean that pull is not terminating on client disconnect, which is definitely a bug.

I cannot rule it out entirely. The (debug) logs of the server side should reveal if that's the case.

dwhiteddsoft commented 3 years ago

How do I get these? Sorry, I am not a Podman expert.

vrothberg commented 3 years ago

@dwhiteddsoft, you could run podman --log-level=info system service -t0 in a terminal. That will start the server that listens on the socket.

If the image is still being pulled, you would see the progress bars in this terminal. FWIW, I cannot reproduce this and have not yet seen this error.

dwhiteddsoft commented 3 years ago

Strangest thing I have ever seen. I could get podman info from it, but running anything else resulted in the error. Once I ran your command, it cleared up. I have been trying to replicate it all day but cannot. I am going to simply close this and thank people for their help today. Sorry I could not figure it out, but if I run up on it again, I will make sure to track the steps.

vrothberg commented 3 years ago

Thanks for reaching out, @dwhiteddsoft! I am glad it's working. As you said, if it comes up again, please reach out and we'll take another look.