containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

Image ID not consistent between Podman and Docker in image history #21198

Open nwallace83 opened 8 months ago

nwallace83 commented 8 months ago

Issue Description

We are seeing the error below when using Podman in Azure DevOps with the built-in Docker@2 task. The docker command is aliased to podman.

/app/azure/pipeline_tools/docker inspect -f {{.RootFS.Layers}}
Error: no names or ids specified
##[error]Error: no names or ids specified

After investigating the issue, we found that the task gets the image ID from docker history and expects the ID to include sha256: as a prefix. Podman does not use the same ID format as Docker (see below).

$ docker history --format "layerId:{{.ID}}" --no-trunc 511ee6833cb8
...
layerId:sha256:511ee6833cb859d887bb090fe8c948bb268abc10c42f8c5830e25de78a72c9e8
...
$ podman history --format "layerId:{{.ID}}" --no-trunc 511ee6833cb8
...
layerId:511ee6833cb859d887bb090fe8c948bb268abc10c42f8c5830e25de78a72c9e8
...
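The difference above can be worked around by normalizing the ID before handing it to tooling that expects Docker's format. A minimal sketch (hypothetical helper, not part of Podman) that prepends the prefix only when it is missing:

```shell
# Hypothetical normalization: ensure the image ID carries the "sha256:"
# prefix that Docker's history output includes but Podman's omits.
id="511ee6833cb859d887bb090fe8c948bb268abc10c42f8c5830e25de78a72c9e8"
case "$id" in
  sha256:*) normalized="$id" ;;          # already in Docker's format
  *)        normalized="sha256:$id" ;;   # Podman's bare-hex format
esac
echo "$normalized"
```

The case statement makes the helper idempotent, so it is safe to apply to output from either tool.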

Steps to reproduce the issue

  1. Run docker history and podman history on the same image
  2. Compare the ID field between the two outputs

Describe the results you received

Podman and Docker have different ID formats. The difference causes Azure DevOps to use a blank image ID, resulting in errors.

Describe the results you expected

Expected Podman to be consistent with Docker's image ID format.

podman info output

$ podman version
Client:       Podman Engine
Version:      4.6.1
API Version:  4.6.1
Go Version:   go1.20.10
Built:        Sat Dec  2 08:05:24 2023
OS/Arch:      linux/amd64

$ podman info
host:
  arch: amd64
  buildahVersion: 1.31.3
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: cgroupfs
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.8-1.el9.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.8, commit: aadb7c890ac6283eb4666d92690238e5fbdec5c7'
  cpuUtilization:
    idlePercent: 96
    systemPercent: 1.38
    userPercent: 2.61
  cpus: 8
  databaseBackend: boltdb
  distribution:
    distribution: '"rhel"'
    version: "9.3"
  eventLogger: file
  freeLocks: 2048
  hostname: xxxxxxx
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 1
      size: 999
    - container_id: 1000
      host_id: 1001
      size: 64535
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 1
      size: 999
    - container_id: 1000
      host_id: 1001
      size: 64535
  kernel: 6.2.0-39-generic
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 245280768
  memTotal: 10418790400
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: Unknown
    package: netavark-1.7.0-2.el9_3.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.7.0
  ociRuntime:
    name: crun
    package: crun-1.8.7-1.el9.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.7
      commit: 53a9996ce82d1ee818349bdcc64797a1fa0433c4
      rundir: /tmp/podman-run-1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    path: /tmp/podman-run-1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.1-1.el9.x86_64
    version: |-
      slirp4netns version 1.2.1
      commit: 09e31e92fa3d2a1d3ca261adaeb012c8d75a8194
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 0
  swapTotal: 0
  uptime: 2h 58m 54.00s (Approximately 0.08 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
 xxxxxxx
store:
  configFile: /home/xxxxxxx/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/xxxxxxx/.local/share/containers/storage
  graphRootAllocated: 83423059968
  graphRootUsed: 25232728064
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /tmp/containers-user-1000/containers
  transientStore: false
  volumePath: /home/xxxxxxx/.local/share/containers/storage/volumes
version:
  APIVersion: 4.6.1
  Built: 1701529524
  BuiltTime: Sat Dec  2 08:05:24 2023
  GitCommit: ""
  GoVersion: go1.20.10
  Os: linux
  OsArch: linux/amd64
  Version: 4.6.1

Podman in a container

Yes

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

Additional environment details

No response

Additional information

No response

rhatdan commented 8 months ago

@mheon @Luap99 This is something we looked at before. Is it time with 5.0 to match Docker functionality?

mheon commented 8 months ago

We can discuss at the cabal tomorrow, but I don't see a strong reason not to. We're making similar changes for Docker compat as part of 5.0.

Luap99 commented 8 months ago

No objection for a 5.0 but if we do this then we need to identify all places where this is relevant as it must be done consistently for all the commands/APIs.

mheon commented 8 months ago

Digging in further here - at least for Podman, that's not a SHA256. We're printing image ID, not image digest. From the fact that our output otherwise matches Docker as well, perhaps it's the same for them and the sha256 was mistakenly added? I need to confirm with a Docker install to see what's going on.

mtrmac commented 8 months ago

(The data we are talking about is not a layer ID, but an image ID, AFAICS.)

Image IDs happen to be SHA256 values as well. So, in that sense, whether or not to include the sha256: prefix is basically cosmetic.

One thing to think about when designing things is that the algorithm, in principle, might not be SHA-256 forever (but pragmatically, switching would be very hard). That impacts both what the output format should be, and how matching can happen.
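The point about not hard-coding SHA-256 can be illustrated by parsing the algorithm-prefixed form rather than assuming it. A sketch (the digest value is taken from the report above):

```shell
# A digest string like "alg:hex" carries its own algorithm name;
# splitting on the first ":" avoids baking SHA-256 into matching code.
digest="sha256:511ee6833cb859d887bb090fe8c948bb268abc10c42f8c5830e25de78a72c9e8"
alg="${digest%%:*}"   # algorithm name, e.g. "sha256"
hex="${digest#*:}"    # the hex-encoded hash value
echo "$alg $hex"
```

This is one reason the prefixed format is arguably the more future-proof output.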

Also consider https://github.com/containers/image/pull/1980: the image ID in c/storage may be different for “the same” image depending on how it was pulled, precisely because the different pull methods provide security guarantees about different kinds of data, so we must maintain the two separately.

All in all, image IDs can be used to prove image equality but not image inequality.

github-actions[bot] commented 7 months ago

A friendly reminder that this issue had no activity for 30 days.

LarsAC commented 3 months ago

Any workaround to get this to work, other than running podman via bash in the Azure pipeline?
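One possible stopgap (a hypothetical post-processing step, not an official fix) is to pipe the history output through sed to restore the prefix Docker emits. The sample line below stands in for real `podman history` output:

```shell
# Hypothetical workaround: rewrite bare-hex layer IDs into Docker's
# "sha256:"-prefixed form before downstream tasks consume them.
sample="layerId:511ee6833cb859d887bb090fe8c948bb268abc10c42f8c5830e25de78a72c9e8"
normalized=$(printf '%s\n' "$sample" |
  sed 's/^layerId:\([0-9a-f]\{64\}\)$/layerId:sha256:\1/')
echo "$normalized"
```

Whether this can be wedged into the Docker@2 task without wrapping the binary is another question; a shim script in place of the docker alias could apply the same transformation.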

johnwc commented 1 month ago

I'm running into this issue all of a sudden in a pipeline that has been working non-stop for months. Is there a workaround?

takis-kapas commented 1 month ago

Same issue here, running Podman (through a docker alias) in an OpenShift container (as an Azure DevOps agent) in Azure DevOps YAML pipelines.

Error: no names or ids specified
      ##[debug]Exit code 125 received from tool '/usr/bin/docker'
      ##[debug]STDIO streams have closed for tool '/usr/bin/docker'
      ##[error]Error: no names or ids specified
      ##[debug]Processed: ##vso[task.issue type=error;source=TaskInternal;correlationId=6f38dde4-97d1-495d-a1e1-226e2fcd0faa;]Error: no names or ids specified
      ##[debug]task result: Failed
      ##[error]Unhandled: The process '/usr/bin/docker' failed with exit code 125
      ##[debug]Processed: ##vso[task.issue type=error;source=TaskInternal;correlationId=6f38dde4-97d1-495d-a1e1-226e2fcd0faa;]Unhandled: The process '/usr/bin/docker' failed with exit code 125
      ##[debug]Processed: ##vso[task.complete result=Failed;]Unhandled: The process '/usr/bin/docker' failed with exit code 125
      ##[error]Error: The process '/usr/bin/docker' failed with exit code 125
createdAt:Error: template: history:1:23: executing "history" at <.CreatedAt>: can't evaluate field CreatedAt in type images.historyReporter
      ##[error]Error: template: history:1:23: executing "history" at <.CreatedAt>: can't evaluate field CreatedAt in type images.historyReporter
      ##[error]Unhandled: The process '/usr/bin/docker' failed with exit code 125
      ##[error]Error: The process '/usr/bin/docker' failed with exit code 125

The latter is a known issue with Podman on Ubuntu 22.04 and the ADO Docker@2 task, which was remediated in Podman version 4.x.x; that is why I tried to build a container with the ubuntu:24.04 image.

Can someone look into this please? Podman installed on both ubuntu:22.04 and ubuntu:24.04 has issues when used in place of Docker with the ADO Docker@2 task.

These are different issues, but both cause the Azure DevOps pipeline to fail when running from the Docker@2 pipeline task. I don't think this is an issue with ADO; I think it is an issue with Podman's compatibility with Docker.

Please let me know if you need more information for remediation of these issues.

Most of our users pull, build, tag, and push images using the ADO Docker@2 task, so it would be really disruptive if I asked them to start performing these tasks with Bash commands in ADO Bash tasks.

EDIT 07/31/2024 8:18 PM EST: The Azure DevOps organization sprint is 241 (2024 July 25). The Azure DevOps pipeline Docker@2 task version is 2.243.0. Task on GitHub Repo

rdw4nau commented 1 month ago

We also have started to see this on one of our projects in Azure DevOps today.