containers / buildah

A tool that facilitates building OCI images.
https://buildah.io
Apache License 2.0

After podman build, the image still has uncompressed layers even with "--disable-compression" #4185

Closed whoisac closed 1 year ago

whoisac commented 2 years ago

Description

podman version 4.1.1
buildah version 1.26.2 (image-spec 1.0.2-dev, runtime-spec 1.0.2-dev)
skopeo version 1.8.0
runc-1.1.3-2.module+el8.6.0+15917+093ca6f8.x86_64
RHEL 8.6
+ podman build --disable-compression=false --no-cache --format=docker . -t testimage:latest
STEP 1/9: FROM quay.io/operator-framework/ansible-operator:v1.17.0
Trying to pull quay.io/operator-framework/ansible-operator:v1.17.0...
Getting image source signatures
Copying blob 8671113e1c57 done
Copying blob 01e95d01cf4c done
+ podman images --digests
REPOSITORY                                   TAG         DIGEST                                                                   IMAGE ID      CREATED       SIZE
localhost/testimage                          latest      sha256:6a615d09f7ef030e7ecc1bac7d3a180300189d5b8c967b25a3ed91daa3df6940  5133628a0c31  1 second ago  1.05 GB
quay.io/operator-framework/ansible-operator  v1.17.0     sha256:9c00aa222b831fb8cac8db68c830a91bbcf775d3a635d894ecfef17289345eac  8f98398d4227  4 months ago  551 MB
*** skopeo inspect --raw containers-storage:localhost/testimage:latest | jq | grep mediaType
+ skopeo inspect --raw containers-storage:localhost/testimage:latest
+ jq
+ grep mediaType
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "mediaType": "application/vnd.docker.container.image.v1+json",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
+ echo '*** skopeo inspect containers-storage:localhost/testimage:latest | jq .Digest'
*** skopeo inspect containers-storage:localhost/testimage:latest | jq .Digest
+ skopeo inspect containers-storage:localhost/testimage:latest
+ jq .Digest
"sha256:6a615d09f7ef030e7ecc1bac7d3a180300189d5b8c967b25a3ed91daa3df6940"

All layers are uncompressed except one. After a podman push to an internal Artifactory registry:

podman push --format v2s2 --creds KEY docker://internal_artifactory_registry/testimage:latest
*** skopeo inspect --raw --creds KEY docker://internal_artifactory_registry/testimage:latest | jq | grep mediaType
+ skopeo inspect --raw --creds KEY docker://internal_artifactory_registry/testimage:latest
+ jq
+ grep mediaType
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "mediaType": "application/vnd.docker.container.image.v1+json",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",

All layers are compressed except one. When pushing to quay.io instead, the push is rejected with "manifest invalid" because of the uncompressed layer.
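quay.io's behavior is consistent with strict v2s2 validation: the schema 2 layer types it accepts are the gzipped variants. A hedged sketch of a pre-push check (that registries reject other layer types is an assumption here, inferred from the "manifest invalid" error):

```python
import json

# Gzipped layer types defined by the Docker image manifest v2s2 schema.
V2S2_COMPRESSED_LAYER_TYPES = {
    "application/vnd.docker.image.rootfs.diff.tar.gzip",
    "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip",
}

def uncompressed_layer_indices(manifest_json):
    """Indices of layers a strict v2s2 registry may reject as 'manifest invalid'."""
    layers = json.loads(manifest_json).get("layers", [])
    return [i for i, layer in enumerate(layers)
            if layer["mediaType"] not in V2S2_COMPRESSED_LAYER_TYPES]

# Hypothetical manifest mirroring the pushed image: one bare .tar layer at index 7.
GZ, TAR = ("application/vnd.docker.image.rootfs.diff.tar.gzip",
           "application/vnd.docker.image.rootfs.diff.tar")
sample = json.dumps({"layers": [{"mediaType": GZ}] * 7
                               + [{"mediaType": TAR}]
                               + [{"mediaType": GZ}] * 7})
print(uncompressed_layer_indices(sample))
```

Run against the pushed manifest above, this would flag only the single remaining `.tar` layer.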

Steps to reproduce the issue: 1. 2. 3.

Describe the results you received:

Describe the results you expected:

Output of rpm -q buildah or apt list buildah:

buildah-1.26.2-1.module+el8.6.0+15917+093ca6f8.x86_64

Output of buildah version:

buildah version 1.26.2 (image-spec 1.0.2-dev, runtime-spec 1.0.2-dev)

Output of podman version if reporting a podman build issue:

podman version 4.1.1

Output of cat /etc/*release:

NAME="Red Hat Enterprise Linux"
VERSION="8.6 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.6"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.6 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/red_hat_enterprise_linux/8/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.6
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.6"
Red Hat Enterprise Linux release 8.6 (Ootpa)
Red Hat Enterprise Linux release 8.6 (Ootpa)

Output of uname -a:

Linux zen-podmanx1.fyre.ibm.com 4.18.0-372.19.1.el8_6.x86_64 #1 SMP Mon Jul 18 11:14:02 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux

Output of cat /etc/containers/storage.conf:

# This file is the configuration file for all tools
# that use the containers/storage library. The storage.conf file
# overrides all other storage.conf files. Container engines using the
# container/storage library do not inherit fields from other storage.conf
# files.
#
#  Note: The storage.conf file overrides other storage.conf files based on this precedence:
#      /usr/containers/storage.conf
#      /etc/containers/storage.conf
#      $HOME/.config/containers/storage.conf
#      $XDG_CONFIG_HOME/containers/storage.conf (If XDG_CONFIG_HOME is set)
# See man 5 containers-storage.conf for more information
# The "container storage" table contains all of the server options.
[storage]

# Default Storage Driver, Must be set for proper operation.
driver = "overlay"

# Temporary storage location
runroot = "/run/containers/storage"

# Primary Read/Write location of container storage
# When changing the graphroot location on an SELINUX system, you must
# ensure  the labeling matches the default locations labels with the
# following commands:
# semanage fcontext -a -e /var/lib/containers/storage /NEWSTORAGEPATH
# restorecon -R -v /NEWSTORAGEPATH
graphroot = "/var/lib/containers/storage"

# Storage path for rootless users
#
# rootless_storage_path = "$HOME/.local/share/containers/storage"

[storage.options]
# Storage options to be passed to underlying storage drivers

# AdditionalImageStores is used to pass paths to additional Read/Only image stores
# Must be comma separated list.
additionalimagestores = [
]

# Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside of
# a container, to the UIDs/GIDs as they should appear outside of the container,
# and the length of the range of UIDs/GIDs.  Additional mapped sets can be
# listed and will be heeded by libraries, but there are limits to the number of
# mappings which the kernel will allow when you later attempt to run a
# container.
#
# remap-uids = 0:1668442479:65536
# remap-gids = 0:1668442479:65536

# Remap-User/Group is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid or /etc/subgid file.  Mappings are set up starting
# with an in-container ID of 0 and then a host-level ID taken from the lowest
# range that matches the specified name, and using the length of that range.
# Additional ranges are then assigned, using the ranges which specify the
# lowest host-level IDs first, to the lowest not-yet-mapped in-container ID,
# until all of the entries have been used for maps.
#
# remap-user = "containers"
# remap-group = "containers"

# Root-auto-userns-user is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid and /etc/subgid file.  These ranges will be partitioned
# to containers configured to create automatically a user namespace.  Containers
# configured to automatically create a user namespace can still overlap with containers
# having an explicit mapping set.
# This setting is ignored when running as rootless.
# root-auto-userns-user = "storage"
#
# Auto-userns-min-size is the minimum size for a user namespace created automatically.
# auto-userns-min-size=1024
#
# Auto-userns-max-size is the maximum size for a user namespace created automatically.
# auto-userns-max-size=65536

[storage.options.overlay]
# ignore_chown_errors can be set to allow a non privileged user running with
# a single UID within a user namespace to run containers. The user can pull
# and use any image even those with multiple uids.  Note multiple UIDs will be
# squashed down to the default uid in the container.  These images will have no
# separation between the users in the container. Only supported for the overlay
# and vfs drivers.
#ignore_chown_errors = "false"

# Inodes is used to set a maximum inodes of the container image.
# inodes = ""

# Path to an helper program to use for mounting the file system instead of mounting it
# directly.
#mount_program = "/usr/bin/fuse-overlayfs"

# mountopt specifies comma separated list of extra mount options
mountopt = "nodev,metacopy=on"

# Set to skip a PRIVATE bind mount on the storage home directory.
# skip_mount_home = "false"

# Size is used to set a maximum size of the container image.
# size = ""

# ForceMask specifies the permissions mask that is used for new files and
# directories.
#
# The values "shared" and "private" are accepted.
# Octal permission masks are also accepted.
#
#  "": No value specified.
#     All files/directories, get set with the permissions identified within the
#     image.
#  "private": it is equivalent to 0700.
#     All files/directories get set with 0700 permissions.  The owner has rwx
#     access to the files. No other users on the system can access the files.
#     This setting could be used with networked based homedirs.
#  "shared": it is equivalent to 0755.
#     The owner has rwx access to the files and everyone else can read, access
#     and execute them. This setting is useful for sharing containers storage
#     with other users.  For instance have a storage owned by root but shared
#     to rootless users as an additional store.
#     NOTE:  All files within the image are made readable and executable by any
#     user on the system. Even /etc/shadow within your image is now readable by
#     any user.
#
#   OCTAL: Users can experiment with other OCTAL Permissions.
#
#  Note: The force_mask Flag is an experimental feature, it could change in the
#  future.  When "force_mask" is set the original permission mask is stored in
#  the "user.containers.override_stat" xattr and the "mount_program" option must
#  be specified. Mount programs like "/usr/bin/fuse-overlayfs" present the
#  extended attribute permissions to processes within containers rather then the
#  "force_mask"  permissions.
#
# force_mask = ""

[storage.options.thinpool]
# Storage Options for thinpool

# autoextend_percent determines the amount by which pool needs to be
# grown. This is specified in terms of % of pool size. So a value of 20 means
# that when threshold is hit, pool will be grown by 20% of existing
# pool size.
# autoextend_percent = "20"

# autoextend_threshold determines the pool extension threshold in terms
# of percentage of pool size. For example, if threshold is 60, that means when
# pool is 60% full, threshold has been hit.
# autoextend_threshold = "80"

# basesize specifies the size to use when creating the base device, which
# limits the size of images and containers.
# basesize = "10G"

# blocksize specifies a custom blocksize to use for the thin pool.
# blocksize="64k"

# directlvm_device specifies a custom block storage device to use for the
# thin pool. Required if you setup devicemapper.
# directlvm_device = ""

# directlvm_device_force wipes device even if device already has a filesystem.
# directlvm_device_force = "True"

# fs specifies the filesystem type to use for the base device.
# fs="xfs"

# log_level sets the log level of devicemapper.
# 0: LogLevelSuppress 0 (Default)
# 2: LogLevelFatal
# 3: LogLevelErr
# 4: LogLevelWarn
# 5: LogLevelNotice
# 6: LogLevelInfo
# 7: LogLevelDebug
# log_level = "7"

# min_free_space specifies the min free space percent in a thin pool require for
# new device creation to succeed. Valid values are from 0% - 99%.
# Value 0% disables
# min_free_space = "10%"

# mkfsarg specifies extra mkfs arguments to be used when creating the base
# device.
# mkfsarg = ""

# metadata_size is used to set the `pvcreate --metadatasize` options when
# creating thin devices. Default is 128k
# metadata_size = ""

# Size is used to set a maximum size of the container image.
# size = ""

# use_deferred_removal marks devicemapper block device for deferred removal.
# If the thinpool is in use when the driver attempts to remove it, the driver
# tells the kernel to remove it as soon as possible. Note this does not free
# up the disk space, use deferred deletion to fully remove the thinpool.
# use_deferred_removal = "True"

# use_deferred_deletion marks thinpool device for deferred deletion.
# If the device is busy when the driver attempts to delete it, the driver
# will attempt to delete device every 30 seconds until successful.
# If the program using the driver exits, the driver will continue attempting
# to cleanup the next time the driver is used. Deferred deletion permanently
# deletes the device and all data stored in device will be lost.
# use_deferred_deletion = "True"

# xfs_nospace_max_retries specifies the maximum number of retries XFS should
# attempt to complete IO when ENOSPC (no space) error is returned by
# underlying storage device.
# xfs_nospace_max_retries = "0"
nalind commented 2 years ago

Layers don't get compressed on disk (their contents are stored in exploded form), and the manifest's description of the layers tends to reflect that. The problem appears to be that the layer didn't get compressed while it was being pushed, though I generally expect layer blobs to get compressed by default. Can you retry the podman push with --log-level=debug and show us what it says? The logic tries to skip copying/pushing a layer if it notices that the registry already has a copy of that layer, but in older versions it would forget to modify the MIME type for a layer in the manifest that had been compressed/decompressed/recompressed if it skipped pushing that layer. I thought we'd fixed that, though.
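The reuse pitfall described above can be sketched roughly as follows (all names are hypothetical, not c/image's actual API): when a layer upload is skipped because the registry already holds the compressed blob, the destination manifest entry must still be rewritten to the compressed digest and mediaType.

```python
TAR = "application/vnd.docker.image.rootfs.diff.tar"
TAR_GZ = "application/vnd.docker.image.rootfs.diff.tar.gzip"

def record_layer(entry, registry_blobs, compressed_digest, fixed=True):
    """Decide what the destination manifest should record for one layer.

    entry             -- source manifest layer, e.g. {"digest": ..., "mediaType": TAR}
    registry_blobs    -- set of digests the registry already has
    compressed_digest -- digest of this layer's gzipped form (stand-in for the
                         blob-info cache lookup)
    """
    if compressed_digest in registry_blobs:
        if fixed:
            # Skip the upload, but still rewrite the manifest entry.
            return {"digest": compressed_digest, "mediaType": TAR_GZ}
        # The old bug: skip the upload AND keep the stale uncompressed entry.
        return dict(entry)
    # Blob not on the registry: compress-and-upload would happen here,
    # then the compressed entry is recorded.
    registry_blobs.add(compressed_digest)
    return {"digest": compressed_digest, "mediaType": TAR_GZ}

src = {"digest": "sha256:aaa", "mediaType": TAR}
have = {"sha256:bbb"}  # registry already holds the gzipped blob
print(record_layer(src, have, "sha256:bbb", fixed=False)["mediaType"])  # stale TAR type leaks through
```

The single `.tar` entry in the pushed manifest matches the buggy branch: one layer was reused rather than re-uploaded, and its mediaType was never updated.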

whoisac commented 2 years ago

Thanks for explaining that the layers remain uncompressed in local storage. Here is the podman push log:

+ podman push --log-level=debug --format v2s2 --creds KEY internal_registry/testimage:latest
INFO[0000] podman filtering at log level debug
DEBU[0000] Called push.PersistentPreRunE(podman push --log-level=debug --format v2s2 --creds KEY internal_registry/testimage:latest)
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/jenkins/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Overriding run root "/run/user/1000/containers" with "/tmp/podman-run-1000/containers" from database
DEBU[0000] Overriding tmp dir "/run/user/1000/libpod/tmp" with "/tmp/podman-run-1000/libpod/tmp" from database
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/jenkins/.local/share/containers/storage
DEBU[0000] Using run root /tmp/podman-run-1000/containers
DEBU[0000] Using static dir /home/jenkins/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /tmp/podman-run-1000/libpod/tmp
DEBU[0000] Using volume path /home/jenkins/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] Not configuring container store
DEBU[0000] Initializing event backend file
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/runc"
INFO[0000] Setting parallel job count to 13
INFO[0000] podman filtering at log level debug
DEBU[0000] Called push.PersistentPreRunE(podman push --log-level=debug --format v2s2 --creds KEY internal_registry/testimage:latest)
DEBU[0000] Merged system config "/usr/share/containers/containers.conf"
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/jenkins/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Overriding run root "/run/user/1000/containers" with "/tmp/podman-run-1000/containers" from database
DEBU[0000] Overriding tmp dir "/run/user/1000/libpod/tmp" with "/tmp/podman-run-1000/libpod/tmp" from database
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/jenkins/.local/share/containers/storage
DEBU[0000] Using run root /tmp/podman-run-1000/containers
DEBU[0000] Using static dir /home/jenkins/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /tmp/podman-run-1000/libpod/tmp
DEBU[0000] Using volume path /home/jenkins/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is not being used
DEBU[0000] Cached value indicated that native-diff is usable
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend file
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/runc"
INFO[0000] Setting parallel job count to 13
DEBU[0000] Looking up image "internal_registry/testimage:latest" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Trying "internal_registry/testimage:latest" ...
DEBU[0000] parsed reference into "[overlay@/home/jenkins/.local/share/containers/storage+/tmp/podman-run-1000/containers]@7ede2eca7fe25085cac0d8ccab726ed83a572e80a92abf3092545b1b403c4e6f"
DEBU[0000] Found image "internal_registry/testimage:latest" as "internal_registry/testimage:latest" in local containers storage
DEBU[0000] Found image "internal_registry/testimage:latest" as "internal_registry/testimage:latest" in local containers storage ([overlay@/home/jenkins/.local/share/containers/storage+/tmp/podman-run-1000/containers]@7ede2eca7fe25085cac0d8ccab726ed83a572e80a92abf3092545b1b403c4e6f)
DEBU[0000] Pushing image internal_registry/testimage:latest to internal_registry/testimage:latest
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] }
DEBU[0000] Copying source image [overlay@/home/jenkins/.local/share/containers/storage+/tmp/podman-run-1000/containers]@7ede2eca7fe25085cac0d8ccab726ed83a572e80a92abf3092545b1b403c4e6f to destination image //internal_registry/testimage:latest
DEBU[0000] Returning credentials for internal_registry/testimage from DockerAuthConfig
DEBU[0000] Using registries.d directory /etc/containers/registries.d for sigstore configuration
DEBU[0000]  Using "default-docker" configuration
DEBU[0000]   Using file:///var/lib/containers/sigstore
DEBU[0000] Looking for TLS certificates and private keys in /etc/docker/certs.d/internal_registry
DEBU[0000] Loading registries configuration "/home/jenkins/.config/containers/registries.conf"
DEBU[0000] Using blob info cache at /home/jenkins/.local/share/containers/cache/blob-info-cache-v1.boltdb
DEBU[0000] IsRunningImageAllowed for image containers-storage:[overlay@/home/jenkins/.local/share/containers/storage]@7ede2eca7fe25085cac0d8ccab726ed83a572e80a92abf3092545b1b403c4e6f
DEBU[0000]  Using default policy section
DEBU[0000]  Requirement 0: allowed
DEBU[0000] Overall: allowed
Getting image source signatures
DEBU[0000] Manifest has MIME type application/vnd.docker.distribution.manifest.v2+json, ordered candidate list [application/vnd.docker.distribution.manifest.v2+json]
DEBU[0000] ... will first try using the original manifest unmodified
DEBU[0000] Checking /v2/testimage/blobs/sha256:a9820c2af00a34f160836f6ef2044d88e6019ca19b3c15ec22f34afe9d73f41c
DEBU[0000] GET https://internal_registry/v2/
DEBU[0000] Checking /v2/testimage/blobs/sha256:3d5ecee9360ea8711f32d2af0cab1eae4d53140496f961ca1a634b5e2e817412
DEBU[0000] Checking /v2/testimage/blobs/sha256:d4f0e88b20938ed69d29fce5ec4382f51e799f982c8d55fa9e827be0ef6ccaf5
DEBU[0000] Checking /v2/testimage/blobs/sha256:a733283f3af5dc4e0ccaf080ab94c3b3354cea1522085478fef52bdb238ff4b1
DEBU[0000] Checking /v2/testimage/blobs/sha256:f8afc1308a31d57af185d198c3adec16a132291ebab160cd50abcbde78407c62
DEBU[0000] Checking /v2/testimage/blobs/sha256:d3ba1354d62016337c31daf46c6a1e0b6f3f24b716705acf28768b8ddf0efe11
DEBU[0000] Ping https://internal_registry/v2/ status 401
DEBU[0000] GET https://internal_registry/artifactory/api/docker/internal-docker-local/v2/token?account=andrew.chum%40xxx.com&scope=repository%3Atestimage%3Apull%2Cpush&service=internal_registry
DEBU[0000] GET https://internal_registry/artifactory/api/docker/internal-docker-local/v2/token?account=andrew.chum%40xxx.com&scope=repository%3Atestimage%3Apull%2Cpush&service=internal_registry
DEBU[0000] GET https://internal_registry/artifactory/api/docker/internal-docker-local/v2/token?account=andrew.chum%40xxx.com&scope=repository%3Atestimage%3Apull%2Cpush&service=internal_registry
DEBU[0000] GET https://internal_registry/artifactory/api/docker/internal-docker-local/v2/token?account=andrew.chum%40xxx.com&scope=repository%3Atestimage%3Apull%2Cpush&service=internal_registry
DEBU[0000] GET https://internal_registry/artifactory/api/docker/internal-docker-local/v2/token?account=andrew.chum%40xxx.com&scope=repository%3Atestimage%3Apull%2Cpush&service=internal_registry
DEBU[0000] GET https://internal_registry/artifactory/api/docker/internal-docker-local/v2/token?account=andrew.chum%40xxx.com&scope=repository%3Atestimage%3Apull%2Cpush&service=internal_registry
DEBU[0000] HEAD https://internal_registry/v2/testimage/blobs/sha256:d3ba1354d62016337c31daf46c6a1e0b6f3f24b716705acf28768b8ddf0efe11
DEBU[0000] HEAD https://internal_registry/v2/testimage/blobs/sha256:a9820c2af00a34f160836f6ef2044d88e6019ca19b3c15ec22f34afe9d73f41c
DEBU[0000] HEAD https://internal_registry/v2/testimage/blobs/sha256:f8afc1308a31d57af185d198c3adec16a132291ebab160cd50abcbde78407c62
DEBU[0000] HEAD https://internal_registry/v2/testimage/blobs/sha256:3d5ecee9360ea8711f32d2af0cab1eae4d53140496f961ca1a634b5e2e817412
DEBU[0000] ... not present
DEBU[0000] Trying to reuse cached location sha256:4c63d9670de300356f7ab478e97790659bf3137f055c8a0125c769c774aa7354 compressed with gzip in internal_registry/testimage
DEBU[0000] Checking /v2/testimage/blobs/sha256:4c63d9670de300356f7ab478e97790659bf3137f055c8a0125c769c774aa7354
DEBU[0000] GET https://internal_registry/artifactory/api/docker/internal-docker-local/v2/token?account=andrew.chum%40xxx.com&scope=repository%3Atestimage%3Apull%2Cpush&scope=repository%3Atestimage%3Apull&service=internal_registry
DEBU[0002] ... not present
DEBU[0002] Trying to reuse cached location sha256:311e36338b5936256af11150cf5cd65b0f03fe091a743f883155bee788995082 compressed with gzip in internal_registry/testimage
DEBU[0002] Checking /v2/testimage/blobs/sha256:311e36338b5936256af11150cf5cd65b0f03fe091a743f883155bee788995082
DEBU[0002] GET https://internal_registry/artifactory/api/docker/internal-docker-local/v2/token?account=andrew.chum%40xxx.com&scope=repository%3Atestimage%3Apull%2Cpush&scope=repository%3Atestimage%3Apull&service=internal_registry
DEBU[0002] ... not present
DEBU[0002] Trying to reuse cached location sha256:373bdd6964713c129bba2b5781f5dfe34a2bb42132c2c64ddbc26f6b81eb2b4e compressed with gzip in internal_registry/testimage
DEBU[0002] Checking /v2/testimage/blobs/sha256:373bdd6964713c129bba2b5781f5dfe34a2bb42132c2c64ddbc26f6b81eb2b4e
DEBU[0002] GET https://internal_registry/artifactory/api/docker/internal-docker-local/v2/token?account=andrew.chum%40xxx.com&scope=repository%3Atestimage%3Apull%2Cpush&scope=repository%3Atestimage%3Apull&service=internal_registry
DEBU[0002] HEAD https://internal_registry/v2/testimage/blobs/sha256:d4f0e88b20938ed69d29fce5ec4382f51e799f982c8d55fa9e827be0ef6ccaf5
DEBU[0002] HEAD https://internal_registry/v2/testimage/blobs/sha256:a733283f3af5dc4e0ccaf080ab94c3b3354cea1522085478fef52bdb238ff4b1
DEBU[0002] HEAD https://internal_registry/v2/testimage/blobs/sha256:311e36338b5936256af11150cf5cd65b0f03fe091a743f883155bee788995082
DEBU[0002] HEAD https://internal_registry/v2/testimage/blobs/sha256:373bdd6964713c129bba2b5781f5dfe34a2bb42132c2c64ddbc26f6b81eb2b4e
DEBU[0002] ... not present
DEBU[0002] Trying to reuse cached location sha256:7f90fadf1b0d06a2b1f4a641760ecb121dc6a0d02f9422df2302201233e38b29 compressed with gzip in internal_registry/testimage
DEBU[0002] Checking /v2/testimage/blobs/sha256:7f90fadf1b0d06a2b1f4a641760ecb121dc6a0d02f9422df2302201233e38b29
DEBU[0002] HEAD https://internal_registry/v2/testimage/blobs/sha256:7f90fadf1b0d06a2b1f4a641760ecb121dc6a0d02f9422df2302201233e38b29
DEBU[0002] ... not present
DEBU[0002] Trying to reuse cached location sha256:3e6a35d6262cd6d06edc97c7259ec7d5928449685509776d212877a58deb79bb compressed with gzip in internal_registry/testimage
DEBU[0002] Checking /v2/testimage/blobs/sha256:3e6a35d6262cd6d06edc97c7259ec7d5928449685509776d212877a58deb79bb
DEBU[0002] HEAD https://internal_registry/v2/testimage/blobs/sha256:3e6a35d6262cd6d06edc97c7259ec7d5928449685509776d212877a58deb79bb
DEBU[0002] ... not present
DEBU[0002] Trying to reuse cached location sha256:d43d4ddd072e4276fdb28a609b56c77603132775e8488170451595c6c394dbc8 compressed with gzip in internal_registry/testimage
DEBU[0002] Checking /v2/testimage/blobs/sha256:d43d4ddd072e4276fdb28a609b56c77603132775e8488170451595c6c394dbc8
DEBU[0002] HEAD https://internal_registry/v2/testimage/blobs/sha256:d43d4ddd072e4276fdb28a609b56c77603132775e8488170451595c6c394dbc8
DEBU[0002] ... already exists
DEBU[0002] Skipping blob sha256:d4f0e88b20938ed69d29fce5ec4382f51e799f982c8d55fa9e827be0ef6ccaf5 (already present):
Copying blob 7f90fadf1b0d skipped: already exists
DEBU[0002] Checking /v2/testimage/blobs/sha256:0330cf831ddc04713ef8803fbc5d8bfadc4064a100297a2b7fb1cab53e573b7f
DEBU[0002] HEAD https://internal_registry/v2/testimage/blobs/sha256:0330cf831ddc04713ef8803fbc5d8bfadc4064a100297a2b7fb1cab53e573b7f
DEBU[0002] ... already exists
Copying blob 373bdd696471 skipped: already exists
DEBU[0002] HEAD https://internal_registry/v2/testimage/blobs/sha256:4c63d9670de300356f7ab478e97790659bf3137f055c8a0125c769c774aa7354
Copying blob 3e6a35d6262c skipped: already exists
DEBU[0002] Checking /v2/testimage/blobs/sha256:76394860210a7d100adbbef6790fe8ceade4181925eb8f87e5582608e96d2252
Copying blob d43d4ddd072e skipped: already exists
Copying blob 5f70bf18a086 skipped: already exists
Copying blob d738238a10b6 [--------------------------------------] 0.0b / 461.1MiB
Copying blob 311e36338b59 skipped: already exists
DEBU[0004] ... not present
DEBU[0004] exporting filesystem layer "5cb1805a50a7bd1ebd389e559799e83d3ed688da70042d96b184b4d3013d0858" without compression for blob "sha256:8de52563936963f33744cbb33003387cea688063c08cc6a2ecad65899587e182"
DEBU[0004] No compression detected
Copying blob 8de525639369 done
Copying blob 07b17deed702 [--------------------------------------] 8.0b / 12.7MiB
Copying blob 07b17deed702 done
Copying blob d738238a10b6 done
Copying blob 4c63d9670de3 skipped: already exists
Copying blob 82cffe18899f done
Copying blob 6586d341a7e1 skipped: already exists
Copying blob 6c4c71e4ead6 done
Copying blob a6dc5b9b681c done
Copying blob 2184733c418e skipped: already exists
DEBU[0012] exporting opaque data as blob "sha256:7ede2eca7fe25085cac0d8ccab726ed83a572e80a92abf3092545b1b403c4e6f"
DEBU[0012] No compression detected
DEBU[0012] Using original blob without modification
DEBU[0012] Checking /v2/testimage/blobs/sha256:7ede2eca7fe25085cac0d8ccab726ed83a572e80a92abf3092545b1b403c4e6f
DEBU[0012] HEAD https://internal_registry/v2/testimage/blobs/sha256:7ede2eca7fe25085cac0d8ccab726ed83a572e80a92abf3092545b1b403c4e6f
Copying config 7ede2eca7f [--------------------------------------] 8.0b / 10.8KiB
DEBU[0013] ... not present
DEBU[0013] Uploading /v2/testimage/blobs/uploads/
Copying config 7ede2eca7f done
DEBU[0013] PUT https://internal_registry/v2/internal-docker-local/testimage/blobs/uploads/4395e7f8-fac2-41b1-945b-33776a741111?digest=sha256%3A7ede2eca7fe25085cac0d8ccab726ed83
Writing manifest to image destination
DEBU[0013] PUT https://internal_registry/v2/testimage/manifests/latest
Storing signatures
DEBU[0014] Called push.PersistentPostRunE(podman push --log-level=debug --format v2s2 --creds KEY internal_registry/testimage:latest)

This is the inspection:

+ echo '*** skopeo inspect --raw --creds KEY docker://internal_registry/testimage:latest | jq | grep mediaType'
*** skopeo inspect --raw --creds KEY docker://internal_registry/testimage:latest | jq | grep mediaType
+ skopeo inspect --raw --creds KEY docker://internal_registry/testimage:latest
+ jq
+ grep mediaType
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "mediaType": "application/vnd.docker.container.image.v1+json",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
+ echo '*** skopeo inspect --creds KEY docker://internal_registry/testimage:latest | jq .Digest'
*** skopeo inspect --creds KEY docker://internal_registry/testimage:latest | jq .Digest
+ jq .Digest
+ skopeo inspect --creds KEY docker://internal_registry/testimage:latest
"sha256:813f9435164cdd6b4c9f1cb12df6d1ff2f59f9e6ad37d7e30d2cbb5cfcd7bc43"

From what you said, the internal registry could already have the uncompressed layer, which caused podman push not to compress it. However, we have no control over the destination registry; maybe an old version of podman pushed it there. Can the code be changed so that it always compresses layers when pushing?

github-actions[bot] commented 1 year ago

A friendly reminder that this issue had no activity for 30 days.

whoisac commented 1 year ago

Does anyone have any idea?

rhatdan commented 1 year ago

@nalind @vrothberg @mtrmac Could you answer @whoisac's question?

mtrmac commented 1 year ago

The inspect output is filtered so that it does not contain digests, which makes it difficult to tell for sure what is going on here.

In principle, if an uncompressed version exists on the registry, the push is going to use it instead of compressing. The copy code tries fairly hard to avoid copies, and it does not treat uncompressed data specially in that regard.

Currently there is no reasonable way to override that behavior. (An unreasonable way might be to push with encryption, and then decrypt the image, possibly to a different repo, but I didn’t test that.)

As a workaround right now, I think running a local registry (with no previous state or images) in a container, pushing to that registry, and then using skopeo copy to the final destination would work.
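
A sketch of that workaround, assuming a throwaway registry container on port 5000; the registry image, port, and TLS flags here are assumptions, not something tested in this thread:

```shell
# Workaround sketch: push through a fresh local registry so no cached
# uncompressed blob can be reused, then copy to the real destination.
# Registry image, port, and --tls-verify settings are assumptions;
# KEY and internal_registry are placeholders, as in the log above.
podman run -d --name scratch-registry -p 5000:5000 docker.io/library/registry:2
podman push --tls-verify=false localhost/testimage:latest localhost:5000/testimage:latest
skopeo copy --src-tls-verify=false --dest-creds KEY \
  docker://localhost:5000/testimage:latest \
  docker://internal_registry/testimage:latest
podman rm -f scratch-registry
```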

whoisac commented 1 year ago

@mtrmac, is it possible to remove that "uncompressed version" from the registry? How would the uncompressed layer get there in the first place? As I understand it now, "podman push" will compress a layer and then push it if the destination registry doesn't already have that layer cached. Is that correct?

mtrmac commented 1 year ago

is it possible to remove that "uncompressed version" from the registry?

It might be possible ( https://github.com/distribution/distribution/blob/main/docs/spec/api.md#deleting-a-layer ) but I don’t know of any software that directly exposes this.

Maybe delete all images that refer to that blob, and then run the registry's garbage-collection process, if it has one.
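
For a self-hosted distribution/registry deployment, that cleanup might look like the following sketch; DIGEST, TOKEN, and the config path are placeholders, and many hosted registries (Artifactory included) may disable the blob-delete endpoint entirely:

```shell
# Sketch of the layer-delete API linked above, followed by the
# distribution/registry offline garbage collector.
# DIGEST, TOKEN, and the config path are placeholders.
curl -X DELETE -H "Authorization: Bearer TOKEN" \
  "https://internal_registry/v2/testimage/blobs/sha256:DIGEST"
# On the registry host, reclaim the space afterwards:
registry garbage-collect /etc/docker/registry/config.yml
```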

How would the uncompressed layer exist in the first place

An explicit request (e.g. skopeo copy --preserve-digests) or maybe an old bug. I agree it should not usually happen.

rhatdan commented 1 year ago

Since this does not seem to be a buildah issue, I am closing. Feel free to continue the conversation here.