Closed: what-next-github closed this issue 2 years ago
@siretart ping, any idea when Ubuntu LTS will get v3.4.7? There were quite a few CVEs that were patched between 3.4.4 and 3.4.7.
@what-next-github I'd recommend filing a bug on Ubuntu's issue tracker, as the Ubuntu package update issue can't be solved here.
Closing.
Or better yet 4.0.3
that likely won't happen in LTS, but may be doable in ubuntu-backports.
The release cycle for Kinetic (Ubuntu 22.10) has just opened, and most of the packages should get synced automatically soon. As soon as that is done, I can merge the 3.4.7 package from Debian bookworm. I expect that to happen maybe next week.
Would you advise to run 4.0.3 without netavark? -- I'm still looking at what it would take to package that for Debian before uploading it to 'unstable' (which is a precondition for uploading it to Ubuntu). The thing is, I am not really familiar with packaging Rust software, and I encountered a number of missing crates that need to be packaged first. Other than that, the podman 4.0.3 package is already in debian/experimental, just without netavark. I guess Ubuntu users could just install it from there if they really wanted to?
@siretart v4.0 is functional with the existing cni-plugins. So, the absence of netavark shouldn't block shipping v4 on debian/ubuntu.
/cc @Luap99 @flouthoc @mheon @baude
I guess Ubuntu users could just install it from there if they really wanted to?
Does that usually work? No cross-distro / cross-version issues?
For statically linked applications such as golang programs, it usually works. You might run into trouble with dependent packages, such as older versions of the containers/storage packages with outdated versions of storage.conf and similar, but experimental and Ubuntu 20.04 are pretty similar, so I'd expect that to work.
Is this issue resolved? I tried installing podman on Ubuntu 22.04 and still see the old version 3.4.4.
@ivishalgandhi let's please have this conversation in the distribution's bugtracker at https://bugs.launchpad.net/ubuntu/+source/libpod/+bug/1971034
Can I ask the recommended way to get 4.x.x on Ubuntu 22.04, please? Looking at https://podman.io/getting-started/installation, the only thing it mentions is that the Kubic packages have been discontinued, not how to get the latest.
Is it just a case of building from source? And if so, are you saying that's unsupported right now? Sorry if I've missed anything in the docs.
Maybe using homebrew is a good option? I have the same problem and I will try it.
@liebig Sadly that's not an option; the Linux version of podman on Homebrew only installs podman-remote, it doesn't install the full podman: https://github.com/orgs/Homebrew/discussions/3091
The only way I've found to get podman 4.x on Ubuntu is to build it from source.
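For anyone going that route, the source build roughly looks like this, assuming the upstream build dependencies are installed first (the package names below are approximate, so check the upstream building docs for the authoritative list; note that a runtime such as runc or crun, plus conmon, still has to come from the distro packages):
sudo apt-get install -y git make gcc pkg-config golang-go go-md2man \
  libseccomp-dev libgpgme-dev libbtrfs-dev libsystemd-dev
git clone https://github.com/containers/podman
cd podman
git checkout v4.2.0   # example tag; pick the release you want to build
make BUILDTAGS="selinux seccomp"
sudo make install PREFIX=/usr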
I'm maintaining the packaging branch for Debian on salsa: https://salsa.debian.org/debian/libpod/-/tree/debian/experimental
You should be able to install the .deb files from Debian experimental in Ubuntu. Let me know how that goes.
@nozzlegear Thanks for the info about homebrew. You saved me some work time. @siretart Where can I find the deb file for Podman 4.1.0 in your repository?
Okay I have now created a fork and have the Linux AMD64 binaries (releases and nightly builds) built here automatically:
Maybe I can help someone with this.
@liebig the homebrew attempts ended up somewhere here: (for 3.4.2)
It is the same podman version that we have for Ubuntu 20.04, as well.
https://build.opensuse.org/project/show/devel:kubic:libcontainers:stable
@liebig
Could you add installation instructions to your fork?
I too am struggling to use podman on Ubuntu, as we also need to use Docker Compose v2 (https://github.com/containers/podman/issues/11780#issuecomment-1178118714), but that requires podman 4.1+.
For those investigating short-term workarounds, I tried a daily Ubuntu 22.10 install and installed podman there, but as of writing it is still on Podman 3.4.4 as well.
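For context on the Docker Compose v2 angle: Compose v2 talks to podman through its Docker-compatible API socket, which is the part that needs the newer podman. A minimal sketch, assuming the systemd user unit shipped with the podman package:
systemctl --user enable --now podman.socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker compose up -d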
@johnthagen What exactly do you want to know? You can download either a built release or the daily build from my fork, unpack it, and run it. The folder structure of the packed release corresponds to the Linux folder structure and should be unpacked into the same folder (/usr). I can't say much more about this.
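A sketch of what that unpack step could look like; the archive name is hypothetical, so use the actual asset from the fork's releases page and check its top-level layout before extracting:
tar -tzf podman-linux-amd64.tar.gz | head   # inspect the layout first
sudo tar -xzf podman-linux-amd64.tar.gz -C /usr
podman --version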
@liebig Even having that paragraph in the README would be helpful, thanks!
I was sort of expecting .deb files, and when I saw it was a folder structure, I wasn't totally sure what the recommended instructions were.
@johnthagen All right, I will add that to the README. Thanks for your feedback.
@siretart
You should be able to install the .deb files from Debian experimental in Ubuntu. Let me know how that goes.
I ran the following:
wget http://ftp.us.debian.org/debian/pool/main/libp/libpod/podman_4.2.0+ds1-3_amd64.deb
sudo dpkg -i podman_4.2.0+ds1-3_amd64.deb
Preparing to unpack podman_4.2.0+ds1-3_amd64.deb ...
Unpacking podman (4.2.0+ds1-3) over (3.4.4+ds1-1ubuntu1) ...
Setting up podman (4.2.0+ds1-3) ...
Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142.
Processing triggers for man-db (2.10.2-1) ...
I built and ran a container from macOS with podman-remote, and it works fine so far.
I did run into
ERRO[0000] User-selected graph driver "vfs" overwritten by graph driver "overlay" from database - delete libpod local files to resolve. May prevent use of images created by other tools
so I simply recreated my containers. If someone is upgrading from 3.4.4, you may want to export your containers or something.
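If you want to keep anything before wiping the storage, something along these lines works; the image and container names here are placeholders, and podman export only captures a container's filesystem, not its configuration:
podman save -o myimage.tar docker.io/library/myimage:latest   # placeholder image
podman export -o mycontainer.tar mycontainer                  # placeholder container
# ... upgrade / reset storage ...
podman load -i myimage.tar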
Any updates about how we can continuously install updates on Ubuntu? Thanks in advance.
@siretart thank you, the package is working fine. If you upgrade from 3 to 4.2, you usually run into an error. I could fix it by removing my old container storage. In the same step, I also enabled the more performant native overlay driver. By default, vfs was installed.
podman system reset -f
sudo rm -rf ~/.local/share/containers/
~/.config/containers/storage.conf
# This file is the configuration file for all tools
# that use the containers/storage library. The storage.conf file
# overrides all other storage.conf files. Container engines using the
# container/storage library do not inherit fields from other storage.conf
# files.
#
# Note: The storage.conf file overrides other storage.conf files based on this precedence:
# /usr/containers/storage.conf
# /etc/containers/storage.conf
# $HOME/.config/containers/storage.conf
# $XDG_CONFIG_HOME/containers/storage.conf (If XDG_CONFIG_HOME is set)
# See man 5 containers-storage.conf for more information
# The "container storage" table contains all of the server options.
[storage]
# Default Storage Driver, Must be set for proper operation.
driver = "overlay"
# Temporary storage location
runroot = "/run/user/1000/containers"
# Primary Read/Write location of container storage
# When changing the graphroot location on an SELINUX system, you must
# ensure the labeling matches the default locations labels with the
# following commands:
# semanage fcontext -a -e /var/lib/containers/storage /NEWSTORAGEPATH
# restorecon -R -v /NEWSTORAGEPATH
#
# DON'T change; see https://github.com/containers/buildah/issues/4196
graphroot = "$HOME/.local/share/containers/storage"
# Storage path for rootless users
#
# rootless_storage_path = "$HOME/.local/share/containers/storage"
[storage.options]
# Storage options to be passed to underlying storage drivers
# AdditionalImageStores is used to pass paths to additional Read/Only image stores
# Must be comma separated list.
#additionalimagestores = [
# "/var/lib/containers/share"
#]
# Allows specification of how storage is populated when pulling images. This
# option can speed the pulling process of images compressed with format
# zstd:chunked. Containers/storage looks for files within images that are being
# pulled from a container registry that were previously pulled to the host. It
# can copy or create a hard link to the existing file when it finds them,
# eliminating the need to pull them from the container registry. These options
# can deduplicate pulling of content, disk storage of content and can allow the
# kernel to use less memory when running containers.
# containers/storage supports the following keys:
# * enable_partial_images="true" | "false"
# Tells containers/storage to look for files previously pulled in storage
# rather than always pulling them from the container registry.
# * use_hard_links = "false" | "true"
# Tells containers/storage to use hard links rather than creating new files in
# the image, if an identical file already existed in storage.
# * ostree_repos = ""
# Tells containers/storage where an ostree repository exists that might have
# previously pulled content which can be used when attempting to avoid
# pulling content from the container registry
#
# Enable "enable_partial_images". See https://www.redhat.com/sysadmin/faster-container-image-pulls
pull_options = {enable_partial_images = "true", use_hard_links = "false", ostree_repos=""}
# Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside of
# a container, to the UIDs/GIDs as they should appear outside of the container,
# and the length of the range of UIDs/GIDs. Additional mapped sets can be
# listed and will be needed by libraries, but there are limits to the number of
# mappings which the kernel will allow when you later attempt to run a
# container.
#
# remap-uids = 0:1668442479:65536
# remap-gids = 0:1668442479:65536
# Remap-User/Group is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid or /etc/subgid file. Mappings are set up starting
# with an in-container ID of 0 and then a host-level ID taken from the lowest
# range that matches the specified name, and using the length of that range.
# Additional ranges are then assigned, using the ranges which specify the
# lowest host-level IDs first, to the lowest not-yet-mapped in-container ID,
# until all of the entries have been used for maps.
#
# remap-user = "containers"
# remap-group = "containers"
# Root-auto-userns-user is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid and /etc/subgid file. These ranges will be partitioned
# to containers configured to create automatically a user namespace. Containers
# configured to automatically create a user namespace can still overlap with containers
# having an explicit mapping set.
# This setting is ignored when running as rootless.
# root-auto-userns-user = "storage"
#
# Auto-userns-min-size is the minimum size for a user namespace created automatically.
# auto-userns-min-size=1024
#
# Auto-userns-max-size is the maximum size for a user namespace created automatically.
# auto-userns-max-size=65536
[storage.options.overlay]
# ignore_chown_errors can be set to allow a non privileged user running with
# a single UID within a user namespace to run containers. The user can pull
# and use any image even those with multiple uids. Note multiple UIDs will be
# squashed down to the default uid in the container. These images will have no
# separation between the users in the container. Only supported for the overlay
# and vfs drivers.
#ignore_chown_errors = "false"
# Inodes is used to set a maximum inodes of the container image.
# inodes = ""
# Path to a helper program to use for mounting the file system instead of mounting it
# directly.
mount_program = ""
# mountopt specifies comma separated list of extra mount options
# TODO: metacopy is not supported on all Kernel versions. Reevaluate this option next time.
mountopt = "nodev"
# Set to skip a PRIVATE bind mount on the storage home directory.
# skip_mount_home = "false"
# Size is used to set a maximum size of the container image.
# size = ""
# ForceMask specifies the permissions mask that is used for new files and
# directories.
#
# The values "shared" and "private" are accepted.
# Octal permission masks are also accepted.
#
# "": No value specified.
# All files/directories, get set with the permissions identified within the
# image.
# "private": it is equivalent to 0700.
# All files/directories get set with 0700 permissions. The owner has rwx
# access to the files. No other users on the system can access the files.
# This setting could be used with networked based homedirs.
# "shared": it is equivalent to 0755.
# The owner has rwx access to the files and everyone else can read, access
# and execute them. This setting is useful for sharing containers storage
# with other users. For instance have a storage owned by root but shared
# to rootless users as an additional store.
# NOTE: All files within the image are made readable and executable by any
# user on the system. Even /etc/shadow within your image is now readable by
# any user.
#
# OCTAL: Users can experiment with other OCTAL Permissions.
#
# Note: The force_mask Flag is an experimental feature, it could change in the
# future. When "force_mask" is set the original permission mask is stored in
# the "user.containers.override_stat" xattr and the "mount_program" option must
# be specified. Mount programs like "/usr/bin/fuse-overlayfs" present the
# extended attribute permissions to processes within containers rather than the
# "force_mask" permissions.
#
# force_mask = ""
[storage.options.thinpool]
# Storage Options for thinpool
# autoextend_percent determines the amount by which pool needs to be
# grown. This is specified in terms of % of pool size. So a value of 20 means
# that when threshold is hit, pool will be grown by 20% of existing
# pool size.
# autoextend_percent = "20"
# autoextend_threshold determines the pool extension threshold in terms
# of percentage of pool size. For example, if threshold is 60, that means when
# pool is 60% full, threshold has been hit.
# autoextend_threshold = "80"
# basesize specifies the size to use when creating the base device, which
# limits the size of images and containers.
# basesize = "10G"
# blocksize specifies a custom blocksize to use for the thin pool.
# blocksize="64k"
# directlvm_device specifies a custom block storage device to use for the
# thin pool. Required if you setup devicemapper.
# directlvm_device = ""
# directlvm_device_force wipes device even if device already has a filesystem.
# directlvm_device_force = "True"
# fs specifies the filesystem type to use for the base device.
# fs="xfs"
# log_level sets the log level of devicemapper.
# 0: LogLevelSuppress 0 (Default)
# 2: LogLevelFatal
# 3: LogLevelErr
# 4: LogLevelWarn
# 5: LogLevelNotice
# 6: LogLevelInfo
# 7: LogLevelDebug
# log_level = "7"
# min_free_space specifies the minimum free space percentage in a thin pool required for
# new device creation to succeed. Valid values are from 0% - 99%.
# A value of 0% disables the check.
# min_free_space = "10%"
# mkfsarg specifies extra mkfs arguments to be used when creating the base
# device.
# mkfsarg = ""
# metadata_size is used to set the `pvcreate --metadatasize` options when
# creating thin devices. Default is 128k
# metadata_size = ""
# Size is used to set a maximum size of the container image.
# size = ""
# use_deferred_removal marks devicemapper block device for deferred removal.
# If the thinpool is in use when the driver attempts to remove it, the driver
# tells the kernel to remove it as soon as possible. Note this does not free
# up the disk space, use deferred deletion to fully remove the thinpool.
# use_deferred_removal = "True"
# use_deferred_deletion marks thinpool device for deferred deletion.
# If the device is busy when the driver attempts to delete it, the driver
# will attempt to delete device every 30 seconds until successful.
# If the program using the driver exits, the driver will continue attempting
# to cleanup the next time the driver is used. Deferred deletion permanently
# deletes the device and all data stored in device will be lost.
# use_deferred_deletion = "True"
# xfs_nospace_max_retries specifies the maximum number of retries XFS should
# attempt to complete IO when ENOSPC (no space) error is returned by
# underlying storage device.
# xfs_nospace_max_retries = "0"
After that, podman info should show something like:
graphDriverName: overlay
graphOptions:
overlay.mountopt: nodev
graphRoot: /home/starptech/.local/share/containers/storage
graphRootAllocated: 482134495232
graphRootUsed: 204488548352
graphStatus:
Backing Filesystem: extfs
Native Overlay Diff: "true"
Supports d_type: "true"
Using metacopy: "false"
imageCopyTmpDir: /var/tmp
The overlay driver should be available for the older versions too, but might require installing fuse-overlayfs manually (for rootless, depending on kernel version).
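A sketch of what that looks like on setups without native rootless overlay support, assuming the Ubuntu fuse-overlayfs package:
sudo apt-get install -y fuse-overlayfs
# then, in ~/.config/containers/storage.conf:
#   [storage.options.overlay]
#   mount_program = "/usr/bin/fuse-overlayfs"
podman system reset -f   # changing the storage driver requires resetting local storage
podman info --format '{{.Store.GraphDriverName}}'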
I'm kind of unfamiliar with make and Linux packaging.
I successfully built the Podman source code inside a debian-bullseye container (make BUILDTAGS="selinux seccomp").
I would like to take the build assets/binaries and either:
I know that there are other configurations done by running make install PREFIX=/usr, so it requires additional work beyond copying the podman binary (man pages, systemd units, other ...).
Thanks!
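One way to see everything make install would put where, without touching the live system, is to stage it into a scratch directory; a sketch, assuming podman's Makefile honours the usual DESTDIR convention:
make install PREFIX=/usr DESTDIR=$PWD/staging
find staging -type f | sort   # binaries, man pages, systemd units, configs
# the staged tree (staging/usr/...) can then be copied to the target host
# or handed to a packaging tool of your choice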
I have a PR open to update the podman install docs to get packages from the Kubic unstable repo for non-prod environments. https://github.com/containers/podman.io/pull/552/files
@siretart is the debian experimental repo suggestion something that can be added to the podman installation docs?
EDIT: is that also something you would suggest to prod users? I suspect not, but I'd like to confirm with you :smile:
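For anyone who wants to try that before the docs PR lands, the OBS Kubic repos generally follow this pattern on Ubuntu (the keyring file name is arbitrary, and the exact repo path and key handling should be checked against whatever the linked PR ends up documenting):
. /etc/os-release
curl -fsSL "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/unstable/xUbuntu_${VERSION_ID}/Release.key" \
  | gpg --dearmor | sudo tee /usr/share/keyrings/kubic-libcontainers-unstable.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/kubic-libcontainers-unstable.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/unstable/xUbuntu_${VERSION_ID}/ /" \
  | sudo tee /etc/apt/sources.list.d/kubic-libcontainers-unstable.list
sudo apt-get update && sudo apt-get install podman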
I'm not sure.
I'm really close to uploading podman 4.2 to unstable, and then that won't be necessary anymore. I'd rather suggest helping with testing/upgrading to expedite that task.
@siretart thanks for your work on this, is there any way we could help?
Now 22.10 (kinetic) is out, with podman 3.4.4 :disappointed:
What is strange is that Debian testing (as well as unstable) had 3.4.7. Were the Ubuntu packages not even synced from there?
Debian experimental does have 4.2.0 - that's also slightly out of date now though.
And, it looks like podman 3.4.4 doesn't work on 22.10. It used to work on 22.04 LTS.
$ podman run -it --rm haskell:slim
Trying to pull docker.io/library/haskell:slim...
Getting image source signatures
Copying blob 4500a762c546 done
Copying blob 5f8892229b17 done
Copying blob fbaa2db0a9a5 done
Copying blob 06176de7775b done
Copying blob 470fbde62642 done
Copying config a8cb79a4af done
Writing manifest to image destination
Storing signatures
Error: OCI runtime error: runc create failed: unable to start container process: can't get final child's PID from pipe: EOF
Related Ubuntu bug
@vp2177 if you're talking about the official Ubuntu package, issues are best reported at Ubuntu's official bug tracker. Not something podman upstream can handle.
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind feature
Description
Ubuntu 22.04's apt repository has podman version 3.4.4 when installing via apt. This is concerning because versions 3.4.5-7 all have security updates. Can we get the latest version 4.0.3, or at least version 3.4.7, in the apt repository?
Steps to reproduce the issue:
Install Ubuntu 22.04 on an x86_64 CPU.
apt install podman
podman --version
Describe the results you received:
Describe the results you expected:
or