containers / buildah

A tool that facilitates building OCI images.
https://buildah.io
Apache License 2.0

buildah bud within container not storing built image in local image registry #2272

Closed artbegolli closed 3 years ago

artbegolli commented 4 years ago

Description

I'm attempting to conduct a simple buildah build from within a container.

Running buildah bud appears to execute successfully, but once the build is complete neither the image nor any of its individual layers are stored.

Pulling an image from a remote registry works as intended, with the pulled image being stored.

Steps to reproduce the issue:

  1. docker pull abegolli/ocib-buildah:latest (this pulls a simple example from https://github.com/artbegolli/buildah-noroot)
  2. docker run -it --security-opt seccomp:unconfined abegolli/ocib-buildah:latest
  3. buildah bud --storage-driver vfs -t test-image:0.1.0 .
  4. buildah images

Describe the results you received:

No built images are present in the local image store.

Describe the results you expected:

The successfully built image to be stored in the local image store.

Output of rpm -q buildah or apt list buildah:

buildah-1.14.0-2.fc31.x86_64

Output of buildah version:

Version:         1.14.0
Go Version:      go1.13.6
Image Spec:      1.0.1-dev
Runtime Spec:    1.0.1-dev
CNI Spec:        0.4.0
libcni Version:  
image Version:   5.2.0
Git Commit:      
Built:           Thu Jan  1 00:00:00 1970
OS/Arch:         linux/amd64

Output of cat /etc/release:

Fedora release 31 (Thirty One)
NAME=Fedora
VERSION="31 (Container Image)"
ID=fedora
VERSION_ID=31
VERSION_CODENAME=""
PLATFORM_ID="platform:f31"
PRETTY_NAME="Fedora 31 (Container Image)"
ANSI_COLOR="0;34"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:31"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f31/system-administrators-guide/"
SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=31
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=31
PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
VARIANT="Container Image"
VARIANT_ID=container
Fedora release 31 (Thirty One)
Fedora release 31 (Thirty One)

Output of uname -a:

Linux d6acceca4386 4.19.76-linuxkit #1 SMP Thu Oct 17 19:31:58 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Output of cat /etc/containers/storage.conf:

# This file is the configuration file for all tools
# that use the containers/storage library.
# See man 5 containers-storage.conf for more information
# The "container storage" table contains all of the server options.
[storage]

# Default Storage Driver
driver = "overlay"

# Temporary storage location
runroot = "/var/run/containers/storage"

# Primary Read/Write location of container storage
graphroot = "/var/lib/containers/storage"

[storage.options]
# Storage options to be passed to underlying storage drivers

# AdditionalImageStores is used to pass paths to additional Read/Only image stores
# Must be comma separated list.
additionalimagestores = [
"/var/lib/shared",
]

# Size is used to set a maximum size of the container image.  Only supported by
# certain container storage drivers.
size = ""

# Path to a helper program to use for mounting the file system instead of mounting it
# directly.
mount_program = "/usr/bin/fuse-overlayfs"

# OverrideKernelCheck tells the driver to ignore kernel checks based on kernel version
override_kernel_check = "true"

# mountopt specifies comma separated list of extra mount options
mountopt = "nodev,metacopy=on"

# Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside of
# a container, to UIDs/GIDs as they should appear outside of the container, and
# the length of the range of UIDs/GIDs.  Additional mapped sets can be listed
# and will be heeded by libraries, but there are limits to the number of
# mappings which the kernel will allow when you later attempt to run a
# container.
#
# remap-uids = 0:1668442479:65536
# remap-gids = 0:1668442479:65536

# Remap-User/Group is a name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid or /etc/subgid file.  Mappings are set up starting
# with an in-container ID of 0 and then a host-level ID taken from the lowest
# range that matches the specified name, and using the length of that range.
# Additional ranges are then assigned, using the ranges which specify the
# lowest host-level IDs first, to the lowest not-yet-mapped container-level ID,
# until all of the entries have been used for maps.
#
# remap-user = "storage"
# remap-group = "storage"

[storage.options.thinpool]
# Storage Options for thinpool

# autoextend_percent determines the amount by which pool needs to be
# grown. This is specified in terms of % of pool size. So a value of 20 means
# that when threshold is hit, pool will be grown by 20% of existing
# pool size.
# autoextend_percent = "20"

# autoextend_threshold determines the pool extension threshold in terms
# of percentage of pool size. For example, if threshold is 60, that means when
# pool is 60% full, threshold has been hit.
# autoextend_threshold = "80"

# basesize specifies the size to use when creating the base device, which
# limits the size of images and containers.
# basesize = "10G"

# blocksize specifies a custom blocksize to use for the thin pool.
# blocksize="64k"

# directlvm_device specifies a custom block storage device to use for the
# thin pool. Required if you setup devicemapper.
# directlvm_device = ""

# directlvm_device_force wipes device even if device already has a filesystem.
# directlvm_device_force = "True"

# fs specifies the filesystem type to use for the base device.
# fs="xfs"

# log_level sets the log level of devicemapper.
# 0: LogLevelSuppress 0 (Default)
# 2: LogLevelFatal
# 3: LogLevelErr
# 4: LogLevelWarn
# 5: LogLevelNotice
# 6: LogLevelInfo
# 7: LogLevelDebug
# log_level = "7"

# min_free_space specifies the min free space percent in a thin pool required for
# new device creation to succeed. Valid values are from 0% - 99%.
# Value 0% disables
# min_free_space = "10%"

# mkfsarg specifies extra mkfs arguments to be used when creating the base
# device.
# mkfsarg = ""

# use_deferred_removal marks devicemapper block device for deferred removal.
# If the thinpool is in use when the driver attempts to remove it, the driver 
# tells the kernel to remove it as soon as possible. Note this does not free
# up the disk space, use deferred deletion to fully remove the thinpool.
# use_deferred_removal = "True"

# use_deferred_deletion marks thinpool device for deferred deletion.
# If the device is busy when the driver attempts to delete it, the driver
# will attempt to delete device every 30 seconds until successful.
# If the program using the driver exits, the driver will continue attempting
# to cleanup the next time the driver is used. Deferred deletion permanently
# deletes the device and all data stored in device will be lost.
# use_deferred_deletion = "True"

# xfs_nospace_max_retries specifies the maximum number of retries XFS should
# attempt to complete IO when ENOSPC (no space) error is returned by
# underlying storage device.
# xfs_nospace_max_retries = "0"

# If specified, use OSTree to deduplicate files with the overlay backend
ostree_repo = ""

# Set to skip a PRIVATE bind mount on the storage home directory.  Only supported by
# certain container storage drivers
skip_mount_home = "false"

Example terminal output from conducting a build:

[screenshot: terminal output of the build, 2020-04-03 17:57:45]

Terminal output from a pull that stores the image successfully:

[screenshot: terminal output of the pull, 2020-04-03 17:59:59]

TomSweeneyRedHat commented 4 years ago

@artbegolli thanks for the issue report. A couple quick questions. Can you include the Dockerfile that you used? I notice the systemd warnings. Did you make changes to the cgroup_manager on this system?

artbegolli commented 4 years ago

Hey @TomSweeneyRedHat, thanks for the reply. The base image I used in this example is simply the quay.io/buildah/stable base image with one of our tools installed on top of it.

But for reference - here's the same issue running on quay.io/buildah/stable:latest:

[screenshot: the same failure on quay.io/buildah/stable:latest, 2020-04-03 22:37:13]

I'm totally unsure about the cgroups warning - it appears even when running the standard quay.io/buildah/stable:latest image. Could this be caused by my Docker runtime? AFAIK I've made no changes to any cgroup_manager.

rhatdan commented 4 years ago

@TomSweeneyRedHat Now that we have the containers.conf patch, we can set this in the buildah image. We can change the default to always use cgroup-manager=cgroupfs.
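
For reference, a minimal sketch of how that setting could look (assuming it lands in /etc/containers/containers.conf inside the image; section and key per the containers.conf man page):

# /etc/containers/containers.conf (sketch)
[engine]
# systemd is usually not running inside a build container, so use
# direct cgroupfs management instead.
cgroup_manager = "cgroupfs"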

artbegolli commented 4 years ago

Any additional thoughts on this @TomSweeneyRedHat?

glinkaf commented 4 years ago

Having the same issue with buildah-stable:v1.14.8 @ OpenShift 4.4.3. :(

However, without any warnings at all. There is just no image after the buildah bud build.

rhatdan commented 4 years ago

@glinkaf @artbegolli Are you still seeing this issue?

Fodoj commented 4 years ago

I have the same issue with the latest buildah stable image: the image is built, but it's not in the list shown by buildah images.

Fodoj commented 4 years ago

I am also using the vfs storage driver, because I am on K3s with containerd and have no clue how to add the fuse device to each container there; the containerd documentation is not helping.

buildah version 1.15.0 (image-spec 1.0.1-dev, runtime-spec 1.0.2-dev)

Fodoj commented 4 years ago

Seems like the problem is very easy to resolve - one needs to add --storage-driver vfs to the images command (and all other commands) as well, otherwise they read the default store rather than the vfs store the image was built into. Or, better, set an env var for it:

export STORAGE_DRIVER=vfs
buildah bud .
buildah images

@artbegolli does it work for you if you add the storage driver?
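
For a dedicated builder image, that default can also be baked in at build time; a minimal sketch (hypothetical Dockerfile on top of quay.io/buildah/stable):

FROM quay.io/buildah/stable:latest
# Make every buildah invocation in this container default to the vfs
# storage driver, so bud, images, etc. all read the same image store.
ENV STORAGE_DRIVER=vfs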

TomSweeneyRedHat commented 4 years ago

@Fodoj and @artbegolli sorry, a gazillion things at once and I've shuffled this off the radar. @Fodoj are you thinking we need to adjust the build process for the stable image or is the added option suitable? I'm a little hesitant to force the storage driver to one instance, but if it breaks out of the box otherwise.... @rhatdan thoughts?

Fodoj commented 4 years ago

After doing some research, it seems that for "building in a container", especially "building in a container running on Kubernetes", the vfs driver is way easier to use - fuse is in many cases just too much effort to enable (see my K3s + containerd example). Or at least that's what many people do - fall back to vfs because fuse requires some host-level changes.
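
For comparison, under plain Docker (as in the original report) the host-level change is typically a single flag; a sketch, assuming the stock quay.io/buildah/stable image:

# Expose the host's fuse device to the container so fuse-overlayfs
# can mount overlay filesystems instead of falling back to vfs.
docker run -it --device /dev/fuse quay.io/buildah/stable:latest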

artbegolli commented 4 years ago

> After doing some research, it seems that for "building in a container", especially "building in a container running on Kubernetes", the vfs driver is way easier to use - fuse is in many cases just too much effort to enable (see my K3s + containerd example). Or at least that's what many people do - fall back to vfs because fuse requires some host-level changes.

This was my experience as well - although the performance implications of vfs are a bit discouraging.

@TomSweeneyRedHat It seems as though, in order to use buildah to build images in a container at the moment, you need to dig through a number of issues (or have in-depth knowledge of fuse and Linux filesystems). I reckon a quick win is just documenting the process somewhere.

rhatdan commented 3 years ago

This is not a bug in buildah. We plan on writing a series of blogs on running podman/buildah in a container, including a section on podman/buildah inside of Kubernetes.