redhat-buildpacks / testing

A project to help us perform e2e tests using Buildpacks
Apache License 2.0

Test: Build the ubi stack image using tool: jam #60

Open cmoulliard opened 2 months ago

cmoulliard commented 2 months ago

Test case

Test building the ubi stack images (build/run) for each stack (java, nodejs, etc.) using jam as the tool.

Status: NOK - see report stack-test-30-aug-2024
Repo: https://github.com/paketo-community/ubi-base-stack
Tool: jam
Container tool: podman
OS: rhel9

Steps to reproduce:
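
A minimal sketch of the steps, following the transcript BarDweller posted below (it assumes a rootless podman socket and the jam 2.9.0 binary sitting two directories above the stack directory):

$ systemctl --user start podman.socket
$ export DOCKER_HOST=unix:///run/user/1000/podman/podman.sock
$ git clone https://github.com/paketo-community/ubi-base-stack
$ cd ubi-base-stack
$ cp images.json stack
$ cd stack
$ ../../jam-linux-amd64 create-stack --config stack.toml --build-output build --run-output run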

cmoulliard commented 2 months ago

See issue: https://github.com/paketo-buildpacks/jam/issues/345

BarDweller commented 2 months ago

Unable to recreate:

+ cat /etc/fedora-release
Fedora release 39 (Thirty Nine)
+ podman --version
podman version 4.9.5
+ export DOCKER_HOST=unix:///run/user/1000/podman/podman.sock
+ DOCKER_HOST=unix:///run/user/1000/podman/podman.sock
+ systemctl --user start podman.socket
+ systemctl --user status podman.socket
● podman.socket - Podman API Socket
     Loaded: loaded (/usr/lib/systemd/user/podman.socket; enabled; preset: disabled)
     Active: active (listening) since Wed 2024-09-04 09:32:13 EDT; 2h 21min ago
   Triggers: ● podman.service
       Docs: man:podman-system-service(1)
     Listen: /run/user/1000/podman/podman.sock (Stream)
     CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/podman.socket

Sep 04 09:32:13 PODMANVM systemd[1033131]: Listening on podman.socket - Podman API Socket.
+ ./jam-linux-amd64 version
jam 2.9.0
+ git clone https://github.com/paketo-community/ubi-base-stack
Cloning into 'ubi-base-stack'...
remote: Enumerating objects: 758, done.
remote: Counting objects: 100% (480/480), done.
remote: Compressing objects: 100% (230/230), done.
remote: Total 758 (delta 324), reused 333 (delta 237), pack-reused 278 (from 1)
Receiving objects: 100% (758/758), 505.21 KiB | 4.11 MiB/s, done.
Resolving deltas: 100% (406/406), done.
+ cd ubi-base-stack
+ cp images.json stack
+ cd stack
+ ../../jam-linux-amd64 create-stack --config stack.toml --build-output build --run-output run
Building io.buildpacks.stacks.ubi8
  Building on linux/amd64
    Building base images
      Build complete for base images
    build: Decorating base image
      Adding CNB_* environment variables
      Adding io.buildpacks.stack.* labels
      Creating cnb user
    run: Decorating base image
      Adding io.buildpacks.stack.* labels
      Creating cnb user
      Updating /etc/os-release
    build: Updating image
    run: Updating image

  Exporting build image to build
  Exporting run image to run

cmoulliard commented 2 months ago

The workaround as documented here: https://github.com/containers/podman/issues/3234 is to do:

sudo setenforce 0
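
Note that setenforce 0 only switches SELinux to permissive mode until the next reboot; it does not touch /etc/selinux/config. A quick before/after check (output assumed):

$ getenforce
Enforcing
$ sudo setenforce 0
$ getenforce
Permissive
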
cmoulliard commented 2 months ago

Question posted to Brian Cook: Are we allowed to do sudo setenforce 0 on the remote RHEL 9 VM? @brianwcook

BarDweller commented 2 months ago

Are the RHEL/CentOS VMs broken in some manner?

I have SELinux enabled on the host I ran the above on, and it worked fine (Fedora Workstation 39):

$ sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      33
$ getenforce
Enforcing

cmoulliard commented 2 months ago

On RHEL9

[cloud-user@rhel-9 stack]$ sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      33

BarDweller commented 2 months ago

Ok, so we both have SELinux configured. What version of podman are you using? (I'm on 4.9.5, as shown above.)

This isn't a jam issue or a Paketo issue; it's an issue with the SELinux configuration for podman on RHEL (and any other affected platforms). It looks like Fedora has it configured correctly, while the system you are using does not. Disabling the entire SELinux subsystem with setenforce 0 is a pretty crude hammer to fix this problem with, and maybe the problem is already fixed in later podman releases?
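
One way to confirm this is an SELinux denial rather than a podman bug would be to look at recent AVC records on the RHEL 9 VM (a sketch, assuming auditd is running; the denial, if present, should name the container process and the mislabeled storage path):

$ sudo ausearch -m AVC -ts recent
$ # without auditd, denials also land in the kernel log:
$ sudo dmesg | grep -i avc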

brianwcook commented 2 months ago

If it works on Fedora, we could make that available. The every-six-months upgrade cycle is not ideal, but after one cycle we could move to the RHEL 10 beta.

Or, if we can fix the SELinux policy, we can apply that to the RHEL 9 images.
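
If the AVC records confirm a policy gap, a targeted local module would be less drastic than setenforce 0. A sketch, assuming the denials captured by ausearch above (the module name podman_local is made up here):

$ sudo ausearch -m AVC -ts recent | audit2allow -M podman_local
$ sudo semodule -i podman_local.pp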

cmoulliard commented 2 months ago

Ok, so we both have SELinux configured. What version of podman are you using? (I'm on 4.9.5, as shown above.)

podman version
Client:       Podman Engine
Version:      4.9.4-rhel
API Version:  4.9.4-rhel
Go Version:   go1.21.11 (Red Hat 1.21.11-1.el9_4)
Built:        Mon Jul  1 06:27:14 2024
OS/Arch:      linux/amd64

cmoulliard commented 2 months ago

and maybe the problem is already fixed in later podman releases?

Question posted to Brian Cook: Are we allowed to do sudo setenforce 0 on the remote RHEL 9 VM?

Giuseppe Scrivano suggests relabeling the containers storage as documented here: https://access.redhat.com/solutions/7021610

WDYT? @brianwcook @BarDweller

cmoulliard commented 2 months ago

I tested the approach as documented instead of using setenforce 0, but still no luck:

/bin/sh: error while loading shared libraries: libtinfo.so.6: cannot change memory protections

[cloud-user@rhel-9 stack]$ sudo semanage fcontext -d /home/cloud-user/.local/share/containers/storage
[cloud-user@rhel-9 stack]$ sudo semanage fcontext -a -e /var/lib/containers/storage /home/cloud-user/.local/share/containers/storage
[cloud-user@rhel-9 stack]$ sudo restorecon -R -v /home/cloud-user/.local/share/containers/storage
Relabeled /home/cloud-user/.local/share/containers/storage from unconfined_u:object_r:data_home_t:s0 to unconfined_u:object_r:var_lib_t:s0
Relabeled /home/cloud-user/.local/share/containers/storage/libpod from unconfined_u:object_r:data_home_t:s0 to unconfined_u:object_r:var_lib_t:s0
Relabeled /home/cloud-user/.local/share/containers/storage/db.sql from unconfined_u:object_r:data_home_t:s0 to unconfined_u:object_r:var_lib_t:s0
Relabeled /home/cloud-user/.local/share/containers/storage/overlay from unconfined_u:object_r:data_home_t:s0 to unconfined_u:object_r:var_lib_t:s0
Relabeled /home/cloud-user/.local/share/containers/storage/overlay/l from unconfined_u:object_r:data_home_t:s0 to unconfined_u:object_r:var_lib_t:s0
Relabeled /home/cloud-user/.local/share/containers/storage/overlay/.has-mount-program from unconfined_u:object_r:data_home_t:s0 to unconfined_u:object_r:var_lib_t:s0
Relabeled /home/cloud-user/.local/share/containers/storage/storage.lock from unconfined_u:object_r:data_home_t:s0 to unconfined_u:object_r:var_lib_t:s0
Relabeled /home/cloud-user/.local/share/containers/storage/userns.lock from unconfined_u:object_r:data_home_t:s0 to unconfined_u:object_r:var_lib_t:s0
Relabeled /home/cloud-user/.local/share/containers/storage/overlay-images from unconfined_u:object_r:data_home_t:s0 to unconfined_u:object_r:var_lib_t:s0
Relabeled /home/cloud-user/.local/share/containers/storage/overlay-images/images.lock from unconfined_u:object_r:data_home_t:s0 to unconfined_u:object_r:var_lib_t:s0
Relabeled /home/cloud-user/.local/share/containers/storage/overlay-containers from unconfined_u:object_r:data_home_t:s0 to unconfined_u:object_r:var_lib_t:s0
Relabeled /home/cloud-user/.local/share/containers/storage/overlay-containers/containers.lock from unconfined_u:object_r:data_home_t:s0 to unconfined_u:object_r:var_lib_t:s0
Relabeled /home/cloud-user/.local/share/containers/storage/defaultNetworkBackend from unconfined_u:object_r:data_home_t:s0 to unconfined_u:object_r:var_lib_t:s0
Relabeled /home/cloud-user/.local/share/containers/storage/networks from unconfined_u:object_r:data_home_t:s0 to unconfined_u:object_r:var_lib_t:s0
Relabeled /home/cloud-user/.local/share/containers/storage/networks/netavark.lock from unconfined_u:object_r:data_home_t:s0 to unconfined_u:object_r:var_lib_t:s0
Relabeled /home/cloud-user/.local/share/containers/storage/overlay-layers from unconfined_u:object_r:data_home_t:s0 to unconfined_u:object_r:var_lib_t:s0
Relabeled /home/cloud-user/.local/share/containers/storage/overlay-layers/layers.lock from unconfined_u:object_r:data_home_t:s0 to unconfined_u:object_r:var_lib_t:s0

sudo systemctl restart podman

podman build -f build.Dockerfile .
STEP 1/5: FROM registry.access.redhat.com/ubi8/ubi-minimal:latest
Trying to pull registry.access.redhat.com/ubi8/ubi-minimal:latest...
Getting image source signatures
Checking if image destination supports signatures
Copying blob 2384c7c17092 done   |
Copying config 12c4198317 done   |
Writing manifest to image destination
Storing signatures
STEP 2/5: USER root
--> 296fba9aee8e
STEP 3/5: WORKDIR /etc/buildpacks
--> 1b5bceb8b1d7
STEP 4/5: COPY images.json images.json
--> 1885a8b4d84c
STEP 5/5: RUN chmod 644 images.json
/bin/sh: error while loading shared libraries: libtinfo.so.6: cannot change memory protections
Error: building at STEP "RUN chmod 644 images.json": while running runtime: exit status 127

doc link: https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-selinux_contexts_labeling_files-persistent_changes_semanage_fcontext#sect-Security-Enhanced_Linux-SELinux_Contexts_Labeling_Files-Persistent_Changes_semanage_fcontext
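
For what it's worth, whether the relabel took effect can be double-checked with ls -Z on the storage path from the transcript above (expected type var_lib_t after restorecon; output assumed):

$ ls -dZ /home/cloud-user/.local/share/containers/storage
unconfined_u:object_r:var_lib_t:s0 /home/cloud-user/.local/share/containers/storage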