Looks like Tekton is taking away some syscalls required for podman to run within a container.
Thank you @rhatdan. Any idea how I can fix the issue?
The script I'm using to build Docker images:
```bash
#!/bin/bash
# uncomment to debug the script
#set -x
# copy the script below into your app code repo (e.g. ./scripts/build_image.sh) and 'source' it from your pipeline job
# source ./scripts/build_image.sh
# alternatively, you can source it from online script:
# source <(curl -sSL "https://raw.githubusercontent.com/open-toolchain/commons/master/scripts/build_image.sh")
# ------------------
# source: https://raw.githubusercontent.com/open-toolchain/commons/master/scripts/build_image.sh
# This script builds a Docker image in the IBM Container Service private image registry and copies information into
# a build.properties file so it can be reused later by other scripts (e.g. image url, chart name, ...)
echo "REGISTRY_URL=${REGISTRY_URL}"
echo "REGISTRY_NAMESPACE=${REGISTRY_NAMESPACE}"
echo "IMAGE_NAME=${IMAGE_NAME}"
echo "BUILD_NUMBER=${BUILD_NUMBER}"
echo "ARCHIVE_DIR=${ARCHIVE_DIR}"
echo "GIT_BRANCH=${GIT_BRANCH}"
echo "GIT_COMMIT=${GIT_COMMIT}"
echo "DOCKER_ROOT=${DOCKER_ROOT}"
echo "DOCKER_FILE=${DOCKER_FILE}"
# View build properties
if [ -f build.properties ]; then
echo "build.properties:"
cat build.properties
else
echo "build.properties : not found"
fi
# also run 'env' command to find all available env variables
# or learn more about the available environment variables at:
# https://console.bluemix.net/docs/services/ContinuousDelivery/pipeline_deploy_var.html#deliverypipeline_environment
# To review or change build options use:
# bx cr build --help
echo -e "Existing images in registry"
ibmcloud cr images
repository=us.icr.io/cicd-test/hello-containers-tekton
#ibmcloud cr images | grep $repository | awk -F " +" '{print $2}' | sort --version-sort
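# Compute the next sequential tag: take the repository's existing tags matching
# v<number>, version-sort them, strip the leading 'v' from the highest one,
# add 1, and re-prefix 'v' (e.g. an existing v7 yields v8).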
tag=$(nver=$(($(ibmcloud cr images | grep $repository | awk -F " +" '{print $2}' | grep '^v[0-9][0-9]*' | sort --version-sort | tail -n 1 | cut -c2-)+1)); echo "v$nver")
# Minting image tag using format: BRANCH-BUILD_NUMBER-COMMIT_ID-TIMESTAMP
# e.g. master-3-50da6912-20181123114435
TIMESTAMP=$( date -u "+%Y%m%d%H%M%S")
IMAGE_TAG=${tag}
if [ ! -z "${GIT_COMMIT}" ]; then
GIT_COMMIT_SHORT=$( echo ${GIT_COMMIT} | head -c 8 )
IMAGE_TAG=${GIT_COMMIT_SHORT}-${IMAGE_TAG}
fi
IMAGE_TAG=${BUILD_NUMBER}-${IMAGE_TAG}
if [ ! -z "${GIT_BRANCH}" ]; then IMAGE_TAG=${GIT_BRANCH}-${IMAGE_TAG} ; fi
echo "=========================================================="
echo -e "BUILDING CONTAINER IMAGE: ${IMAGE_NAME}:${IMAGE_TAG}"
if [ -z "${DOCKER_ROOT}" ]; then DOCKER_ROOT=. ; fi
if [ -z "${DOCKER_FILE}" ]; then DOCKER_FILE=${DOCKER_ROOT}/Dockerfile ; fi
if [ -z "$EXTRA_BUILD_ARGS" ]; then
echo -e ""
else
for buildArg in $EXTRA_BUILD_ARGS; do
if [ "$buildArg" == "--build-arg" ]; then
echo -e ""
else
BUILD_ARGS="${BUILD_ARGS} --opt build-arg:$buildArg"
fi
done
fi
# # Checking if buildctl is installed
# if which buildctl > /dev/null 2>&1; then
# buildctl --version
# else
# echo "Installing Buildkit builctl"
# curl -sL https://github.com/moby/buildkit/releases/download/v0.8.1/buildkit-v0.8.1.linux-amd64.tar.gz | tar -C /tmp -xz bin/buildctl && mv /tmp/bin/buildctl /usr/bin/buildctl && rmdir --ignore-fail-on-non-empty /tmp/bin
# buildctl --version
# fi
echo "Logging into regional IBM Container Registry"
ibmcloud cr region-set ${REGION}
ibmcloud cr login
#set -x
## DEPRECATED ibmcloud cr build -t ${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME}:${IMAGE_TAG} ${DOCKER_ROOT} -f ${DOCKER_FILE}
# buildctl --addr tcp://0.0.0.0:1234 build \
# --frontend dockerfile.v0 --opt filename=${DOCKER_FILE} --local dockerfile=${DOCKER_ROOT} \
# ${BUILD_ARGS} --local context=${DOCKER_ROOT} \
# --import-cache type=registry,ref=${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME} \
# --output type=image,name="${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME}:${IMAGE_TAG}",push=true
# set +x
#echo "Installing podman-1"
#echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/#/ /"
#curl -L "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/Release.key" | sudo apt-key add -
#sudo apt-get update
#sudo apt-get -y upgrade
#sudo apt-get -y install podman
#sudo apt-get install -y podman
echo "apt update"
apt update -y
echo "update was done successfully"
echo "apt upgrade"
apt upgrade -y
echo "upgrade was done successfully"
echo "adding kubic project repo-1"
. /etc/os-release
echo "adding kubic project repo-2"
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_18.04/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "adding kubic project repo-3"
curl -L "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_18.04/Release.key" | sudo apt-key add -
echo "adding kubic project repo-4"
echo "apt update-2"
apt update -y
echo "Installing podman"
sudo apt -y install podman
echo "podman installed successfully.."
# source /etc/os-release
# echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
# curl -L "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/Release.key" | sudo apt-key add -
# echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /"|sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
# echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS/ /"|sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION.list
# sudo apt-get install ca-certificates
# sudo apt-get update
# sudo apt update
# sudo apt -y install podman
# podman info
#sudo apt update
#sudo apt upgrade
#sudo apt install podman
echo "checking podman version-6"
podman --version
echo "installing iptables"
apt install -y -qq iptables
# echo "Displaying podman images-1"
# podman images
# echo "overlays-step-1"
# podman run -it --rm -v /var/run/containers/storage:/var/run/containers/storage -v /var/lib/containers/storage:/var/lib/containers/storage --storage-driver=overlay --privileged=true mine bash
# echo "overlays-step-2"
# echo "installing fuse-overlayfs"
# sudo apt install fuse-overlayfs -y
# echo "storage setting-up fuse-overlayfs"
# podman --storage-opt mount_program=/usr/bin/fuse-overlayfs
# echo "storage setting-up fuse-overlayfs-1"
# echo "pulling alpine image using podman"
# podman pull alpine
# echo "Displaying podman images-2"
# podman images
# echo 'Displaying the podman info | grep "store\:" -A25'
# podman info | grep "store\:" -A25
# echo 'Displayed the podman info | grep "store\:" -A25'
echo "Building image using podman"
#podman build -t ${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME}:${IMAGE_TAG} ${DOCKER_ROOT} -f ${DOCKER_FILE}
#podman build --device /dev/fuse:rw -t ${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME}:${IMAGE_TAG} ${DOCKER_ROOT} -f ${DOCKER_FILE} --log-level debug
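# vfs is used because overlay-on-overlay is typically unavailable when podman
# runs inside a container; vfs is slower but needs no special kernel support.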
sudo podman build --storage-driver vfs -t ${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME}:${IMAGE_TAG} ${DOCKER_ROOT} -f ${DOCKER_FILE} --log-level debug
#podman build --storage-driver vfs -t testin-image -f . --log-level debug
ibmcloud cr image-inspect ${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME}:${IMAGE_TAG}
# Set PIPELINE_IMAGE_URL for subsequent jobs in stage (e.g. Vulnerability Advisor)
export PIPELINE_IMAGE_URL="$REGISTRY_URL/$REGISTRY_NAMESPACE/$IMAGE_NAME:$IMAGE_TAG"
ibmcloud cr images --restrict ${REGISTRY_NAMESPACE}/${IMAGE_NAME}
######################################################################################
# Copy any artifacts that will be needed for deployment and testing to $WORKSPACE #
######################################################################################
echo "=========================================================="
echo "COPYING ARTIFACTS needed for deployment and testing (in particular build.properties)"
echo "Checking archive dir presence"
if [ -z "${ARCHIVE_DIR}" ]; then
echo -e "Build archive directory contains entire working directory."
else
echo -e "Copying working dir into build archive directory: ${ARCHIVE_DIR} "
mkdir -p ${ARCHIVE_DIR}
find . -mindepth 1 -maxdepth 1 -not -path "./$ARCHIVE_DIR" -exec cp -R '{}' "${ARCHIVE_DIR}/" ';'
fi
# Persist env variables into a properties file (build.properties) so that all pipeline stages consuming this
# build as input and configured with an environment properties file named 'build.properties'
# will be able to reuse the env variables in their job shell scripts.
# If already defined build.properties from prior build job, append to it.
cp build.properties $ARCHIVE_DIR/ || :
# IMAGE information from build.properties is used in Helm Chart deployment to set the release name
echo "IMAGE_NAME=${IMAGE_NAME}" >> $ARCHIVE_DIR/build.properties
echo "IMAGE_TAG=${IMAGE_TAG}" >> $ARCHIVE_DIR/build.properties
# REGISTRY information from build.properties is used in Helm Chart deployment to generate cluster secret
echo "REGISTRY_URL=${REGISTRY_URL}" >> $ARCHIVE_DIR/build.properties
echo "REGISTRY_NAMESPACE=${REGISTRY_NAMESPACE}" >> $ARCHIVE_DIR/build.properties
echo "GIT_BRANCH=${GIT_BRANCH}" >> $ARCHIVE_DIR/build.properties
echo "File 'build.properties' created for passing env variables to subsequent pipeline jobs:"
cat $ARCHIVE_DIR/build.properties
```
Dockerfile is:

```dockerfile
FROM alpine:latest
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
```
Can you make the container used for the build run in privileged mode?
@soukainakhalkhouli Adding a securityContext to your Tekton step should grant enough privilege:

```yaml
securityContext:
  privileged: true
```
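For reference, a minimal sketch of where that block sits in a Task step (the step name, image, and build command below are illustrative, not from this thread):

```yaml
steps:
  - name: build
    image: quay.io/podman/stable:latest  # illustrative; any image with podman works
    script: |
      podman --storage-driver=vfs build -t example:latest .
    securityContext:
      privileged: true  # grants the step full privileges on the node
```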
Yes @flouthoc @rhatdan, it's already used.
@soukainakhalkhouli Could you try `--security-opt seccomp=unconfined --cap-add all` with your `podman build` command? I think it's a seccomp issue.
I have tested with this command and I'm getting the same issue:

```bash
sudo podman build --storage-driver vfs --security-opt seccomp=unconfined --cap-add all -t ${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME}:${IMAGE_TAG} ${DOCKER_ROOT} -f ${DOCKER_FILE} --log-level debug
```
Could you also share the Tekton task YAML and the content of ${DOCKER_FILE} as well? Thanks.
Yes, sure, all files are below as attachments. Dockerfile:

```dockerfile
FROM alpine:latest
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
```
@soukainakhalkhouli I can't see any attached YAML for your Tekton task, could you check again please?
Task.yaml:
[task.txt](https://github.com/containers/podman/files/8370770/task.txt)
@flouthoc please check this link https://github.com/containers/podman/files/8370770/task.txt
A friendly reminder that this issue had no activity for 30 days.
The shared task contains BuildKit instead of Buildah, but I guess that was intentional. Could you try something like the following? I see no reason for this not to work.
```yaml
- name: build
  image: quay.io/buildah/stable:latest
  workingDir: $(workspaces.source.path)
  script: |
    buildah --storage-driver=vfs build \
      --no-cache -f <path-to-dockerfile> -t <image-tag> .
  volumeMounts:
    - name: varlibcontainers
      mountPath: /var/lib/containers
  securityContext:
    privileged: true
```
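A note on the suggestion above: `--storage-driver=vfs` avoids needing overlay or fuse support inside the build container (at the cost of speed and disk space), and the `/var/lib/containers` volume keeps layer storage off the container's own filesystem.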
A friendly reminder that this issue had no activity for 30 days.
Closing this since there was no reply to the suggested change, but please feel free to reopen or comment below.
While building images using the Tekton pipeline I'm getting this error:

```
Writing manifest to image destination
Storing signatures
time="2022-03-28T09:48:59Z" level=debug msg="Start untar layer"
time="2022-03-28T09:48:59Z" level=error msg="Error while applying layer: ApplyLayer exit status 1 stdout: stderr: Error creating mount namespace before pivot: operation not permitted"
time="2022-03-28T09:48:59Z" level=debug msg="error copying src image [\"docker://alpine:latest\"] to dest image [\"docker.io/library/alpine:latest\"] err: Error committing the finished image: error adding layer with blob \"sha256:3aa4d0bbde192bfaba75f2d124d8cf2e6de452ae03e55d54105e46b06eb8127e\": ApplyLayer exit status 1 stdout: stderr: Error creating mount namespace before pivot: operation not permitted"
time="2022-03-28T09:48:59Z" level=debug msg="error pulling image \"docker://alpine:latest\": Error committing the finished image: error adding layer with blob \"sha256:3aa4d0bbde192bfaba75f2d124d8cf2e6de452ae03e55d54105e46b06eb8127e\": ApplyLayer exit status 1 stdout: stderr: Error creating mount namespace before pivot: operation not permitted"
time="2022-03-28T09:48:59Z" level=debug msg="unable to pull and read image \"docker.io/library/alpine:latest\": Error committing the finished image: error adding layer with blob \"sha256:3aa4d0bbde192bfaba75f2d124d8cf2e6de452ae03e55d54105e46b06eb8127e\": ApplyLayer exit status 1 stdout: stderr: Error creating mount namespace before pivot: operation not permitted"
time="2022-03-28T09:48:59Z" level=error msg="unable to write build event: \"write unixgram @0c223->/run/systemd/journal/socket: sendmsg: no such file or directory\""
Error: error creating build container: Error committing the finished image: error adding layer with blob "sha256:3aa4d0bbde192bfaba75f2d124d8cf2e6de452ae03e55d54105e46b06eb8127e": ApplyLayer exit status 1 stdout: stderr: Error creating mount namespace before pivot: operation not permitted
```
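For what it's worth, the operation failing in this log is creating a mount namespace before pivot_root during layer extraction. A commonly documented workaround when that is blocked, offered here only as an untested sketch (it was not tried in this thread), is to force chroot isolation so no mount namespace is needed:

```bash
# Sketch only: chroot isolation avoids the mount-namespace/pivot_root path.
# podman build supports --isolation (it also honors the BUILDAH_ISOLATION env var).
podman build --isolation chroot --storage-driver vfs \
  -t ${REGISTRY_URL}/${REGISTRY_NAMESPACE}/${IMAGE_NAME}:${IMAGE_TAG} \
  -f ${DOCKER_FILE} ${DOCKER_ROOT}
```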