It is possible to add support for podman by running it as a service with podman system service and adding detection of the podman socket: https://github.com/containers/libpod/issues/4499#issuecomment-609893190 The socket is meant to be Docker-compatible.
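A rough sketch of what that might look like (untested; assumes a rootless podman new enough to expose the Docker-compatible API on its socket):
podman system service --time=0 &
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
pack build myapp --builder heroku/buildpacks:18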
However, for other builders, such as LXD, a more generalized interface could be used. Canonical maintains https://snapcraft.io/multipass as a generalized interface for running containers and VMs.
Pack, although not depending on docker build per this comment, does require Docker to be running on your system.
When you want to run pack as part of your CI/CD process, or for any other reason (learning purposes), you might run it in a container on a Kubernetes platform, and in order to run it you need to expose the Docker socket of the host machine, making the whole platform insecure.
Building containers should be a secure process that does not compromise your system in any possible way.
The pack CLI is intended to be a tool for running Cloud Native Buildpack builds on a local workstation that doesn't natively support containers (often Windows or macOS). While it seems reasonable to support other local container runtimes besides Docker, CI platforms that support running container images are probably better off running the lifecycle directly, without needing a nested container runtime. (This doesn't require any privileges or capabilities.)
Here's a complex example of this for Tekton: https://github.com/buildpacks/tekton-catalog/blob/master/buildpacks/buildpacks-v3.yaml
Relevantly, we've recently introduced a single lifecycle command that threads all of those steps together: https://github.com/buildpacks/rfcs/blob/master/text/0026-lifecycle-all.md
When that functionality is documented, it should make running CNB builds on CI platforms much easier.
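As a rough sketch (assuming a builder image that ships the lifecycle under /cnb/lifecycle and registry credentials in ~/.docker/config.json; the image name is a placeholder), a CI step could boil down to running the builder image and invoking:
/cnb/lifecycle/creator -app=. registry.example.com/myorg/myapp:latest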
Using kpack on the platform can be an alternative, although AFAIK it can have the same security considerations (or lack thereof).
Kpack uses the lifecycle directly, and doesn't depend on a Docker daemon or expose a Docker socket. Builds run in unprivileged containers that are fully isolated from registry credentials.
Another way of describing this is: the lifecycle is comparable to kaniko or other unprivileged image building tools. The pack CLI is glue code that makes it easy to use lifecycle with the Docker daemon. We could expand the functionality of the pack CLI so that it acts as glue code for other container runtimes, but that glue code is only necessary when containers are not natively accessible already. Maybe that's a good idea, but I'd like to hear concrete use cases first.
@sclevine building random projects on a local workstation without containerization has a risk of killing the system or build environment. If pack does this, it is insecure by design.
building random projects on a local workstation without containerization has a risk of killing the system or build environment. If pack does this, it is insecure by design.
This is not what I'm suggesting (or permitted by the CNB specification). Running the lifecycle directly is only supported on CI platforms that support running container images (such as a CNB builder image with the lifecycle binary).
I'm suggesting that supporting another container runtime would only benefit desktop Linux users and users of non-container CI systems. That doesn't match the requested use case:
When you want to run pack as part of your CI/CD process, or for any other reason (learning purposes), you might run it in a container on a Kubernetes platform, and in order to run it you need to expose the Docker socket of the host machine, making the whole platform insecure.
@sclevine sorry, but your assumption that supporting another container runtime would benefit users of non-container CI systems contains a logical error, to me. I also don't see the connection to desktop Linux, which is about having Gnome or another WM.
If you want to say that, as a DevOps engineer, I should not have the ability to use buildpacks on my Linux machine, and should only do this in a self-hosted or vendor cloud, then I disagree. The system should be simple enough to troubleshoot in parts, gradually.
The last part of the requested use case mentions my system explicitly.
Building containers should be a secure process that does not compromise your system in any possible way.
There are currently two ways to use the tooling provided by the Cloud Native Buildpacks project:
1. With the pack CLI. This uses Docker (purely as a container runtime, without using docker build) to run a builder image (which contains the lifecycle). Docker is available and easy to install on macOS, Windows, and Linux.
2. Without the pack CLI, by executing a builder image directly on a platform that can already run containers (like k8s). Tekton, kpack, and concourse use this strategy. It does not require Docker or privileged containers.
While I imagine that we would welcome contributions to the pack CLI to add support for alternative container runtimes (like podman), those alternative container runtimes aren't easy to use on macOS or Windows. Additionally, platforms that support running container images natively (like k8s) wouldn't benefit from it, because they can already do what pack does (run builder images). Running pack inside of a container (which creates nested containers) is unnecessary and decreases performance. The lifecycle can run directly in that container instead.
Therefore, as far as I can tell, only Linux users who don't want to build using Docker or K8s would benefit from support for additional runtimes in the pack CLI. I'm not opposed to it, but I'm also not about to implement it myself 😄
Yes, I am interested in the pack CLI in option 1 allowing more secure alternatives to Docker.
With the pack CLI I can reuse CI/CD container pipelines that are simpler to maintain than verbose k8s configs for every step.
While I imagine that we would welcome contributions to the pack CLI to add support for alternative container runtimes (like podman), those alternative container runtimes aren't easy to use on macOS or Windows.
@sclevine Well, VMware is working on a replacement for Docker Desktop (Windows/Mac) called Nautilus (https://vmwarefusion.github.io/). In essence, what you're saying is that any user that decides to adopt VMware's technology for containers/VMs on the Desktop will not be able to work with pack or develop a CNB locally? I hope that Nautilus provides a docker.socket and a transparent connection to it from your local laptop :-(
Well, VMware is working on a replacement for Docker Desktop (Windows/Mac) called Nautilus (https://vmwarefusion.github.io/). In essence, what you're saying is that any user that decides to adopt VMware's technology for containers/VMs on the Desktop will not be able to work with pack or develop a CNB locally?
While I can't speak for the other core team members, I imagine that we would welcome contributions to make the pack CLI compatible with Nautilus (or, as I mentioned, other alternative container runtimes).
To be clear, given that pack's only job is to interface with the container runtime and run the lifecycle, there is no way to implement it generically to support any container runtime. The lifecycle is the generic component. So support for, e.g., Nautilus would need to be added to pack explicitly. Are you interested in submitting a PR for it?
With the pack CLI I can reuse CI/CD container pipelines that are simpler to maintain than verbose k8s configs for every step.
I don't believe that setting up a CI/CD pipeline that uses pack to keep containers up-to-date is easier than using kpack. You would need to monitor for changes to a number of upstream resources (buildpacks, stack run images, stack build images, source code). A simple pipeline that uses the pack CLI might beat most Dockerfile-based pipelines, but you'd lose the stronger security guarantee that, e.g., kpack provides.
@zmackie podman provides the Docker API over the /run/user/$UID/podman/podman.sock socket. But I cannot find where in the pack source this path can be detected, or set explicitly through a config.
$ pack build myapp --builder heroku/buildpacks:18 -v
Pulling image index.docker.io/heroku/buildpacks:18
ERROR: failed to fetch builder image 'index.docker.io/heroku/buildpacks:18': Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
https://github.com/buildpacks/pack/issues/413#issuecomment-609900787
whoops!
@abitrolly I think setting DOCKER_HOST to the podman socket location should just work, assuming podman provides the same daemon API as Docker. I don't think anyone has tested this though.
we're in the process of testing this. cc @grahamdumpleton Will update here with our findings
It failed is all I can say:
# pack build sample-java-app --path sample-java-app
44cc64492fb6a6d78d3e6d087f380ae6e479aa1b2c79823b32cdacfcc2f3d715: pulling image () fERROR: invalid builder 'cloudfoundry/cnb:bionic': builder index.docker.io/cloudfoundry/cnb:bionic missing label io.buildpacks.builder.metadata -- try recreating builder
It is hard for me to take it any further at this point since I don't understand enough about either the podman socket support or the process by which pack works.
The fact that I am doing this from inside a container may also be complicating things. It really should be tested directly on a full Fedora operating system initially.
@GrahamDumpleton can you try specifying a builder?
pack build sample-java-app -B cnbs/sample-builder:alpine --path sample-java-app
The builder was already set previously using:
# pack set-default-builder cloudfoundry/cnb:bionic
Builder cloudfoundry/cnb:bionic is now the default builder
Using a different builder on the command line makes no difference.
# pack build sample-java-app -B cnbs/sample-builder:alpine --path sample-java-app
93b31bfcf2537f44dad74107cd5ad9beae36e1e769f653c30847bb045bb85e12: pulling image () fERROR: invalid builder 'cnbs/sample-builder:alpine': builder index.docker.io/cnbs/sample-builder:alpine missing label io.buildpacks.builder.metadata -- try recreating builder
@sclevine specifying DOCKER_HOST works for connecting to podman.
podman system service &
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
pack build myapp --builder heroku/heroku-buildpack-ruby -v
However, pack then fails with the same error as @GrahamDumpleton mentioned.
$ pack build myapp --builder heroku/buildpacks:18 -v
Pulling image index.docker.io/heroku/buildpacks:18
8d10618c5b3b5b560c75e0353572b843e3a0d958eb3c6ff452519a7f7be5ea55: pulling image () from docker.io/heroku/buildpacks:18
ERROR: invalid builder 'heroku/buildpacks:18': builder index.docker.io/heroku/buildpacks:18 missing label io.buildpacks.builder.metadata -- try recreating builder
I haven't got the environment set up to check again myself, but if you run podman images, is the builder image actually there? If yes, can you inspect it to see what labels it does have set?
@GrahamDumpleton the image is there. Here are the labels.
$ podman inspect docker.io/heroku/buildpacks:18
...
"Labels": {
"io.buildpacks.builder.metadata": "{\"description\":\"\",\"buildpacks\":[{\"id\":\"heroku/maven\",\"version\":\"0.1\"},{\"id\":\"heroku/jvm\",\"version\":\"0.1\"},{\"id\":\"heroku/ruby\",\"version\":\"0.0.1\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\"},{\"id\":\"heroku/python\",\"version\":\"0.1.2\"},{\"id\":\"heroku/gradle\",\"version\":\"0.1.2\"},{\"id\":\"heroku/scala\",\"version\":\"0.1.2\"},{\"id\":\"heroku/php\",\"version\":\"0.1.2\"},{\"id\":\"heroku/go\",\"version\":\"0.1.2\"},{\"id\":\"heroku/nodejs-engine\",\"version\":\"0.4.3\"},{\"id\":\"heroku/nodejs-npm\",\"version\":\"0.1.4\"},{\"id\":\"heroku/nodejs-yarn\",\"version\":\"0.0.1\"}],\"stack\":{\"runImage\":{\"image\":\"heroku/pack:18\",\"mirrors\":null}},\"lifecycle\":{\"version\":\"0.6.1\",\"api\":{\"buildpack\":\"0.2\",\"platform\":\"0.2\"}},\"createdBy\":{\"name\":\"Pack CLI\",\"version\":\"v0.9.0 (git sha: d42c384a39f367588f2653f2a99702db910e5ad7)\"}}",
"io.buildpacks.buildpack.layers": "{\"heroku/go\":{\"0.1.2\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"}],\"layerDiffID\":\"sha256:8728779d674e06126ecede7af51a209d9d2c72577e71225acceda18e49c5515d\"}},\"heroku/gradle\":{\"0.1.2\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"}],\"layerDiffID\":\"sha256:342156a961934502ac4881585091c1538f1b9f0ad4d1df1ff8e2b76ddb62c4ce\"}},\"heroku/jvm\":{\"0.1\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"},{\"id\":\"io.buildpacks.stacks.bionic\"}],\"layerDiffID\":\"sha256:510b0e4d3fe6d3d68fc862ed098eed2c1042cc1e3348c81393acd4119f1ed381\"}},\"heroku/maven\":{\"0.1\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"}],\"layerDiffID\":\"sha256:fa0249cc869733cbf2ecbc9a266ed818c7b440f59f2e1cabef8a4f514a819126\"}},\"heroku/nodejs-engine\":{\"0.4.3\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"},{\"id\":\"io.buildpacks.stacks.bionic\"}],\"layerDiffID\":\"sha256:445c10f941efec2279767c320afb497e7ef85dd5c919a40ecb9d8bcac9826009\"}},\"heroku/nodejs-npm\":{\"0.1.4\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"},{\"id\":\"io.buildpacks.stacks.bionic\"}],\"layerDiffID\":\"sha256:6db38654c28768fd61d817dc4c5dd0843390d3995b8a80da761c5de7d86fc2e9\"}},\"heroku/nodejs-yarn\":{\"0.0.1\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"}],\"layerDiffID\":\"sha256:a526227220f571466890bbfc2fb7720587251339b50a574fe9b2f1e43b99a6e8\"}},\"heroku/php\":{\"0.1.2\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"}],\"layerDiffID\":\"sha256:3682dbc263721ff0594905ac825447dcc4df98835f87182ad8670677340c8f04\"}},\"heroku/procfile\":{\"0.5\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"},{\"id\":\"io.buildpacks.stacks.bionic\"}],\"layerDiffID\":\"sha256:630571793248869ed92fe7d0b6afc055204fd634cd627f3318f4bdbc9627ceb7\"}},\"heroku/python\":{\"0.1.2\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"}],\"layerDiffID\":\"sha256:6a5e32362fc1d3c4a493fcd0ec2ad09256bebdf667d1d8147ee806c7a522112a\"}},\"heroku/ruby\":{\"0.0.1\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"}],\"layerDiffID\":\"sha256:f4220c9fb652014fde63510985106325d4cffc74a9355d2af162fbde9c6da4a2\"}},\"heroku/scala\":{\"0.1.2\":{\"api\":\"0.2\",\"stacks\":[{\"id\":\"heroku-18\"}],\"layerDiffID\":\"sha256:233d963fddf3390e58ed55220d6f5420976f7e39ba2a29b1f759171855544d80\"}}}",
"io.buildpacks.buildpack.order": "[{\"group\":[{\"id\":\"heroku/ruby\",\"version\":\"0.0.1\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\",\"optional\":true}]},{\"group\":[{\"id\":\"heroku/python\",\"version\":\"0.1.2\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\",\"optional\":true}]},{\"group\":[{\"id\":\"heroku/jvm\",\"version\":\"0.1\"},{\"id\":\"heroku/maven\",\"version\":\"0.1\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\",\"optional\":true}]},{\"group\":[{\"id\":\"heroku/gradle\",\"version\":\"0.1.2\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\",\"optional\":true}]},{\"group\":[{\"id\":\"heroku/scala\",\"version\":\"0.1.2\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\",\"optional\":true}]},{\"group\":[{\"id\":\"heroku/php\",\"version\":\"0.1.2\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\",\"optional\":true}]},{\"group\":[{\"id\":\"heroku/go\",\"version\":\"0.1.2\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\",\"optional\":true}]},{\"group\":[{\"id\":\"heroku/nodejs-engine\",\"version\":\"0.4.3\"},{\"id\":\"heroku/nodejs-yarn\",\"version\":\"0.0.1\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\",\"optional\":true}]},{\"group\":[{\"id\":\"heroku/nodejs-engine\",\"version\":\"0.4.3\"},{\"id\":\"heroku/nodejs-npm\",\"version\":\"0.1.4\"},{\"id\":\"heroku/procfile\",\"version\":\"0.5\",\"optional\":true}]}]",
"io.buildpacks.stack.id": "heroku-18",
"io.buildpacks.stack.mixins": "null"
},
...
I could find out which requests are being sent to podman and repeat them with curl. The bug is most likely in podman 1.8.2, whose Docker API doesn't return labels the way the podman inspect command does.
$ podman system service --log-level debug
...
DEBU[0015] APIHandler -- Method: POST URL: /v1.38/images/create?fromImage=heroku%2Fbuildpacks&tag=18 (conn 0/0)
DEBU[0015] parsed reference into "[overlay@/home/anatoli/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]docker.io/heroku/buildpacks:18"
DEBU[0015] APIHandler -- Method: GET URL: /v1.38/images/index.docker.io/heroku/buildpacks:18/json (conn 0/1)
DEBU[0015] parsed reference into "[overlay@/home/anatoli/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]docker.io/heroku/buildpacks:18"
DEBU[0015] parsed reference into "[overlay@/home/anatoli/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]@c533962c38b1b71b08ff03d07119d9d63f82d03192076016743cdde9d79fbd70"
DEBU[0015] exporting opaque data as blob "sha256:c533962c38b1b71b08ff03d07119d9d63f82d03192076016743cdde9d79fbd70"
DEBU[0020] APIServer.Shutdown called 2020-04-19 07:46:17.331984794 +0300 +03 m=+20.612378751, conn 0/2
$ curl -sS --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock http:/v1.38/images/index.docker.io/heroku/buildpacks:18/json | jq . | grep Labels
"Labels": null
"Labels": null
+1 to this, I was already thinking about how pack could be integrated into my soon-to-be Kubernetes-run CI system, and other Kubernetes-based CD systems.
@sclevine Have also been hoping to run pack in contexts without Docker daemons, specifically CI, because the approach of spinning up an entire cluster with kpack and configuring that entire workflow seems relatively heavy-handed compared to just running a Concourse task that does pack build.
Without the pack CLI, by executing a builder image directly on a platform that can already run containers (like k8s). Tekton, kpack, and concourse use this strategy. It does not require Docker or privileged containers.
I have been poking around with trying to do this with the cloudfoundry/cnb:bionic image; however, I noticed there is one significant omission: the exporter from the buildpack lifecycle only allows exporting to a Docker daemon or directly uploading to a remote registry for us: https://github.com/buildpacks/lifecycle/blob/5be3695ca4f67a7b512b1962407dd283146abce3/cmd/lifecycle/exporter.go#L176-L191.
The latter is the desired end result, of course; however, this would break a lot of Concourse flows since we lose the ability to track resource versioning via explicit resources that we put to perform the upload.
I would imagine other [CI] users would like to have the option to also just simply export to a tarball. Do you have any suggestions for handling this then? Or should I try something else that is to the same effect as "executing a builder image directly"?
I can also open an issue about this potential feature request in the lifecycle repo if you'd like.
Not only that, but eventually not requiring Docker could also help improve the lifecycle in terms of build speed, artifact caching, rebase, etc.
@jspawar I think we would welcome a contribution to the lifecycle that allows exporting an OCI image to tar format on disk. You could simulate this right now by spinning up a local registry in the container and pulling the image to disk, but I agree that it would be a nice feature when you're using the builder directly in concourse / other CI.
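A rough sketch of that workaround (assuming the registry binary and skopeo happen to be available in the build container; the image names are illustrative):
# throwaway local registry inside the build container
registry serve /etc/docker/registry/config.yml &
# ...have the lifecycle export the image to localhost:5000/myapp:latest...
# then pull it down to disk as an OCI archive
skopeo copy --src-tls-verify=false docker://localhost:5000/myapp:latest oci-archive:myapp.tar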
Just FYI, we've made the workflow you're describing much easier recently with the lifecycle creator binary, which runs through all the steps automatically without needing the ephemeral data files.
@jspawar kpack is a Docker-less CNB platform for k8s.
@jorgemoralespou The lifecycle already runs efficiently without Docker on platforms that natively provide a container runtime. But like I said, I think we'd be happy to merge support for podman, etc. to support VM-based CI / Linux workstation use cases. 😄
@sclevine a sequence diagram with the API calls employed in building an image would help to estimate the effort required to add podman support, instead of waiting for a full Docker API compatibility layer to land in podman.
CC: @jromero
All, FYI, we have an issue open on the Podman side. I just tested with the latest version of Podman in Fedora (podman 2.1.1) and we still have the lack of an archive method blocking us. But I wanted to say that this is on our radar, and building up and stabilizing the Docker-compatible interface is high on our priority list. I can't commit to a timeline, but I'm investigating adding the Pack CLI to RHEL 8/9, so we'll be doing more research over the coming months. @jorgemoralespou thanks for submitting this issue. We are interested from our side.
Given that this issue was a little broad to begin with, I'm going to close it in favor of what did come out of it. Pack now supports podman via the docker socket interface. Any alternative to Docker that supports the docker socket interface should also work.
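For reference, a minimal way to try it (assuming a podman version that ships the socket-activated API service; the builder is just an example):
systemctl --user enable --now podman.socket
export DOCKER_HOST="unix://$(podman info -f '{{.Host.RemoteSocket.Path}}')"
pack build myapp --builder paketobuildpacks/builder:base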
Maybe others also come along here looking for a solution to the initially mentioned:
There are many users that are starting to not have Docker installed on their systems because there are other alternatives that let them create containers in a secure way, as they typically run these containers on remote systems (e.g. Kubernetes clusters). [...] Pack, although not depending on docker build [...] does require Docker to be running on your system.
We have a GitLab CI connected to an EKS / K8s cluster with Kubernetes executors/runners, where we don't have docker inside the build pods/containers - nor do we want to mount the docker socket /var/run/docker.sock or use the Docker-in-Docker (dind) approach, for security reasons. We desperately searched for a solution, always keeping the quote from this comment in mind:
If you're looking to build images in CI (not locally), I'd encourage you to use the lifecycle directly for that, so that you don't need Docker. Here's an example: https://github.com/tektoncd/catalog/blob/master/buildpacks/buildpacks-v3.yaml
So here's our interpretation/solution to the problem, simply using the "lifecycle directly" (here's the full story on Stack Overflow), in our .gitlab-ci.yml (it should work quite similarly on other CI systems):
image: paketobuildpacks/builder
stages:
- build
# We somehow need to access GitLab Container Registry with the Paketo lifecycle
# So we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
- mkdir ~/.docker
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json
build-image:
stage: build
script:
- /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
Hope this is of help 😃
An awesome writeup at SO. Deserves to be a blog post.
Great idea, will write one 😉 Done: https://blog.codecentric.de/en/2021/10/gitlab-ci-paketo-buildpacks/
Great blog @jonashackt! I implemented it exactly as you describe, but sadly I am getting this with my Spring Boot application:
ERROR: failed to launch: determine start command: process type web was not found
Your solution @jonashackt works really nicely. It gets a bit more tricky when you need to pass Maven build arguments. I managed to add the Maven arguments like this:
`- echo "-Dmaven.test.skip=true --no-transfer-progress package spring-boot:repackage" >> platform/env/BP_MAVEN_BUILD_ARGUMENTS`
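A generalized sketch of that pattern (assuming the creator's platform directory is passed via -platform; the variable values are only examples), e.g. as additional script lines in the .gitlab-ci.yml:
- mkdir -p platform/env
- echo -n "-Dmaven.test.skip=true --no-transfer-progress package" > platform/env/BP_MAVEN_BUILD_ARGUMENTS
- echo -n "17" > platform/env/BP_JVM_VERSION
- /cnb/lifecycle/creator -app=. -platform=platform $CI_REGISTRY_IMAGE:latest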
@jonashackt hey, thanks a lot for the solution. is there a way to pass BP env variables to the build?
@jonashackt hey, thanks a lot for the solution. is there a way to pass BP env variables to the build?
Did you see my reply? I posted how to pass an env, but I have to tell you I'm afraid it doesn't work with all the variables.
Description
There are many users that are starting to not have Docker installed on their systems because there are other alternatives that let them create containers in a secure way, as they typically run these containers on remote systems (e.g. Kubernetes clusters). Some of these alternatives are:
Pack, although not depending on docker build per this comment, does require Docker to be running on your system. When you want to run pack as part of your CI/CD process, or for any other reason (learning purposes), you might run it in a container on a Kubernetes platform, and in order to run it you need to expose the Docker socket of the host machine, making the whole platform insecure.
Building containers should be a secure process that does not compromise your system in any possible way.
Proposed solution
Provide a mechanism to replace, or have an alternative to, using Docker to build images.
Describe alternatives you've considered
Using kpack on the platform can be an alternative, although AFAIK it can have the same security considerations (or lack thereof).