@TomSweeneyRedHat PTAL
@grantral please correct me if necessary, but this appears to be part of the Docker buildkit functionality. @rhatdan is this something we should try adding at this point to handle this particular scenario, or should we include it in whatever work would be necessary to provide the buildkit functionality? Note, I'm practically illiterate as far as buildkit goes; I don't know a lot about it.
@TomSweeneyRedHat
> but this appears to be part of the Docker buildkit functionality

Yep.
Sure, we could grab it if anyone had time to work on it. It would be best if the community could open PRs to add this feature.
A friendly reminder that this issue had no activity for 30 days.
@flouthoc PTAL
Thanks, I'll take a look.
@flouthoc any progress?
Sorry, I was not able to take a look. I'll take a look in the coming days.
I think we would need a design change for storing and processing stages. AFAIK we don't have an easy way to identify indirect dependencies of stages in a multi-stage build. We would need to store and process stages in a DAG (directed acyclic graph) or some sort of dependency tree. We could then evaluate each stage in the DAG concurrently and skip the ones which don't lead up to the target, whether directly or indirectly; see the sketch below. This is just my early proposal, and I think buildkit does the same. I was not able to think of a simpler or more efficient solution than this. https://github.com/moby/moby/issues/32550
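A minimal sketch of the pruning step in Go, using made-up stage types rather than buildah's actual internals (assumption: each stage's dependencies, collected from FROM and COPY --from references, are already known):

package main

import "fmt"

// stage is a hypothetical stand-in for a parsed build stage.
type stage struct {
	name string
	deps []string // stages this stage reads from (FROM, COPY --from)
}

// neededStages walks back from target and marks every stage it
// transitively depends on; everything left unmarked can be skipped.
func neededStages(stages []stage, target string) map[string]bool {
	byName := make(map[string]stage, len(stages))
	for _, s := range stages {
		byName[s.name] = s
	}
	needed := make(map[string]bool)
	var visit func(string)
	visit = func(name string) {
		if needed[name] {
			return // already marked; stages form a DAG, so this terminates
		}
		needed[name] = true
		for _, dep := range byName[name].deps {
			visit(dep)
		}
	}
	visit(target)
	return needed
}

func main() {
	stages := []stage{
		{name: "base"},
		{name: "tools", deps: []string{"base"}}, // not needed by backend
		{name: "deps", deps: []string{"base"}},
		{name: "backend", deps: []string{"deps"}},
	}
	needed := neededStages(stages, "backend")
	for _, s := range stages {
		if needed[s.name] {
			fmt.Println("build:", s.name)
		} else {
			fmt.Println("skip: ", s.name)
		}
	}
}

Stages left unmarked would never be built, and the marked ones could in principle be scheduled concurrently once their own dependencies have finished.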
@vrothberg @nalind @rhatdan @giuseppe @mtrmac Any suggestions?
I'd describe this as a bug.
Parallelization is obviously a nice-to-have feature, but that's (probably) missing the point of this issue. At the very least, this difference in behavior is currently a blocker for us to migrate from docker to buildah.
I imagine it would be complex to add support for parallel stages, but surely it wouldn't be particularly problematic to pre-compute the dependency graph and omit unnecessary stages? See the example below.
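For illustration, a hypothetical Containerfile where the backend target needs deps directly and base indirectly, while tools is not needed at all:

FROM alpine AS base
RUN echo base

FROM base AS tools
RUN echo tools

FROM base AS deps
RUN echo deps

FROM alpine AS backend
COPY --from=deps /etc/alpine-release /

With the dependency graph pre-computed, buildah bud --target backend would build base, deps, and backend, and skip tools.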
@flouthoc any updates on this? I'm entirely unfamiliar with the codebase, but I might have a crack at implementing a (simple) fix, unless there's something in progress.
@joeycumines Sure! Could you please share your approach?
@grantral Thanks, this will be out in the next buildah release.
This is not in https://github.com/containers/buildah/releases/tag/v1.26.2 so I assume it will be in the next minor release? v1.27.0?
@lucacome Yes, buildah 1.26.2 does not contain this feature; it should be supported in v1.27.0. I think there is a plan to release it soon, but @rhatdan or @TomSweeneyRedHat could confirm this better.
Yes, this will be released in the next couple of weeks; by August, definitely. Podman rc1 went out this week. We will cut a release of Buildah as soon as we successfully do the vendor dance and merge buildah into Podman.
I've installed podman 4.2.0, which is supposed to include buildah 1.27.0 with the changes from this issue, but the behavior is still the same. Am I missing something? Should I open a new issue?
@lucacome Works fine for me; see below that the first stage is skipped entirely in the build. Please confirm that you are using the right version. Could you share your Containerfile and what you expect to see in the build output?
[root@fedora bin]# cat Dockerfile
FROM alpine
RUN echo hello
FROM alpine
RUN echo world
[root@fedora bin]# ./podman build --no-cache -t test .
[2/2] STEP 1/2: FROM alpine
[2/2] STEP 2/2: RUN echo world
world
[2/2] COMMIT test
--> 771f01f08fa
Successfully tagged localhost/test:latest
771f01f08fa20cfd1359558121eafe598541f14264c5d5700866c8587e473fc0
[root@fedora bin]# ./podman version
Client: Podman Engine
Version: 4.2.0
API Version: 4.2.0
Go Version: go1.18.3
Git Commit: 7fe5a419cfd2880df2028ad3d7fd9378a88a04f4
Built: Fri Aug 12 09:09:37 2022
OS/Arch: linux/amd64
[root@fedora bin]#
Can we move this behavior behind a flag? We have a use case where we want to build all stages so that their images are available for manifests/deployment, but buildah now skips the stages unused by the target, which breaks our builds.
@SaurabhAhuja1983 Sure, this was discussed somewhere before as well. Would --skip-unused-stage=false work for you? Could you create a new issue for this?
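Hypothetical usage, assuming the flag lands under the name proposed above (the final flag name in a released buildah may differ; check the buildah-bud documentation):

buildah bud --skip-unused-stage=false --target backend .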
Created a new issue: https://github.com/containers/buildah/issues/4243. Thank you @flouthoc for quickly looking into it, and I would appreciate it if it can be fixed with priority.
Description

docker/cli#1134

Steps to reproduce the issue:

buildah bud --target backend

Describe the results you received:

Describe the results you expected: docker/cli#1134 (comment)

Output of rpm -q buildah or apt list buildah:

Output of buildah version:

Output of podman version if reporting a podman build issue:

Output of cat /etc/release:

Output of uname -a:

Output of cat /etc/containers/storage.conf:

Output of cat Dockerfile: